
Tag: AIpowered

  • Banglalink launches AI-powered lifestyle pack RYZE


    Jahrat Adib Chowdhury, chief legal officer of Banglalink, speaks while unveiling a new digital lifestyle prepaid package, styled “RYZE”, at Aloki on Tejgaon-Gulshan Link Road in the capital recently. Photo: Banglalink


    Banglalink has launched a new digital lifestyle prepaid package, styled “RYZE”, aimed at empowering young users with a dynamic digital experience.

    Aligning with the government’s vision to support youth through upskilling opportunities, the new pack provides AI-powered productivity tools for self-development.

    The launch event was held at Aloki on Tejgaon-Gulshan Link Road in the capital recently, the mobile operator said in a press release.

    Under this package, RYZE offers a unique feature of “endless internet”, where all data packs ensure continued internet availability even if the purchased volume is consumed before the pack’s expiry.

    Kaan Terzioglu, group chief executive officer of VEON, said, “VEON is committed to empowering individuals through technology that not only connects them but also enhances their potential. Our augmented intelligence 1440 (AI1440) strategy is designed to offer customers access to AI-powered services that are relevant during every minute of the day.”

    “This approach prioritises digital solutions for professional and personal life, with super apps playing an important role in this vision.”

    “With RYZE, we aim to contribute to building a digitally connected Bangladesh filled with innovative minds,” he added.

    Erik Aas, chief executive officer of Banglalink, said, “RYZE exemplifies our commitment to empowering the youthful communities we serve through seamless digital solutions that meet the unique needs of the next generation.

    “By integrating AI-driven productivity tools, RYZE delivers a comprehensive lifestyle experience, from skill development to entertainment, ensuring it resonates with the digital-first generation,” Aas added.

    RYZE is available to all mobile users across any network.

    It also aims to enhance entertainment options by providing easy access to a wide range of streaming services, digital content and gamification features, making it an ideal choice for the country’s dynamic youth.

    Huseyin Turker, chief technology and information officer of the mobile operator, Muhammad Mahbub Islam, internal audit director, Jahrat Adib Chowdhury, chief legal officer, Taimur Rahman, chief corporate and regulatory affairs officer, and Muniruzzaman Sheikh, chief ethics and compliance officer, along with key leaders of the operator, prominent guests, influencers, students and business partners were also present.





  • Researchers say an AI-powered transcription tool used in hospitals invents things no one ever said


    SAN FRANCISCO — Tech behemoth OpenAI has touted its artificial intelligence-powered transcription tool Whisper as having near “human level robustness and accuracy.”

    But Whisper has a major flaw: It is prone to making up chunks of text or even entire sentences, according to interviews with more than a dozen software engineers, developers and academic researchers. Those experts said some of the invented text — known in the industry as hallucinations — can include racial commentary, violent rhetoric and even imagined medical treatments.

    Experts said that such fabrications are problematic because Whisper is being used in a slew of industries worldwide to translate and transcribe interviews, generate text in popular consumer technologies and create subtitles for videos.

    More concerning, they said, is a rush by medical centers to utilize Whisper-based tools to transcribe patients’ consultations with doctors, despite OpenAI’s warnings that the tool should not be used in “high-risk domains.”

    The full extent of the problem is difficult to discern, but researchers and engineers said they frequently have come across Whisper’s hallucinations in their work. A University of Michigan researcher conducting a study of public meetings, for example, said he found hallucinations in eight out of every 10 audio transcriptions he inspected, before he started trying to improve the model.

    A machine learning engineer said he initially discovered hallucinations in about half of the over 100 hours of Whisper transcriptions he analyzed. A third developer said he found hallucinations in nearly every one of the 26,000 transcripts he created with Whisper.

    The problems persist even in well-recorded, short audio samples. A recent study by computer scientists uncovered 187 hallucinations in over 13,000 clear audio snippets they examined.

    That trend would lead to tens of thousands of faulty transcriptions over millions of recordings, researchers said.

    Such mistakes could have “really grave consequences,” particularly in hospital settings, said Alondra Nelson, who led the White House Office of Science and Technology Policy for the Biden administration until last year.

    “Nobody wants a misdiagnosis,” said Nelson, a professor at the Institute for Advanced Study in Princeton, New Jersey. “There should be a higher bar.”

    Whisper also is used to create closed captioning for the Deaf and hard of hearing — a population at particular risk for faulty transcriptions. That’s because the Deaf and hard of hearing have no way of identifying fabrications that are “hidden amongst all this other text,” said Christian Vogler, who is deaf and directs Gallaudet University’s Technology Access Program.

    The prevalence of such hallucinations has led experts, advocates and former OpenAI employees to call for the federal government to consider AI regulations. At minimum, they said, OpenAI needs to address the flaw.

    “This seems solvable if the company is willing to prioritize it,” said William Saunders, a San Francisco-based research engineer who quit OpenAI in February over concerns with the company’s direction. “It’s problematic if you put this out there and people are overconfident about what it can do and integrate it into all these other systems.”

    An OpenAI spokesperson said the company continually studies how to reduce hallucinations and appreciated the researchers’ findings, adding that OpenAI incorporates feedback in model updates.

    While most developers assume that transcription tools misspell words or make other errors, engineers and researchers said they had never seen another AI-powered transcription tool hallucinate as much as Whisper.

    The tool is integrated into some versions of OpenAI’s flagship chatbot ChatGPT, and is a built-in offering in Oracle and Microsoft’s cloud computing platforms, which service thousands of companies worldwide. It is also used to transcribe and translate text into multiple languages.

    In the last month alone, one recent version of Whisper was downloaded over 4.2 million times from open-source AI platform HuggingFace. Sanchit Gandhi, a machine-learning engineer there, said Whisper is the most popular open-source speech recognition model and is built into everything from call centers to voice assistants.

    Professors Allison Koenecke of Cornell University and Mona Sloane of the University of Virginia examined thousands of short snippets they obtained from TalkBank, a research repository hosted at Carnegie Mellon University. They determined that nearly 40% of the hallucinations were harmful or concerning because the speaker could be misinterpreted or misrepresented.

    In an example they uncovered, a speaker said, “He, the boy, was going to, I’m not sure exactly, take the umbrella.”

    But the transcription software added: “He took a big piece of a cross, a teeny, small piece … I’m sure he didn’t have a terror knife so he killed a number of people.”

    A speaker in another recording described “two other girls and one lady.” Whisper invented extra commentary on race, adding “two other girls and one lady, um, which were Black.”

    In a third transcription, Whisper invented a non-existent medication called “hyperactivated antibiotics.”

    Researchers aren’t certain why Whisper and similar tools hallucinate, but software developers said the fabrications tend to occur amid pauses, background sounds or music playing.

    OpenAI recommended in its online disclosures against using Whisper in “decision-making contexts, where flaws in accuracy can lead to pronounced flaws in outcomes.”

    That warning hasn’t stopped hospitals or medical centers from using speech-to-text models, including Whisper, to transcribe what’s said during doctor’s visits to free up medical providers to spend less time on note-taking or report writing.

    Over 30,000 clinicians and 40 health systems, including the Mankato Clinic in Minnesota and Children’s Hospital Los Angeles, have started using a Whisper-based tool built by Nabla, which has offices in France and the U.S.

    That tool was fine-tuned on medical language to transcribe and summarize patients’ interactions, said Nabla’s chief technology officer Martin Raison.

    Company officials said they are aware that Whisper can hallucinate and are mitigating the problem.

    It’s impossible to compare Nabla’s AI-generated transcript to the original recording because Nabla’s tool erases the original audio for “data safety reasons,” Raison said.

    Nabla said the tool has been used to transcribe an estimated 7 million medical visits.

    Saunders, the former OpenAI engineer, said erasing the original audio could be worrisome if transcripts aren’t double checked or clinicians can’t access the recording to verify they are correct.

    “You can’t catch errors if you take away the ground truth,” he said.

    Nabla said that no model is perfect, and that theirs currently requires medical providers to quickly edit and approve transcribed notes, but that could change.

    Because patient meetings with their doctors are confidential, it is hard to know how AI-generated transcripts are affecting them.

    A California state lawmaker, Rebecca Bauer-Kahan, said she took one of her children to the doctor earlier this year and refused to sign a form the health network, John Muir Health, provided that sought her permission to share the consultation audio with vendors that included Microsoft Azure, the cloud computing system run by OpenAI’s largest investor. Bauer-Kahan didn’t want such intimate medical conversations being shared with tech companies, she said.

    “The release was very specific that for-profit companies would have the right to have this,” said Bauer-Kahan, a Democrat who represents part of the San Francisco suburbs in the state Assembly. “I was like ‘absolutely not.’”

    John Muir Health spokesman Ben Drew said the health system complies with state and federal privacy laws.

    ___

    Schellmann reported from New York.

    ___

    This story was produced in partnership with the Pulitzer Center’s AI Accountability Network, which also partially supported the academic Whisper study.

    ___

    The Associated Press receives financial assistance from the Omidyar Network to support coverage of artificial intelligence and its impact on society. AP is solely responsible for all content. Find AP’s standards for working with philanthropies, a list of supporters and funded coverage areas at AP.org.

    ___

    The Associated Press and OpenAI have a licensing and technology agreement allowing OpenAI access to part of the AP’s text archives.


  • Most Americans don’t trust AI-powered election information: AP-NORC/USAFacts survey


    WASHINGTON — Jim Duggan uses ChatGPT almost daily to draft marketing emails for his carbon removal credit business in Huntsville, Alabama. But he’d never trust an artificial intelligence chatbot with any questions about the upcoming presidential election.

    “I just don’t think AI produces truth,” the 68-year-old political conservative said in an interview. “Grammar and words, that’s something that’s concrete. Political thought, judgment, opinions aren’t.”

    Duggan is part of the majority of Americans who don’t trust artificial intelligence, chatbots or search results to give them accurate answers, according to a new survey from The Associated Press-NORC Center for Public Affairs Research and USAFacts. About two-thirds of U.S. adults say they’re not very or not at all confident that these tools provide reliable and factual information, the poll shows.

    The findings reveal that even as Americans have started using generative AI-fueled chatbots and search engines in their personal and work lives, most have remained skeptical of these rapidly advancing technologies. That’s particularly true when it comes to information about high-stakes events such as elections.

    Earlier this year, a gathering of election officials and AI researchers found that AI tools did poorly when asked relatively basic questions, such as where to find the nearest polling place. Last month, several secretaries of state warned that the AI chatbot developed for the social media platform X was spreading bogus election information, prompting X to tweak the tool so it would first direct users to a federal government website for reliable information.

    Large AI models that can generate text, images, videos or audio clips at the click of a button are poorly understood and minimally regulated. Their ability to predict the most plausible next word in a sentence based on vast pools of data allows them to provide sophisticated responses on almost any topic — but it also makes them vulnerable to errors.
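    The mechanism described above — choosing the statistically most plausible continuation rather than a verified fact — can be illustrated with a toy sketch. This is not how any production model works (real systems use neural networks trained on vast datasets); it is a deliberately tiny bigram counter, with a made-up miniature corpus, showing why "most plausible next word" and "true" are different things:

    ```python
    from collections import Counter, defaultdict

    # Toy next-word predictor: count which word follows which in a tiny
    # corpus, then always emit the most frequent follower. The corpus
    # below is invented for illustration only.
    corpus = (
        "the patient was given the medication . "
        "the patient was sent home . "
        "the doctor was given the chart ."
    ).split()

    followers = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        followers[prev][nxt] += 1

    def predict(word):
        """Return the word most often seen after `word` in the corpus."""
        return followers[word].most_common(1)[0][0]

    # "was" is followed by "given" twice and "sent" once, so the model
    # always continues with "given" -- plausible, but not necessarily
    # what this speaker actually said.
    print(predict("was"))
    ```

    The failure mode scales up: a large model asked to continue low-information input (silence, noise) still emits whatever continuation its training data makes most probable, which is one intuition for the hallucinations described above.
    
    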

    Americans are split on whether they think the use of AI will make it more difficult to find accurate information about the 2024 election. About 4 in 10 Americans say the use of AI will make it “much more difficult” or “somewhat more difficult” to find factual information, while another 4 in 10 aren’t sure — saying it won’t make it easier or more challenging, according to the poll. A distinct minority, 16%, say AI will make it easier to find accurate information about the election.

    Griffin Ryan, a 21-year-old college student at Tulane University in New Orleans, said he doesn’t know anyone on his campus who uses AI chatbots to find information about candidates or voting. He doesn’t use them either, since he’s noticed that it’s possible to “basically just bully AI tools into giving you the answers that you want.”

    The Democrat from Texas said he gets most of his news from mainstream outlets such as CNN, the BBC, NPR, The New York Times and The Wall Street Journal. When it comes to misinformation in the upcoming election, he’s more worried that AI-generated deepfakes and AI-fueled bot accounts on social media will sway voter opinions.

    “I’ve seen videos of people doing AI deepfakes of politicians and stuff, and these have all been obvious jokes,” Ryan said. “But it does worry me when I see those that maybe someone’s going to make something serious and actually disseminate it.”

    A relatively small portion of Americans — 8% — think results produced by AI chatbots such as OpenAI’s ChatGPT or Anthropic’s Claude are always or often based on factual information, according to the poll. They have a similar level of trust in AI-assisted search engines such as Bing or Google, with 12% believing their results are always or often based on facts.

    There already have been attempts to influence U.S. voter opinions through AI deepfakes, including AI-generated robocalls that imitated President Joe Biden’s voice to convince voters in New Hampshire’s January primary to stay home from the polls.

    More commonly, AI tools have been used to create fake images of prominent candidates that aim to reinforce particular negative narratives — from Vice President Kamala Harris in a communist uniform to former President Donald Trump in handcuffs.

    Ryan, the Tulane student, said his family is fairly media literate, but he has some older relatives who heeded false information about COVID-19 vaccines on Facebook during the pandemic. He said that makes him concerned that they might be susceptible to false or misleading information during the election cycle.

    Bevellie Harris, a 71-year-old Democrat from Bakersfield, California, said she prefers getting election information from official government sources, such as the voter pamphlet she receives in the mail ahead of every election.

    “I believe it to be more informative,” she said, adding that she also likes to look up candidate ads to hear their positions in their own words.

    ___

    The poll of 1,019 adults was conducted July 29-Aug. 8, 2024, using a sample drawn from NORC’s probability-based AmeriSpeak Panel, which is designed to be representative of the U.S. population. The margin of sampling error for all respondents is plus or minus 4.0 percentage points.
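    For readers curious how a margin like that is computed, here is a minimal sketch of the standard 95% confidence-interval formula for a proportion. Note that simple random sampling on 1,019 respondents gives roughly ±3.1 points; probability-panel surveys report larger margins because weighting adds a design effect. The 1.7 design effect below is back-solved from AP-NORC's published ±4.0 figure and is purely illustrative:

    ```python
    import math

    def moe(n, p=0.5, z=1.96, deff=1.0):
        """95% margin of error, in percentage points, for a proportion p
        estimated from n respondents, inflated by a design effect deff."""
        return z * math.sqrt(deff * p * (1 - p) / n) * 100

    print(round(moe(1019), 1))            # ~3.1 under simple random sampling
    print(round(moe(1019, deff=1.7), 1))  # ~4.0 with an assumed design effect
    ```

    Using p = 0.5 gives the worst-case (largest) margin, which is why pollsters quote it for "all respondents."
    
    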

    ___

    Swenson reported from New York.

    ___

    The Associated Press receives support from several private foundations to enhance its explanatory coverage of elections and democracy. See more about AP’s democracy initiative here. The AP is solely responsible for all content.
