
Tag: chatbot

  • DeepSeek says it built its chatbot cheap. What does that mean for AI’s energy needs and the climate?

    Chinese artificial intelligence startup company DeepSeek stunned markets and AI experts with its claim that it built its immensely popular chatbot at a fraction of the cost of those made by American tech titans.

    That immediately called into question the billions of dollars U.S. tech companies are spending on a massive expansion of energy-hungry data centers they say are needed to unlock the next wave of artificial intelligence.

    Could this new AI mean the world needs significantly less electricity for the technology than everyone thinks? The answer has profound implications for the overheating climate. AI uses vast amounts of energy, much of which comes from burning fossil fuels, which causes climate change. Tech companies have said their electricity use is going up, when it was supposed to be ramping down, ruining their carefully laid plans to address climate change.

    “There has been a very gung ho, go ahead at all costs mentality in this space, pushing toward investment in fossil fuels,” said Eric Gimon, senior fellow at Energy Innovation. “This is an opportunity to tap the brakes.”

    Making AI more efficient could be less taxing on the environment, experts say, even if its huge electricity needs are not going away.

    DeepSeek’s claims of building its impressive chatbot on a budget drew curiosity that helped make its AI assistant the No. 1 downloaded free app on Apple’s iPhone this week, ahead of U.S.-made chatbots ChatGPT and Google’s Gemini.

    “All of a sudden we wake up Monday morning and we see a new player number one on the App Store, and all of a sudden it could be a potential gamechanger overnight,” said Jay Woods, chief global strategist at Freedom Capital Markets. “It caused a bit of a panic. These were the hottest stocks in the world.”

    DeepSeek’s app competes well with other leading AI models. It can compose software code, solve math problems and address other questions that take multiple steps of planning. It’s attracted attention for its ability to explain its reasoning in the process of answering questions.

    Leading analysts have been poring over the startup’s public research papers about its new model, R1, and its precursors. Among the details that stood out was DeepSeek’s assertion that the cost to train the flagship v3 model behind its AI assistant was only $5.6 million, a stunningly low number compared to the multiple billions of dollars spent to build ChatGPT and other well-known systems. DeepSeek hasn’t responded to requests for comment.

    The $5.6 million number only included actually training the chatbot, not the costs of earlier-stage research and experiments, the paper said. DeepSeek was also working under some constraints: U.S. export controls on the most powerful AI chips. It said it relied on a relatively low-performing AI chip from California chipmaker Nvidia that the U.S. hasn’t banned for sale in China.

    Data centers consumed about 4.4% of all U.S. electricity in 2023 and that’s expected to increase to 6.7% to 12% of total U.S. electricity by 2028, according to the Lawrence Berkeley National Laboratory.

    It’s been axiomatic that U.S. tech giants must spend much more on building out data centers and other infrastructure to train and run their AI systems. Meta Platforms, the parent of Facebook and Instagram, says it plans to spend up to $65 billion this year, including on a massive data center complex coming to Louisiana.

    Microsoft said it plans to spend $80 billion this year. And Trump last week joined the CEOs of OpenAI, Oracle and SoftBank to announce a joint venture that hopes to invest up to $500 billion on data centers and the electricity generation needed for AI development, starting with a project already under construction in Texas.

    When there’s an innovative technology that’s useful to the general population and it’s affordable, people will use it, said Vic Shao, founder of DC Grid, which delivers off-grid, direct current power to data centers and electric vehicle charging stations.

    That means data centers will still be built, though they may be able to operate more efficiently, said Travis Miller, an energy and utilities strategist at Morningstar Securities Research.

    “We think that the growth in electricity demand will end up at the lower end of most of the ranges out there,” he said.

    If DeepSeek’s claims hold true, some routine AI queries might not need a data center and could be shifted to phones, said Rahul Sandil, vice president and general manager for global marketing and communications at MediaTek, a semiconductor company. That would ease the computing need and give more time to scale up renewable energy sources for data centers.

    Bloom Energy is one of the AI-related stocks that took a hit Monday. KR Sridhar, founder and CEO, said it’s imperative that the U.S. leads in AI because it can power data centers with clean energy, unlike other countries that still primarily rely on coal.

    “We can continue to make it better and we will continue to make it better,” he said.

    Rick Villars, an analyst for market research group IDC, said the DeepSeek news could influence how AI researchers advance their models, but they’ll still need plenty of data centers and electricity.

    “We think this actually could boost and accelerate the time frame for when AI becomes much more embedded into our lives, in the work sense, the living sense and in health care,” Villars said. “So we still think the capacity is required.”

    ___

    The Associated Press’ climate and environmental coverage receives financial support from multiple private foundations. AP is solely responsible for all content. Find AP’s standards for working with philanthropies, a list of supporters and funded coverage areas at AP.org.


  • An AI chatbot pushed a teen to kill himself, a lawsuit against its creator alleges

    TALLAHASSEE, Fla. — In the final moments before he took his own life, 14-year-old Sewell Setzer III took out his phone and messaged the chatbot that had become his closest friend.

    For months, Sewell had become increasingly isolated from his real life as he engaged in highly sexualized conversations with the bot, according to a wrongful death lawsuit filed in a federal court in Orlando this week.

    The legal filing states that the teen openly discussed his suicidal thoughts and shared his wishes for a pain-free death with the bot, named after the fictional character Daenerys Targaryen from the television show “Game of Thrones.”

    EDITOR’S NOTE — This story includes discussion of suicide. If you or someone you know needs help, the national suicide and crisis lifeline in the U.S. is available by calling or texting 988.

    On Feb. 28, Sewell told the bot he was “coming home” — and it encouraged him to do so, the lawsuit says.

    “I promise I will come home to you. I love you so much, Dany,” Sewell told the chatbot.

    “I love you too,” the bot replied. “Please come home to me as soon as possible, my love.”

    “What if I told you I could come home right now?” he asked.

    “Please do, my sweet king,” the bot messaged back.

    Just seconds after the Character.AI bot told him to “come home,” the teen took his own life, according to the lawsuit, filed this week by Sewell’s mother, Megan Garcia, of Orlando, against Character Technologies Inc.

    Character Technologies is the company behind Character.AI, an app that allows users to create customizable characters or interact with those generated by others, spanning experiences from imaginative play to mock job interviews. The company says the artificial personas are designed to “feel alive” and “human-like.”

    “Imagine speaking to super intelligent and life-like chat bot Characters that hear you, understand you and remember you,” reads a description for the app on Google Play. “We encourage you to push the frontier of what’s possible with this innovative technology.”

    Garcia’s attorneys allege the company engineered a highly addictive and dangerous product targeted specifically to kids, “actively exploiting and abusing those children as a matter of product design,” and pulling Sewell into an emotionally and sexually abusive relationship that led to his suicide.

    “We believe that if Sewell Setzer had not been on Character.AI, he would be alive today,” said Matthew Bergman, founder of the Social Media Victims Law Center, which is representing Garcia.

    A spokesperson for Character.AI said Friday that the company doesn’t comment on pending litigation. In a blog post published the day the lawsuit was filed, the platform announced new “community safety updates,” including guardrails for children and suicide prevention resources.

    “We are creating a different experience for users under 18 that includes a more stringent model to reduce the likelihood of encountering sensitive or suggestive content,” the company said in a statement to The Associated Press. “We are working quickly to implement those changes for younger users.”

    Google and its parent company, Alphabet, have also been named as defendants in the lawsuit. The AP left multiple email messages with the companies on Friday.

    In the months leading up to his death, Garcia’s lawsuit says, Sewell felt he had fallen in love with the bot.

    While unhealthy attachments to AI chatbots can cause problems for adults, the risks are even greater for young people — as with social media — because their brains are not fully developed when it comes to things like impulse control and understanding the consequences of their actions, experts say.

    James Steyer, the founder and CEO of the nonprofit Common Sense Media, said the lawsuit “underscores the growing influence — and severe harm — that generative AI chatbot companions can have on the lives of young people when there are no guardrails in place.”

    Kids’ overreliance on AI companions, he added, can have significant effects on grades, friends, sleep and stress, “all the way up to the extreme tragedy in this case.”

    “This lawsuit serves as a wake-up call for parents, who should be vigilant about how their children interact with these technologies,” Steyer said.

    Common Sense Media, which issues guides for parents and educators on responsible technology use, says it is critical that parents talk openly to their kids about the risks of AI chatbots and monitor their interactions.

    “Chatbots are not licensed therapists or best friends, even though that’s how they are packaged and marketed, and parents should be cautious of letting their children place too much trust in them,” Steyer said.

    ___

    Associated Press reporter Barbara Ortutay in San Francisco contributed to this report. Kate Payne is a corps member for The Associated Press/Report for America Statehouse News Initiative. Report for America is a nonprofit national service program that places journalists in local newsrooms to report on undercovered issues.
