Tag: Regulators

  • US regulators seek to break up Google, forcing Chrome sale as part of monopoly punishment

    U.S. regulators want a federal judge to break up Google to prevent the company from continuing to squash competition through its dominant search engine after a court found it had maintained an abusive monopoly over the past decade.

    The proposed breakup floated in a 23-page document filed late Wednesday by the U.S. Department of Justice calls for sweeping punishments that would include a sale of Google’s industry-leading Chrome web browser and impose restrictions to prevent Android from favoring its own search engine.

    Although regulators stopped short of demanding Google sell Android too, they asserted the judge should make it clear the company could still be required to divest its smartphone operating system if its oversight committee continues to see evidence of misconduct.

    The broad scope of the recommended penalties underscores how severely regulators operating under President Joe Biden’s administration believe Google should be punished following an August ruling by U.S. District Judge Amit Mehta that branded the company as a monopolist.

    The Justice Department decision-makers who will inherit the case after President-elect Donald Trump takes office next year might not be as strident. The Washington, D.C. court hearings on Google’s punishment are scheduled to begin in April and Mehta is aiming to issue his final decision before Labor Day.

    If Mehta embraces the government’s recommendations, Google would be forced to sell its 16-year-old Chrome browser within six months of the final ruling. But the company certainly would appeal any punishment, potentially prolonging a legal tussle that has dragged on for more than four years.

    Google didn’t have an immediate comment about the filing, but has previously asserted the Justice Department is pushing penalties that extend far beyond the issues addressed in its case.

    Besides seeking a Chrome spinoff and a corralling of the Android software, the Justice Department wants the judge to ban Google from forging multibillion-dollar deals to lock in its dominant search engine as the default option on Apple’s iPhone and other devices. It would also ban Google from favoring its own services, such as YouTube or its recently-launched artificial intelligence platform, Gemini.

    Regulators also want Google to license the search index data it collects from people’s queries to its rivals, giving them a better chance at competing with the tech giant. On the commercial side of its search engine, Google would be required to provide more transparency into how it sets the prices that advertisers pay to be listed near the top of some targeted search results.

    The measures, if they are ordered, threaten to upend a business expected to generate more than $300 billion in revenue this year.

    “The playing field is not level because of Google’s conduct, and Google’s quality reflects the ill-gotten gains of an advantage illegally acquired,” the Justice Department asserted in its recommendations. “The remedy must close this gap and deprive Google of these advantages.”

    It’s still possible that the Justice Department could ease off attempts to break up Google, especially if Trump takes the widely expected step of replacing Assistant Attorney General Jonathan Kanter, who was appointed by Biden to oversee the agency’s antitrust division.

    Although the case targeting Google was originally filed during the final months of Trump’s first term in office, Kanter oversaw the high-profile trial that culminated in Mehta’s ruling against Google. Working in tandem with Federal Trade Commission Chair Lina Khan, Kanter took a get-tough stance against Big Tech that triggered other attempted crackdowns on industry powerhouses such as Apple and discouraged many business deals from getting done during the past four years.

    Trump recently expressed concerns that a breakup might destroy Google but didn’t elaborate on alternative penalties he might have in mind. “What you can do without breaking it up is make sure it’s more fair,” Trump said last month. Matt Gaetz, the former Republican congressman whom Trump nominated to be the next U.S. Attorney General, has previously called for the breakup of Big Tech companies.

    Gaetz, a firebrand for Trump, faces a tough confirmation hearing.

    This latest filing gave Kanter and his team a final chance to spell out measures that they believe are needed to restore competition in search. It comes six weeks after Justice first floated the idea of a breakup in a preliminary outline of potential penalties.

    But Kanter’s proposal is already raising questions about whether regulators seek to impose controls that extend beyond the issues covered in last year’s trial, and — by extension — Mehta’s ruling.

    Banning the default search deals that Google now pays more than $26 billion annually to maintain was one of the main practices that troubled Mehta in his ruling.

    It’s less clear whether the judge will embrace the Justice Department’s contention that Chrome needs to be spun out of Google, or that Android should be completely walled off from its search engine.

    “It is probably going a little beyond,” Syracuse University law professor Shubha Ghosh said of the Chrome breakup. “The remedies should match the harm, it should match the transgression. This does seem a little beyond that pale.”

    Trying to break up Google harks back to a similar punishment initially imposed on Microsoft a quarter century ago following another major antitrust trial, which culminated in a federal judge deciding the software maker had illegally used its Windows operating system for PCs to stifle competition.

    However, an appeals court overturned an order that would have broken up Microsoft, a precedent many experts believe will make Mehta reluctant to go down a similar road with the Google case.

  • Japanese regulators disqualify a reactor under post-Fukushima safety standards for the first time

    TOKYO — Japan’s nuclear watchdog on Wednesday formally disqualified a reactor in the country’s north-central region for a restart, the first rejection under safety standards that were reinforced after the 2011 Fukushima disaster. The decision is a setback for Japan as it seeks to accelerate reactor restarts to maximize nuclear power.

    The Nuclear Regulation Authority at a regular meeting Wednesday announced the Tsuruga No. 2 reactor is “unfit” as its operator failed to address safety risks stemming from possible active faults underneath it.

    Tsuruga No. 2, operated by the Japan Atomic Power Co., is the first reactor to be rejected under the safety standards adopted in 2013 based on lessons from the 2011 Fukushima Daiichi meltdown disaster following a massive earthquake and tsunami.

    “We reached our conclusion based on a very strict examination,” NRA chairperson Shinsuke Yamanaka told reporters.

    The verdict comes after more than eight years of safety reviews that were repeatedly disrupted by data coverups and mistakes by the operator, Yamanaka said. He called the case “abnormal” and urged the utility to take the result seriously.

    The decision is a blow to Japan Atomic Power because it virtually ends its hopes for a restart. The operator, which is decommissioning its other reactor, Tsuruga No. 1, had hoped to put No. 2 back online, but it would require an examination of dozens of faults around the reactor to prove their safety.

    An NRA safety panel concluded three months ago there’s no evidence denying the possibility of active faults about 300 meters (330 yards) north of the No. 2 reactor stretching to right underneath the facility, meaning the reactor cannot be operated.

    Japan’s government in 2022 adopted a plan to maximize the use of nuclear energy, pushing to accelerate reactor restarts to secure a stable energy supply and meet its pledge to reach carbon neutrality by 2050.

    Concern about the government’s revived push for nuclear energy grew after a magnitude 7.5 earthquake hit Japan’s Noto Peninsula on Jan. 1, 2024, killing more than 400 people and damaging more than 100,000 structures. The quake caused minor damage to two nearby nuclear facilities, and evacuation plans for the region were found to be inadequate.

    Building key nuclear facilities, such as reactors, directly above active faults is prohibited in earthquake-prone Japan.

    Yamanaka said the NRA is not immediately ordering a decommissioning because the reactor, which is offline with its spent fuel safely cooled, will not pose a major threat even if the active faults move.

    If the utility decides to reapply, it must not only address the faults issue but also implement adequate safety measures for the entire plant, Yamanaka said. Providing scientific proof of the status of faults underneath key nuclear facilities is difficult, but other operators that obtained restart permits all cleared the requirement, he noted.

    The Tsuruga No. 2 reactor first started commercial operation in February 1987 and has been offline since May 2011. The operator denied the NRA panel’s 2013 on-site inspection results, which concluded that the faults under the No. 2 reactor were active, and it applied for a restart in 2015.

  • British gambling regulator prosecutes Sorare football game | Regulators

    Britain’s Gambling Commission is to prosecute Sorare, a multibillion-pound company that makes a fantasy football game promoted by the Premier League, for providing unlicensed gambling.

    Sorare, which is valued at $4.3bn (£3.21bn) and counts major international investment firms such as SoftBank among its backers, will appear in court on 4 October in what will be an extremely rare use of the gambling regulator’s prosecutorial powers. The company denies the charge of unlicensed gambling.

    Developed in 2018 by Nicolas Julia and Adrien Montfort, Sorare describes itself as a fantasy sport cryptocurrency-based video game. Players can create their own “football club” with cards in the form of tradable non-fungible tokens (NFTs), competing for prizes including cash, VIP tickets and signed kits.

    The commission said in October 2021 that it was investigating whether the products provided by the Paris-based firm were online gambling and required a licence.

    Almost exactly two years later, the regulator has now charged the company with offering unlicensed gambling, with the company due to appear in Birmingham magistrates court.

    Since it was established in 2005, the Gambling Commission is thought to have used its prosecutorial powers only once, in a case of cheating involving a man who had drugged dogs to fix greyhound races.

    A spokesperson for Sorare said: “We are aware of the claims made by the Gambling Commission and have instructed our UK counsel to challenge them. We firmly deny any claims that Sorare is a gambling product under UK laws.

    “The commission has misunderstood our business and wrongly determined that gambling laws apply to Sorare. We cannot comment further whilst legal proceedings are under way.”

    Sorare’s website boasts of partnerships with major leagues and 317 clubs around the world, including every Premier League club and European giants such as Real Madrid, Barcelona and Bayern Munich.

    It has promoted its games via an ad campaign featuring the French striker Kylian Mbappé and also claims to have partnerships in the US, with the National Basketball Association and Major League Baseball.

    In 2023, the Premier League granted Sorare a four-year licence to sell digital sports cards of players from all 20 Premier League clubs, a deal Sky News said at the time could be worth £30m a year.

    A section on the Premier League website lists Sorare among the league’s partners.

    The page states that Sorare also counts the sportspeople Serena Williams, Lionel Messi, Zinedine Zidane, Rio Ferdinand, Antoine Griezmann, Gerard Piqué, Blake Griffin, and Rudy Gobert among its investors, ambassadors and advisers.

    The Premier League website also describes Sorare as one of Europe’s fastest-growing startups, pointing to a recent $680m (£508m) fundraising effort that valued the company at $4.3bn.

    Investors in Sorare include SoftBank, Accel and Benchmark.

    The company, which employs 160 people in New York and Paris, also boasts of having 3 million users in 180 markets.

    The Gambling Commission said it had charged Sorare with “providing unlicensed gambling facilities to consumers in Britain” but that it could not comment any further.

    The Guardian has approached the Premier League for comment and has attempted to reach Sorare and its founders.

  • Judge gives US regulators until December to propose penalties for Google’s illegal search monopoly

    A federal judge on Friday gave the U.S. Justice Department until the end of the year to outline how Google should be punished for illegally monopolizing the internet search market and then prepare to present its case for imposing the penalties next spring.

    The loose-ended timeline sketched out by U.S. District Judge Amit Mehta came during the first court hearing since he branded Google as a ruthless monopolist in a landmark ruling issued last month.

    Mehta’s decision triggered the need for another phase of the legal process to determine how Google should be penalized for years of misconduct and forced to make other changes to prevent potential future abuses by the dominant search engine that’s the foundation of its internet empire.

    Attorneys for the Justice Department and Google were unable to reach a consensus on how the time frame for the penalty phase should unfold in the weeks leading up to Friday’s hearing in Washington D.C., prompting Mehta to steer them down the road that he hopes will result in a decision on the punishment before Labor Day next year.

    To make that happen, Mehta indicated he would like the trial in the penalty phase to happen next spring. The judge said March and April look like the best months on his court calendar.

    If Mehta’s timeline pans out, a ruling on Google’s antitrust penalties would come nearly five years after the Justice Department filed the lawsuit that led to a 10-week antitrust trial last autumn. That’s similar to the timeline Microsoft experienced in the late 1990s, when regulators targeted the company for its misconduct in the personal computer market.

    The Justice Department hasn’t yet given any inkling on how severely Google should be punished. The most likely targets are the long-running deals that Google has lined up with Apple, Samsung, and other tech companies to make its search engine the default option on smartphones and web browsers.

    In return for the guaranteed search traffic, Google has been paying its partners more than $25 billion annually — with most of that money going to Apple for the prized position on the iPhone.

    In a more drastic scenario, the Justice Department could seek to force Google to surrender parts of its business, including the Chrome web browser and Android software that powers most of the world’s smartphones because both of those also lock in search traffic.

    In Friday’s hearing, Justice Department lawyers said they need ample time to come up with a comprehensive proposal that will also consider how Google has started to deploy artificial intelligence in its search results and how that technology could upend the market.

    Google’s lawyers told the judge they hope the Justice Department proposes a realistic list of penalties that address the issues in the judge’s ruling rather than submit extreme measures that amount to “political grandstanding.”

    Mehta gave the two sides until Sept. 13 to file a proposed timeline that includes the Justice Department disclosing its proposed punishment before 2025.

  • How do you know when AI is powerful enough to be dangerous? Regulators try to do the math

    How do you know if an artificial intelligence system is so powerful that it poses a security danger and shouldn’t be unleashed without careful oversight?

    For regulators trying to put guardrails on AI, it’s mostly about the arithmetic. Specifically, an AI model trained using 10 to the 26th floating-point operations must now be reported to the U.S. government and could soon trigger even stricter requirements in California.

    Say what? Well, if you’re counting the zeroes, that’s 100,000,000,000,000,000,000,000,000, or 100 septillion, calculations in total, using a measure known as flops, for floating-point operations.

    What it signals to some lawmakers and AI safety advocates is a level of computing power that might enable rapidly advancing AI technology to create or proliferate weapons of mass destruction, or conduct catastrophic cyberattacks.

    Those who’ve crafted such regulations acknowledge they are an imperfect starting point to distinguish today’s highest-performing generative AI systems — largely made by California-based companies like Anthropic, Google, Meta Platforms and ChatGPT-maker OpenAI — from the next generation that could be even more powerful.

    Critics have pounced on the thresholds as arbitrary — an attempt by governments to regulate math.

    “Ten to the 26th flops,” said venture capitalist Ben Horowitz on a podcast this summer. “Well, what if that’s the size of the model you need to, like, cure cancer?”

    An executive order signed by President Joe Biden last year relies on that threshold. So does California’s newly passed AI safety legislation — which Gov. Gavin Newsom has until Sept. 30 to sign into law or veto. California adds a second metric to the equation: regulated AI models must also cost at least $100 million to build.

    Following Biden’s footsteps, the European Union’s sweeping AI Act also measures floating-point operations, or flops, but sets the bar 10 times lower at 10 to the 25th power. That covers some AI systems already in operation. China’s government has also looked at measuring computing power to determine which AI systems need safeguards.
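    The thresholds described above amount to a simple numeric comparison. The sketch below (my own illustration, not any regulator’s actual tooling, and with made-up flop counts for the hypothetical training runs) shows how a model’s total training compute would stack up against the U.S. and EU reporting bars:

    ```python
    # Illustrative only: compare a training run's total floating-point
    # operations against the reporting thresholds described in the article.
    US_THRESHOLD_FLOPS = 10**26   # Biden executive order / California bill
    EU_THRESHOLD_FLOPS = 10**25   # EU AI Act sets the bar 10x lower

    def applicable_rules(training_flops: int) -> list[str]:
        """Return which reporting thresholds a training run would cross."""
        rules = []
        if training_flops >= EU_THRESHOLD_FLOPS:
            rules.append("EU AI Act")
        if training_flops >= US_THRESHOLD_FLOPS:
            rules.append("US executive order")
        return rules

    # Hypothetical training runs, in total floating-point operations:
    print(applicable_rules(5 * 10**24))   # below both bars -> []
    print(applicable_rules(3 * 10**25))   # crosses only the EU bar
    print(applicable_rules(2 * 10**26))   # crosses both
    ```

    Note that California’s proposal layers a second test on top of this one: the model must also have cost at least $100 million to build.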

    No publicly available models meet the higher California threshold, though it’s likely that some companies have already started to build them. If so, they’re supposed to be sharing certain details and safety precautions with the U.S. government. Biden employed a Korean War-era law to compel tech companies to alert the U.S. Commerce Department if they’re building such AI models.

    AI researchers are still debating how best to evaluate the capabilities of the latest generative AI technology and how it compares to human intelligence. There are tests that judge AI on solving puzzles, logical reasoning or how swiftly and accurately it predicts what text will answer a person’s chatbot query. Those measurements help assess an AI tool’s usefulness for a given task, but there’s no easy way of knowing which one is so widely capable that it poses a danger to humanity.

    “This computation, this flop number, by general consensus is sort of the best thing we have along those lines,” said physicist Anthony Aguirre, executive director of the Future of Life Institute, which has advocated for the passage of California’s Senate Bill 1047 and other AI safety rules around the world.

    Floating point arithmetic might sound fancy “but it’s really just numbers that are being added or multiplied together,” making it one of the simplest ways to assess an AI model’s capability and risk, Aguirre said.

    “Most of what these things are doing is just multiplying big tables of numbers together,” he said. “You can just think of typing in a couple of numbers into your calculator and adding or multiplying them. And that’s what it’s doing — ten trillion times or a hundred trillion times.”
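    Aguirre’s “big tables of numbers” are matrix multiplications, and their cost is easy to count. As a rough sketch (my own back-of-the-envelope illustration, not a figure from the article): multiplying an m-by-k matrix with a k-by-n matrix takes about 2·m·n·k floating-point operations, one multiply and one add per term.

    ```python
    # Rough flop count for a single matrix multiplication: each of the m*n
    # output entries needs k multiplies and k adds, so ~2*m*n*k operations.
    def matmul_flops(m: int, k: int, n: int) -> int:
        return 2 * m * n * k

    # One 4096x4096 by 4096x4096 product -- the kind of "big table of
    # numbers" operation repeated trillions of times during training:
    flops = matmul_flops(4096, 4096, 4096)
    print(flops)            # ~1.4e11 operations for a single product
    print(10**26 // flops)  # how many such products fit under the US bar
    ```

    Repeating that single product hundreds of trillions of times is what pushes a large training run toward the 10-to-the-26th regulatory threshold.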

    For some tech leaders, however, it’s too simple and hard-coded a metric. There’s “no clear scientific support” for using such metrics as a proxy for risk, argued computer scientist Sara Hooker, who leads AI company Cohere’s nonprofit research division, in a July paper.

    “Compute thresholds as currently implemented are shortsighted and likely to fail to mitigate risk,” she wrote.

    Venture capitalist Horowitz and his business partner Marc Andreessen, founders of the influential Silicon Valley investment firm Andreessen Horowitz, have attacked the Biden administration as well as California lawmakers for AI regulations they argue could snuff out an emerging AI startup industry.

    For Horowitz, putting limits on “how much math you’re allowed to do” reflects a mistaken belief there will only be a handful of big companies making the most capable models and you can put “flaming hoops in front of them and they’ll jump through them and it’s fine.”

    In response to the criticism, the sponsor of California’s legislation sent a letter to Andreessen Horowitz this summer defending the bill, including its regulatory thresholds.

    Regulating at over 10 to the 26th flops is “a clear way to exclude from safety testing requirements many models that we know, based on current evidence, lack the ability to cause critical harm,” wrote state Sen. Scott Wiener of San Francisco. Existing publicly released models “have been tested for highly hazardous capabilities and would not be covered by the bill,” Wiener said.

    Both Wiener and the Biden executive order treat the metric as a temporary one that could be adjusted later.

    Yacine Jernite, who works on policy research at the AI company Hugging Face, said the flops metric emerged in “good faith” ahead of last year’s Biden order but is already starting to grow obsolete. AI developers are doing more with smaller models requiring less computing power, while the potential harms of more widely used AI products won’t trigger California’s proposed scrutiny.

    “Some models are going to have a drastically larger impact on society, and those should be held to a higher standard, whereas some others are more exploratory and it might not make sense to have the same kind of process to certify them,” Jernite said.

    Aguirre said it makes sense for regulators to be nimble, but he characterizes some opposition to the flops threshold as an attempt to avoid any regulation of AI systems as they grow more capable.

    “This is all happening very fast,” Aguirre said. “I think there’s a legitimate criticism that these thresholds are not capturing exactly what we want them to capture. But I think it’s a poor argument to go from that to, ‘Well, we just shouldn’t do anything and just cross our fingers and hope for the best.’”
