
Tag: Governor

  • New Jersey governor wants more federal resources for probe into drone sightings

    TOMS RIVER, N.J. — Gov. Phil Murphy has asked the Biden administration to put more resources into an investigation of mysterious drone sightings that have been reported in New Jersey and nearby states.

    Murphy, a Democrat, made the request in a letter Thursday, noting that state and local law enforcement remain “hamstrung” by existing laws and policies in their efforts to successfully counteract any nefarious activity of unmanned aircraft. He posted a copy of the letter on the social media platform X.

    “This leaves action surrounding the (drones) squarely on the shoulders of the federal government,” Murphy said. “More federal resources are needed to understand what is behind this activity.”

    Murphy and other officials have repeatedly stressed that there is no evidence that the aircraft pose a national security or a public safety threat, or have a foreign nexus. The Pentagon also has said they are not U.S. military drones.

    The drones have drawn intense public concern and curiosity since residents first reported seeing them last month. Assemblywoman Dawn Fantasia said from four to 180 aircraft have been reported to authorities since Nov. 18, appearing from dusk till 11 p.m.

The flying objects were initially spotted near the Picatinny Arsenal, a U.S. military research and manufacturing facility, and over President-elect Donald Trump’s golf course in Bedminster, but the number of reported sightings has grown greatly since then. Drones were also spotted in Pennsylvania, New York, Connecticut and other parts of the Mid-Atlantic region.

    The FBI, Federal Aviation Administration and other state and federal agencies involved in the investigation have not corroborated any of the reported sightings with electronic detection, and reviews of available images appear to show many of the reported drones are actually manned aircraft. They also say there have been no confirmed sightings in restricted air space. It’s also possible that a single drone has been seen and reported more than once, officials said.

    Some federal lawmakers have called on the military to “shoot down” the drones. The drones also appear to avoid detection by traditional methods such as helicopter and radio, according to a state lawmaker who was briefed by the Department of Homeland Security.

    In one case, a medevac helicopter was unable to pick up a seriously injured car accident victim in Branchburg Township in Somerset County late last month due to drones hovering near the planned landing zone, according to NJ.com. The FAA said Thursday that it does not have a report on this incident.

    Drones are legal in New Jersey for recreational and commercial use but are subject to local and FAA regulations and flight restrictions. Operators must be FAA certified.

    Witnesses say the drones they think they have seen in New Jersey appear to be larger than those typically used by hobbyists.


  • Healthy lifestyle key to avoiding diabetes: governor

Punjab Governor Sardar Saleem Haider Khan addresses a public awareness event organised by the Pakistan Society of Internal Medicine to mark World Diabetes Day at Governor’s House on November 14, 2024. — Facebook@sardarsaleemhaidergroup

LAHORE: Punjab Governor Sardar Saleem Haider Khan said that diabetes could be avoided by adopting a healthy lifestyle, adding that people in villages cannot get diabetes treatment due to a lack of resources.

He expressed these views while addressing a public awareness event organised by the Pakistan Society of Internal Medicine to mark World Diabetes Day at Governor’s House on Thursday. Addressing the ceremony, the Punjab governor said that it is the government’s responsibility to provide health and education facilities to every citizen. He said that all health centres in Punjab should offer free blood sugar testing and treatment, and that at least one hour should be set aside for exercise every day to keep the body healthy.

The Punjab governor also requested Prof Dr Javed Akram to provide free treatment to diabetic Governor’s House employees in grades one to ten, and led a walk to raise awareness about diabetes prevention. Former provincial health minister and Pakistan Society of Internal Medicine President Prof Dr Javed Akram, King Edward Medical University Vice Chancellor Mahmood Ayaz and other doctors also addressed the ceremony. Fatima Jinnah Medical University Vice Chancellor Khalid Masood Gondal, medical students and others were present at the ceremony.

    Meanwhile, Vice Chairperson Overseas Pakistanis Commission Punjab Barrister Amjad Malik called on Punjab Governor Sardar Saleem Haider Khan at Governor’s House. Barrister Amjad briefed the governor about the performance of the institution.

Speaking on the occasion, the Punjab governor said that overseas Pakistanis are an asset to the country and contribute significantly to its economy by sending remittances. He said that the doors of the Governor’s House are open to overseas Pakistanis and that no effort will be spared to solve their problems.

The governor said that whenever the motherland faced difficult times, Pakistanis living abroad always came forward to help, and that protecting the lives and property of overseas Pakistanis is the government’s first priority. Barrister Amjad Malik said that prominent overseas Pakistanis will be included in the advisory process through district overseas committees and advisories.



  • California governor signs bills to protect children from AI deepfake nudes

    SACRAMENTO, Calif. — California Gov. Gavin Newsom signed a pair of proposals Sunday aiming to help shield minors from the increasingly prevalent misuse of artificial intelligence tools to generate harmful sexual imagery of children.

    The measures are part of California’s concerted efforts to ramp up regulations around the marquee industry that is increasingly affecting the daily lives of Americans but has had little to no oversight in the United States.

Earlier this month, Newsom also signed off on some of the toughest laws in the country to tackle election deepfakes, though those laws are being challenged in court. California is widely seen as a potential leader in regulating the AI industry in the U.S.

    The new laws, which received overwhelming bipartisan support, close a legal loophole around AI-generated imagery of child sexual abuse and make it clear child pornography is illegal even if it’s AI-generated.

    Current law does not allow district attorneys to go after people who possess or distribute AI-generated child sexual abuse images if they cannot prove the materials are depicting a real person, supporters said. Under the new laws, such an offense would qualify as a felony.

    “Child sexual abuse material must be illegal to create, possess, and distribute in California, whether the images are AI generated or of actual children,” Democratic Assemblymember Marc Berman, who authored one of the bills, said in a statement. “AI that is used to create these awful images is trained from thousands of images of real children being abused, revictimizing those children all over again.”

Newsom earlier this month also signed two other bills to strengthen laws on revenge porn, with the goal of protecting more women, teenage girls and others from sexual exploitation and harassment enabled by AI tools. Under state law, it will now be illegal for an adult to create or share AI-generated sexually explicit deepfakes of a person without their consent. Social media platforms are also required to allow users to report such materials for removal.

    But some of the laws don’t go far enough, said Los Angeles County District Attorney George Gascón, whose office sponsored some of the proposals. Gascón said new penalties for sharing AI-generated revenge porn should have included those under 18, too. The measure was narrowed by state lawmakers last month to only apply to adults.

    “There has to be consequences, you don’t get a free pass because you’re under 18,” Gascón said in a recent interview.

The laws come after San Francisco brought a first-in-the-nation lawsuit against more than a dozen websites offering AI tools that promise to “undress any photo” uploaded to the site within seconds.

The problem of deepfakes isn’t new, but experts say it’s getting worse as the technology to produce them becomes more accessible and easier to use. Researchers have been sounding the alarm these past two years on the explosion of AI-generated child sexual abuse material using depictions of real victims or virtual characters.

    In March, a school district in Beverly Hills expelled five middle school students for creating and sharing fake nudes of their classmates.

    The issue has prompted swift bipartisan actions in nearly 30 states to help address the proliferation of AI-generated sexually abusive materials. Some of them include protection for all, while others only outlaw materials depicting minors.

    Newsom has touted California as an early adopter as well as regulator of AI technology, saying the state could soon deploy generative AI tools to address highway congestion and provide tax guidance, even as his administration considers new rules against AI discrimination in hiring practices.


  • California governor vetoes bill to create first-in-nation AI safety measures

    SACRAMENTO, Calif. — California Gov. Gavin Newsom vetoed a landmark bill aimed at establishing first-in-the-nation safety measures for large artificial intelligence models Sunday.

    The decision is a major blow to efforts attempting to rein in the homegrown industry that is rapidly evolving with little oversight. The bill would have established some of the first regulations on large-scale AI models in the nation and paved the way for AI safety regulations across the country, supporters said.

    Earlier this month, the Democratic governor told an audience at Dreamforce, an annual conference hosted by software giant Salesforce, that California must lead in regulating AI in the face of federal inaction but that the proposal “can have a chilling effect on the industry.”

    The proposal, which drew fierce opposition from startups, tech giants and several Democratic House members, could have hurt the homegrown industry by establishing rigid requirements, Newsom said.

    “While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data,” Newsom said in a statement. “Instead, the bill applies stringent standards to even the most basic functions — so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology.”

    Newsom on Sunday instead announced that the state will partner with several industry experts, including AI pioneer Fei-Fei Li, to develop guardrails around powerful AI models. Li opposed the AI safety proposal.

    The measure, aimed at reducing potential risks created by AI, would have required companies to test their models and publicly disclose their safety protocols to prevent the models from being manipulated to, for example, wipe out the state’s electric grid or help build chemical weapons. Experts say those scenarios could be possible in the future as the industry continues to rapidly advance. It also would have provided whistleblower protections to workers.

The bill’s author, Democratic state Sen. Scott Wiener, called the veto “a setback for everyone who believes in oversight of massive corporations that are making critical decisions that affect the safety and the welfare of the public and the future of the planet.”

    “The companies developing advanced AI systems acknowledge that the risks these models present to the public are real and rapidly increasing. While the large AI labs have made admirable commitments to monitor and mitigate these risks, the truth is that voluntary commitments from industry are not enforceable and rarely work out well for the public,” Wiener said in a statement Sunday afternoon.

    Wiener said the debate around the bill has dramatically advanced the issue of AI safety, and that he would continue pressing that point.

The legislation is among a host of bills passed by the Legislature this year to regulate AI, fight deepfakes and protect workers. State lawmakers said California must take action this year, citing hard lessons they learned from failing to rein in social media companies when they might have had a chance.

    Proponents of the measure, including Elon Musk and Anthropic, said the proposal could have injected some levels of transparency and accountability around large-scale AI models, as developers and experts say they still don’t have a full understanding of how AI models behave and why.

    The bill targeted systems that require more than $100 million to build. No current AI models have hit that threshold, but some experts said that could change within the next year.

    “This is because of the massive investment scale-up within the industry,” said Daniel Kokotajlo, a former OpenAI researcher who resigned in April over what he saw as the company’s disregard for AI risks. “This is a crazy amount of power to have any private company control unaccountably, and it’s also incredibly risky.”

    The United States is already behind Europe in regulating AI to limit risks. The California proposal wasn’t as comprehensive as regulations in Europe, but it would have been a good first step to set guardrails around the rapidly growing technology that is raising concerns about job loss, misinformation, invasions of privacy and automation bias, supporters said.

A number of leading AI companies last year voluntarily agreed to follow safeguards set by the White House, such as testing and sharing information about their models. The California bill would have required AI developers to follow requirements similar to those commitments, the measure’s supporters said.

    But critics, including former U.S. House Speaker Nancy Pelosi, argued that the bill would “kill California tech” and stifle innovation. It would have discouraged AI developers from investing in large models or sharing open-source software, they said.

    Newsom’s decision to veto the bill marks another win in California for big tech companies and AI developers, many of whom spent the past year lobbying alongside the California Chamber of Commerce to sway the governor and lawmakers from advancing AI regulations.

Two other sweeping AI proposals, which also faced mounting opposition from the tech industry and others, died ahead of a legislative deadline last month. The bills would have required AI developers to label AI-generated content and banned discrimination by AI tools used to make employment decisions.

    The governor said earlier this summer he wanted to protect California’s status as a global leader in AI, noting that 32 of the world’s top 50 AI companies are located in the state.

    He has promoted California as an early adopter as the state could soon deploy generative AI tools to address highway congestion, provide tax guidance and streamline homelessness programs. The state also announced last month a voluntary partnership with AI giant Nvidia to help train students, college faculty, developers and data scientists. California is also considering new rules against AI discrimination in hiring practices.

    Earlier this month, Newsom signed some of the toughest laws in the country to crack down on election deepfakes and measures to protect Hollywood workers from unauthorized AI use.

    But even with Newsom’s veto, the California safety proposal is inspiring lawmakers in other states to take up similar measures, said Tatiana Rice, deputy director of the Future of Privacy Forum, a nonprofit that works with lawmakers on technology and privacy proposals.

    “They are going to potentially either copy it or do something similar next legislative session,” Rice said. “So it’s not going away.”

___

    The Associated Press and OpenAI have a licensing and technology agreement that allows OpenAI access to part of AP’s text archives.


  • California governor to sign a law to protect children from social media addiction

    SACRAMENTO, Calif. — California will make it illegal for social media platforms to knowingly provide addictive feeds to children without parental consent beginning in 2027 under a bill Democratic Gov. Gavin Newsom will sign, his office said Friday.

    California will follow New York state, which passed a law earlier this year allowing parents to block their kids from getting social media posts suggested by a platform’s algorithm. Utah has passed laws in recent years aimed at limiting children’s access to social media, but they have faced challenges in court.

The California bill will take effect in a state that is home to some of the largest technology companies in the world, after similar proposals failed to pass in recent years. It is part of a growing push in states across the country to address the impacts of social media on the well-being of children.

    “Every parent knows the harm social media addiction can inflict on their children — isolation from human contact, stress and anxiety, and endless hours wasted late into the night,” Newsom said in a statement. “With this bill, California is helping protect children and teenagers from purposely designed features that feed these destructive habits.”

The bill bans platforms from sending notifications to minors without parental permission between 12 a.m. and 6 a.m., and between 8 a.m. and 3 p.m. on weekdays from September through May, when children are typically in school. The legislation also requires platforms to set children’s accounts to private by default.

    Opponents of the legislation say it could inadvertently prevent adults from accessing content if they cannot verify their age. Some argue it would threaten online privacy by making platforms collect more information on users.

    The bill defines an “addictive feed” as a website or app “in which multiple pieces of media generated or shared by users are, either concurrently or sequentially, recommended, selected, or prioritized for display to a user based, in whole or in part, on information provided by the user, or otherwise associated with the user or the user’s device,” with some exceptions.

The subject garnered renewed attention in June when U.S. Surgeon General Vivek Murthy called on Congress to require warning labels on social media platforms about their impacts on young people. Attorneys general in 42 states endorsed the plan in a letter sent to Congress last week.

    State Sen. Nancy Skinner, a Democrat representing Berkeley who authored the California bill, said after lawmakers approved the bill last month that “social media companies have designed their platforms to addict users, especially our kids.”

    “With the passage of SB 976, the California Legislature has sent a clear message: When social media companies won’t act, it’s our responsibility to protect our kids,” she said in a statement.

    ___

    Associated Press writer Trân Nguyễn contributed to this report.

    ___

    Austin is a corps member for The Associated Press/Report for America Statehouse News Initiative. Report for America is a nonprofit national service program that places journalists in local newsrooms to report on undercovered issues. Follow Austin on X: @sophieadanna




  • Oregon governor uses new land use law to propose rural land for semiconductor facility

    SALEM, Ore. — Oregon Gov. Tina Kotek is using a new land use law to propose a rural area for a semiconductor facility, as officials seek to lure more of the multibillion-dollar semiconductor industry to the state.

    Kotek has proposed expanding the city boundaries of Hillsboro, a suburb west of Portland that’s home to chip giant Intel, to incorporate half a square mile of new land for industrial development, Oregon Public Broadcasting reported. The land would provide space for a major new research center.

    Oregon, which has been a center of semiconductor research and production for decades, is competing against other states to host multibillion-dollar microchip factories.

    The CHIPS and Science Act passed by Congress in 2022 provided $39 billion for companies building or expanding facilities that will manufacture semiconductors and those that will assemble, test and package the chips.

    A state law passed last year allowed the governor to designate up to eight sites where city boundaries could be expanded to provide land for microchip companies. The law created an exemption to the state’s hallmark land use policy, which was passed in the 1970s to prevent urban sprawl and protect nature and agriculture.

    A group that supports Oregon’s landmark land use policy, Friends of Smart Growth, said in a news release that it would oppose Kotek’s proposal, OPB reported.

    “While the governor hopes this will prove a quick and relatively painless way to subvert the planning and community engagement that Oregon’s land use system is famous for,” the release said, “local and statewide watchdog groups promise a long and difficult fight to preserve the zoning protections that have allowed walkable cities, farmland close to cities, and the outdoor recreation Oregon is famous for.”

    Under the 2023 state law, Kotek must hold a public hearing on proposed expansions of so-called “urban growth boundaries” and allow a 20-day period for public comment before issuing an executive order to formally expand such boundaries. This executive power expires at the end of the year.

    The public hearing on the proposed expansion will be held in three weeks at the Hillsboro Civic Center, according to Business Oregon, the state’s economic development agency.

    The Oregon Legislature also chipped away at the state’s land use policy earlier this year in a bid to address its critical housing shortage. That law, among other things, granted a one-time exemption to cities looking to acquire new land for the purpose of building housing.


  • California governor signs laws to crack down on election deepfakes created by AI

    SACRAMENTO, Calif. — California Gov. Gavin Newsom signed three bills Tuesday to crack down on the use of artificial intelligence to create false images or videos in political ads ahead of the 2024 election.

    A new law, set to take effect immediately, makes it illegal to create and publish deepfakes related to elections 120 days before Election Day and 60 days thereafter. It also allows courts to stop distribution of the materials and impose civil penalties.

    “Safeguarding the integrity of elections is essential to democracy, and it’s critical that we ensure AI is not deployed to undermine the public’s trust through disinformation -– especially in today’s fraught political climate,” Newsom said in a statement. “These measures will help to combat the harmful use of deepfakes in political ads and other content, one of several areas in which the state is being proactive to foster transparent and trustworthy AI.”

    Large social media platforms are also required to remove the deceptive material under a first-in-the-nation law set to be enacted next year. Newsom also signed a bill requiring political campaigns to publicly disclose if they are running ads with materials altered by AI.

    The governor signed the bills at an event hosted by Salesforce, a major software company, in San Francisco.

The new laws reaffirm California’s position as a leader in regulating AI in the U.S., especially in combating election deepfakes. In 2019, the state became the first in the U.S. to ban manipulated videos and pictures related to elections. Measures on technology and AI proposed by California lawmakers have been used as blueprints for legislators across the country, industry experts said.

    With AI supercharging the threat of election disinformation worldwide, lawmakers across the country have raced to address the issue over concerns the manipulated materials could erode the public’s trust in what they see and hear.

    “With fewer than 50 days until the general election, there is an urgent need to protect against misleading, digitally-altered content that can interfere with the election,” Assemblymember Gail Pellerin, author of the law banning election deepfakes, said in a statement. “California is taking a stand against the manipulative use of deepfake technology to deceive voters.”

    Newsom’s decision followed his vow in July to crack down on election deepfakes in response to a video posted by X-owner Elon Musk featuring altered images of Vice President and Democratic presidential nominee Kamala Harris.

The new California laws come the same day as members of Congress unveiled federal legislation aiming to stop election deepfakes. The bill would give the Federal Election Commission the power to regulate the use of AI in elections in the same way it has regulated other political misrepresentation for decades. The FEC has started to consider such regulations after AI-generated robocalls aimed at discouraging voters were outlawed in February.

    Newsom has touted California as an early adopter as well as regulator, saying the state could soon deploy generative AI tools to address highway congestion and provide tax guidance, even as his administration considers new rules against AI discrimination in hiring practices.

He also signed two other bills Tuesday to protect Hollywood performers from unauthorized use of AI without their consent.


  • California governor signs laws to protect actors against unauthorized use of AI

SACRAMENTO, Calif. — California Gov. Gavin Newsom signed off Tuesday on legislation aimed at protecting Hollywood actors and performers against unauthorized artificial intelligence that could be used to create digital clones of them without their consent.

The new laws come as California legislators have ramped up efforts this year to regulate the marquee industry that is increasingly affecting the daily lives of Americans but has had little to no oversight in the United States.

    The laws also reflect the priorities of the Democratic governor who’s walking a tightrope between protecting the public and workers against potential AI risks and nurturing the rapidly evolving homegrown industry.

    “We continue to wade through uncharted territory when it comes to how AI and digital media is transforming the entertainment industry, but our North Star has always been to protect workers,” Newsom said in a statement. “This legislation ensures the industry can continue thriving while strengthening protections for workers and how their likeness can or cannot be used.”

    Inspired by the Hollywood actors’ strike last year over low wages and concerns that studios would use AI technology to replace workers, a new California law will allow performers to back out of existing contracts if vague language might allow studios to freely use AI to digitally clone their voices and likeness. The law is set to take effect in 2025 and has the support of the California Labor Federation and the Screen Actors Guild-American Federation of Television and Radio Artists, or SAG-AFTRA.

    Another law signed by Newsom, also supported by SAG-AFTRA, prevents dead performers from being digitally cloned for commercial purposes without the permission of their estates. Supporters said the law is crucial to curb the practice, citing the case of a media company that produced a fake, AI-generated hourlong comedy special to recreate the late comedian George Carlin’s style and material without his estate’s consent.

    “It is a momentous day for SAG-AFTRA members and everyone else because the AI protections we fought so hard for last year are now expanded upon by California law thanks to the legislature and Governor Gavin Newsom,” SAG-AFTRA President Fran Drescher said in a statement. “They say as California goes, so goes the nation!”

    California is among the first states in the nation to establish performer protection against AI. Tennessee, long known as the birthplace of country music and the launchpad for musical legends, led the country by enacting a similar law to protect musicians and artists in March.

    Supporters of the new laws said they will help encourage responsible AI use without stifling innovation. Opponents, including the California Chamber of Commerce, said the new laws are likely unenforceable and could lead to lengthy legal battles in the future.

The two new laws are among a slew of measures passed by lawmakers this year in an attempt to rein in the AI industry. Newsom signaled in July that he would sign a proposal to crack down on election deepfakes but has not weighed in on other legislation, including a bill that would establish first-in-the-nation safety measures for large AI models.

    The governor has until Sept. 30 to sign the proposals, veto them or let them become law without his signature.
