
Tag: abuse

  • British man sentenced to 18 years for using AI to make child sexual abuse imagery

    LONDON — A British man who used artificial intelligence to create images of child abuse was sent to prison for 18 years on Monday.

    The court sentenced Hugh Nelson, 27, after he pleaded guilty to a number of sexual offenses, including making and distributing indecent images of children and distributing “indecent pseudo photographs of children.” He also admitted to encouraging the rape of a child.

    Nelson took commissions from people in online chatrooms for custom explicit images of children being harmed both sexually and physically.

    Police in Manchester, in northern England, said he used AI software from a U.S. company, Daz 3D, that has an “AI function” to generate images that he both sold to online buyers and gave away for free. The police force said it was a landmark case for its online child abuse investigation team.

    The company said the licensing agreement for its Daz Studio 3D rendering software prohibits its use for creating images that “violate child pornography or child sexual exploitation laws, or are otherwise harmful to minors.”

    “We condemn the misuse of any software, including ours, for such purposes, and we are committed to continuously improving our ability to prevent it,” Daz 3D said in a statement, adding that its policy is to assist law enforcement “as needed.”

    Bolton Crown Court, near Manchester, heard that Nelson, who has a master’s degree in graphics, also used images of real children for some of his computer-generated artwork.

    Judge Martin Walsh said it was impossible to determine whether any child had been sexually abused as a result of the images, but that Nelson had intended to encourage others to commit child rape and had “no idea” how the images he created would be used.

    Nelson, who had no previous convictions, was arrested last year. He told police he had met like-minded people on the internet and eventually began to create images for sale.

    Prosecutor Jeanette Smith said outside court that it was “extremely disturbing” that Nelson was able to “take normal photographs of children and, using AI tools and a computer program, transform them and create images of the most depraved nature to sell and share online.”

    Prosecutors have said the case stemmed from an investigation into AI and child sexual exploitation, while police said it tested existing legislation: using computer programs the way Nelson did is so new that it isn’t specifically mentioned in current U.K. law.

    The case mirrors similar efforts by U.S. law enforcement to crack down on a troubling spread of child sexual abuse imagery created through artificial intelligence technology — from manipulated photos of real children to graphic depictions of computer-generated kids. The Justice Department recently brought what’s believed to be the first federal case involving purely AI-generated imagery — meaning the children depicted are not real but virtual.

  • AI-generated child sexual abuse images are spreading. Law enforcement is racing to stop them

    WASHINGTON — A child psychiatrist who altered a first-day-of-school photo he saw on Facebook to make a group of girls appear nude. A U.S. Army soldier accused of creating images depicting children he knew being sexually abused. A software engineer charged with generating hyper-realistic sexually explicit images of children.

    Law enforcement agencies across the U.S. are cracking down on a troubling spread of child sexual abuse imagery created through artificial intelligence technology — from manipulated photos of real children to graphic depictions of computer-generated kids. Justice Department officials say they’re aggressively going after offenders who exploit AI tools, while states are racing to ensure people generating “deepfakes” and other harmful imagery of kids can be prosecuted under their laws.

    “We’ve got to signal early and often that it is a crime, that it will be investigated and prosecuted when the evidence supports it,” Steven Grocki, who leads the Justice Department’s Child Exploitation and Obscenity Section, said in an interview with The Associated Press. “And if you’re sitting there thinking otherwise, you fundamentally are wrong. And it’s only a matter of time before somebody holds you accountable.”

    The Justice Department says existing federal laws clearly apply to such content, and recently brought what’s believed to be the first federal case involving purely AI-generated imagery — meaning the children depicted are not real but virtual. In another case, federal authorities in August arrested a U.S. soldier stationed in Alaska accused of running innocent pictures of real children he knew through an AI chatbot to make the images sexually explicit.

    The prosecutions come as child advocates are urgently working to curb the misuse of technology to prevent a flood of disturbing images officials fear could make it harder to rescue real victims. Law enforcement officials worry investigators will waste time and resources trying to identify and track down exploited children who don’t really exist.

    Lawmakers, meanwhile, are passing a flurry of legislation to ensure local prosecutors can bring charges under state laws for AI-generated “deepfakes” and other sexually explicit images of kids. Governors in more than a dozen states have signed laws this year cracking down on digitally created or altered child sexual abuse imagery, according to a review by The National Center for Missing & Exploited Children.

    “We’re playing catch-up as law enforcement to a technology that, frankly, is moving far faster than we are,” said Ventura County, California, District Attorney Erik Nasarenko.

    Nasarenko pushed legislation, signed last month by Gov. Gavin Newsom, that makes clear AI-generated child sexual abuse material is illegal under California law. Nasarenko said his office could not prosecute eight cases involving AI-generated content between last December and mid-September because California’s law had required prosecutors to prove the imagery depicted a real child.

    AI-generated child sexual abuse images can be used to groom children, law enforcement officials say. And even if they aren’t physically abused, kids can be deeply impacted when their image is morphed to appear sexually explicit.

    “I felt like a part of me had been taken away. Even though I was not physically violated,” said 17-year-old Kaylin Hayman, who starred on the Disney Channel show “Just Roll with It” and helped push the California bill after she became a victim of “deepfake” imagery.

    Hayman testified last year at the federal trial of the man who digitally superimposed her face and those of other child actors onto bodies performing sex acts. He was sentenced in May to more than 14 years in prison.

    Open-source AI models that users can download on their computers are known to be favored by offenders, who can further train or modify the tools to churn out explicit depictions of children, experts say. Abusers trade tips in dark web communities about how to manipulate AI tools to create such content, officials say.

    A report last year by the Stanford Internet Observatory found that a research dataset that was the source for leading AI image-makers such as Stable Diffusion contained links to sexually explicit images of kids, contributing to the ease with which some tools have been able to produce harmful imagery. The dataset was taken down, and researchers later said they deleted more than 2,000 web links to suspected child sexual abuse imagery from it.

    Top technology companies, including Google, OpenAI and Stability AI, have agreed to work with anti-child sexual abuse organization Thorn to combat the spread of child sexual abuse images.

    But experts say more should have been done at the outset to prevent misuse before the technology became widely available. And steps companies are taking now to make it harder to abuse future versions of AI tools “will do little to prevent” offenders from running older versions of models on their computer “without detection,” a Justice Department prosecutor noted in recent court papers.

    “Time was not spent on making the products safe, as opposed to efficient, and it’s very hard to do after the fact — as we’ve seen,” said David Thiel, the Stanford Internet Observatory’s chief technologist.

    The National Center for Missing & Exploited Children’s CyberTipline last year received about 4,700 reports of content involving AI technology — a small fraction of the more than 36 million total reports of suspected child sexual exploitation. By October of this year, the group was fielding about 450 reports per month of AI-involved content, said Yiota Souras, the group’s chief legal officer.

    Those numbers may be an undercount, however, as the images are so realistic it’s often difficult to tell whether they were AI-generated, experts say.

    “Investigators are spending hours just trying to determine if an image actually depicts a real minor or if it’s AI-generated,” said Rikole Kelly, deputy Ventura County district attorney, who helped write the California bill. “It used to be that there were some really clear indicators … with the advances in AI technology, that’s just not the case anymore.”

    Justice Department officials say they already have the tools under federal law to go after offenders for such imagery.

    The U.S. Supreme Court in 2002 struck down a federal ban on virtual child sexual abuse material. But a federal law signed the following year bans the production of visual depictions, including drawings, of children engaged in sexually explicit conduct that are deemed “obscene.” That law, which the Justice Department says has been used in the past to charge cartoon imagery of child sexual abuse, specifically notes there’s no requirement “that the minor depicted actually exist.”

    The Justice Department brought that charge in May against a Wisconsin software engineer accused of using the AI tool Stable Diffusion to create photorealistic images of children engaged in sexually explicit conduct. Authorities say he was caught after he sent some of the images to a 15-year-old boy through a direct message on Instagram. The man’s lawyer, who is pushing to dismiss the charges on First Amendment grounds, declined further comment on the allegations in an email to the AP.

    A spokesperson for Stability AI said the man is accused of using an earlier version of the tool that was released by another company, Runway ML. Stability AI said it has “invested in proactive features to prevent the misuse of AI for the production of harmful content” since taking over exclusive development of the models. A spokesperson for Runway ML didn’t immediately respond to a request for comment from the AP.

    In cases involving “deepfakes,” when a real child’s photo has been digitally altered to make them sexually explicit, the Justice Department is bringing charges under the federal “child pornography” law. In one case, a North Carolina child psychiatrist who used an AI application to digitally “undress” girls posing on the first day of school in a decades-old photo shared on Facebook was convicted of federal charges last year.

    “These laws exist. They will be used. We have the will. We have the resources,” Grocki said. “This is not going to be a low priority that we ignore because there’s not an actual child involved.”

  • ‘A target on their back’: college athletes face wave of abuse amid gambling boom | Gambling

    College athletes are facing “significant abuse” amid a surge in harassment unleashed by America’s gambling boom, according to US sports officials who say students are increasingly subject to death threats, harassment and demands for money.

    A handful of state regulators have moved to ban legal gambling platforms from offering certain types of bets on collegiate sports as a result of the “inherently problematic” surge in harassment of college athletes online, at venues and in dorms.

    But some of the gambling sector’s biggest players have lobbied against these moves, according to documents seen by the Guardian – claiming such restrictions pose a “far more significant” risk.

    The National Collegiate Athletic Association (NCAA) is calling for a ban on “proposition bets” – prop bets – linked to specific student athletes. These side wagers – the first player to score a touchdown, for example – are not directly tied to a game’s final result.

    For student athletes as young as 18, prop bets “put a target on their back”, said Clint Hangebrauck, managing director of enterprise risk management at the NCAA, and leave them “much more susceptible to receive harassment”.

    As gambling has exploded across college campuses, harassment has “really steadily increased in almost direct correlation with the steady increase of legalized sports betting in America”, Hangebrauck said in an interview. “It’s really been an unfortunate growing phenomenon.”

    Officials are particularly concerned about the safety of athletes and the integrity of games, beyond the bright lights of collegiate sports’ top tiers. “This is not just happening at the elite levels,” said John Parsons, interim senior vice-president of the NCAA’s Sport Science Institute. “This is happening across all of our divisions.”

    Sports betting is now legal in 38 states. Each time a state has legalized the activity, the NCAA has seen a notable increase in students and coaches in the region facing abuse, according to Hangebrauck.

    He described a wave of “highly negative and critical messages” aimed at students, officials and coaches that he claimed had a “direct” link to sports betting. “There’s no doubt the nexus of how this abuse is generated is somebody angry because they lost a bet.”

    ‘They don’t deserve that’

    Four states have this year banned prop bets on specific collegiate athletes. Ohio was first, in February.

    The previous year, a prominent college basketball coach in the state had broken from his typical postgame recap to issue a blunt intervention. “I have to say something because I think it’s just necessary at this point,” Anthony Grant, coach of the Dayton Flyers, remarked at a press conference in January 2023.

    Urging fans to remember “we’re dealing with 18, 21, 22 year-olds”, he said: “There’s some laws that have recently been enacted, that really to me – it could really change the landscape of what college sports is all about. And when we have people that make it about themselves and attack kids because of their own agenda, it sickens me. They have families. They don’t deserve that. Mental health is real.”

    Grant was prompted to speak out when, following a game, his team received a torrent of abuse on social media from bettors. Sports gambling had been legal in Ohio for just 16 days.

    This case did not prove to be an outlier. The Ohio Casino Control Commission started to hear “a lot” about student athletes “getting Venmo requests from their peers when they lost a game, or didn’t make a free throw”, Amanda Blackford, its director of operations, told the Guardian.

    There has “certainly been a shift” since sports betting was legalized, according to Blackford. “Social media’s always been rough for athletes,” she said. “But it was never about money, or the bets they were making.”

    Betting firms push back

    After a request from the NCAA, the Ohio commission scrutinized the prop betting market around collegiate sports and concluded a ban on student-specific prop bets would be a sound trade-off, Blackford explained: between a “hopefully minimal” impact on sports betting firms’ profits, and a “potentially significantly larger” impact on the safety and wellbeing of young athletes.

    But the industry pushed back. A string of legal gambling firms lobbied the state to reconsider.

    Penn Entertainment, which struck a deal with Disney to wrap ESPN, the biggest brand in US sports broadcasting, around its wagering platform, cautioned that a ban “may serve only to push these wagers” to the illegal market, which it said would pose a “far more significant” risk than the status quo.

    In a joint letter BetMGM, DraftKings, FanDuel and Fanatics – four of America’s dominant sports betting groups – suggested a ban “could in fact increase” problems. College athletes and their sports are “better protected in the light of licensed sports wagering than in the darkness of illegal gambling”, the firms argued.

    Penn Entertainment and BetMGM did not respond to requests for comment on whether they commissioned any analysis which prompted these warnings. FanDuel, DraftKings and Fanatics declined to comment.

    Hangebrauck expressed skepticism over the warnings. “If you have data that supports that, pray tell,” he said. “We really haven’t seen anything that’s supportive of it.”

    Ohio plowed ahead, confident that removing prop bets on students from legal gambling platforms would reduce harassment. “Having it as an illegal activity hopefully means they don’t feel like they can openly come after athletes in the way that they have,” said Blackford.

    With the latest college football season now in full swing, the operators did not comment on whether problems had increased as a result of Ohio’s ban, as they had cautioned could happen.

    ‘We can’t put our head in the sand’

    Do betting firms agree that harassment has worsened since legalization? “Individuals who harass athletes, amateur or professional, over a sports bet should not be tolerated,” said Joe Maloney, senior vice-president at the American Gaming Association, a gambling industry lobby group. “Importantly, the legal sports wagering market is providing the transparency critical to discuss solutions to reducing player harassment for the first time – an opportunity illegal market actors do not provide.”

    After Ohio acted, Maryland, Vermont and Louisiana introduced their own bans on student-based prop bets in college sports.

    Unlike these states, Massachusetts did not allow such wagers when it legalized sports betting in the first place. “These are kids,” Jordan Maynard, interim chair of the state’s gaming commission, said this summer.

    At a conference organized in July by the National Council on Problem Gambling, Maynard gave a frank assessment of gambling’s impact on college sports. “We’ve all been at these games. Don’t lie – to yourself, or to anybody else,” he said. “The people screaming at these kids, this has gotten worse since sports wagering passed … We can’t put our head in the sand and say it’s not an issue.”

    But operators, the moderator suggested, would likely argue that banning legal prop bets on collegiate athletes will drive gamblers to illegal sports books. “I have a lot of thoughts on the boogie man,” Maynard replied.

    ‘I hope your dog gets cancer’

    “Even if you just go to a game, it’s so prevalent now that you just overlook it,” said Ricardo Hill, basketball coach at Indian Hill high school in Ohio. “You can hear it at every game.”

    His former players, having reached college, are now grappling with the impact of gambling on their sports. Several have described to him “how the fans are harassing them”, Hill told the Guardian.

    In a new statewide campaign, collegiate athletes in Ohio read out the messages they have been sent. “You deserve to get unalive for blowing my bet,” said one received by Tyler, a pre-law student. “You cost me two grand,” read a message sent to another student, “I hope your dog gets cancer.”

    Officials hope the campaign, More Than a Bet, will make gamblers think twice before sending abusive messages.

    Hill has seen enough. As far as he’s concerned, gambling and college sports should not go together. “It’s too dangerous and too risky for the collegiate athletes.”

    “Sports betting is a billion-dollar industry,” he said. “That’s what’s driving the changes. Unfortunately the athletes are on the bottom of the totem in decision making.”

    ‘It’s not something we condone’

    As the Guardian reported this story, the Responsible Online Gaming Association (ROGA) – a new body formed by betting firms – announced plans to roll out an education program next year, with videos and events for students.

    Several operators that declined to comment on the issues unfolding around college sports referred the Guardian to ROGA. Does the association – or the gambling companies behind it – agree with the NCAA, regulators, coaches and students who say harassment has increased markedly since sports betting’s legalization?

    “I don’t know we have enough information to make that judgment,” said Jennifer Shatley, executive director of ROGA. “I will say that perception does point to the importance of responsible gaming, and having these types of programs in the first place.”

    Harassment of student athletes “is sort of outside the realm of what we’re doing”, she added. “However, I will say obviously it’s not something we condone.”

    Operators have invested in ads and marketing around “responsible gaming”, reminding gamblers to bet responsibly. Critics argue this approach overlooks those at risk of developing gambling problems, and shifts responsibility away from the industry.

    “Everybody involved in legalized gambling has some responsibility – be it governments, be it operators, be it the players,” said Shatley. “Everybody has a shared responsibility.

    “So it’s really [about] making sure we’re all fulfilling our own responsibilities. But absolutely, everyone that’s involved in the industry has a responsibility.”

  • Nearly half of women fans in England and Wales suffer sexist abuse at soccer matches

    LONDON — Close to half of women soccer fans in England and Wales have personally experienced sexist or misogynistic abuse at matches, but most have never reported it to authorities, a new study by anti-discrimination charity Kick It Out revealed on Wednesday.

    Wolf-whistling, being questioned about their knowledge of the rules and persistent badgering were some of the forms of sexist behaviour experienced by the 1,502 people surveyed. Of those, 7 per cent said they had been touched inappropriately, 3 per cent had suffered physical violence and 2 per cent had been sexually assaulted or harassed.

    Although the research showed sexism was still a significant issue for female match-goers, as well as for non-binary fans, 77 per cent said they felt safe attending matches and four in 10 stated their experiences had been improving over time.

    But ethnic minorities, members of the LGBTQ community, those with disabilities and younger people were more likely to feel unsafe and experience sexism in a soccer setting, the research said.

    The vast majority of those surveyed, 85 per cent, said they had never reported the abuse, mostly because they didn’t think it would make a difference.

    “Football needs to step up to ensure sexism is taken seriously and that women feel safe and confident to report discrimination,” said Hollie Varney, from Kick It Out. “We’ve seen reports of sexism to Kick It Out increase significantly in recent seasons.”

    The research has also highlighted the use of sexist language, with 53 per cent of respondents saying they had experienced or witnessed women being told that they should be elsewhere, such as “back in the kitchen”.

    Using the research data, Kick It Out has launched a campaign to ensure women fans know sexist abuse is discrimination and can be reported, and to show male fans how they can challenge those behaviours when they see them.

    Reports of sexism in soccer go beyond fans’ experiences. In 2014, a female employee exposed sexist emails that then Premier League chief executive Richard Scudamore had sent to friends, forcing him to apologise.

    In 2018, the English Football Association was forced to apologise after it was accused of sexism for sharing a picture on X of the England women’s soccer team with the caption: “Scrub up well, don’t they?”

    In Spain, former soccer federation chief Luis Rubiales will stand trial over his unsolicited kiss of women’s national team player Jenni Hermoso in August last year. For many players and fans, the case showed that, despite progress in the women’s game, more structural change was needed.

  • Child abuse images removed from AI image-generator training source, researchers say

    Artificial intelligence researchers said Friday they have deleted more than 2,000 web links to suspected child sexual abuse imagery from a database used to train popular AI image-generator tools.

    The LAION research database is a huge index of online images and captions that’s been a source for leading AI image-makers such as Stable Diffusion and Midjourney.

    But a report last year by the Stanford Internet Observatory found it contained links to sexually explicit images of children, contributing to the ease with which some AI tools have been able to produce photorealistic deepfakes that depict children.

    That December report led LAION, which stands for the nonprofit Large-scale Artificial Intelligence Open Network, to immediately remove its dataset. Eight months later, LAION said in a blog post that it worked with the Stanford University watchdog group and anti-abuse organizations in Canada and the United Kingdom to fix the problem and release a cleaned-up database for future AI research.

    Stanford researcher David Thiel, author of the December report, commended LAION for significant improvements but said the next step is to withdraw from distribution the “tainted models” that are still able to produce child abuse imagery.

    One of the LAION-based tools that Stanford identified as the “most popular model for generating explicit imagery” — an older and lightly filtered version of Stable Diffusion — remained easily accessible until Thursday, when the New York-based company Runway ML removed it from the AI model repository Hugging Face. Runway said in a statement Friday it was a “planned deprecation of research models and code that have not been actively maintained.”

    The cleaned-up version of the LAION database comes as governments around the world are taking a closer look at how some tech tools are being used to make or distribute illegal images of children.

    San Francisco’s city attorney earlier this month filed a lawsuit seeking to shut down a group of websites that enable the creation of AI-generated nudes of women and girls. The alleged distribution of child sexual abuse images on the messaging app Telegram is part of what led French authorities to bring charges on Wednesday against the platform’s founder and CEO, Pavel Durov.

    Durov’s arrest “signals a really big change in the whole tech industry that the founders of these platforms can be held personally responsible,” said David Evan Harris, a researcher at the University of California, Berkeley, who recently reached out to Runway asking why the problematic AI image-generator was still publicly accessible. It was taken down days later.
