Last week, an undisclosed number of girls at a New Jersey high school learned that one or more students at their school had used an artificial intelligence tool to generate what appeared to be nude images of them. Worse, those images, which used at least a dozen photographs of girls on campus, were being shared among some boys at the school in group chats. There’s an ongoing investigation, the local police are involved, and counseling has been offered to affected students. If you think there ought to be a federal law against such harmful exploitation of underage victims, or even adults, for that matter, I agree with you. Sadly, there is no such crime that covers AI-generated nudes.
So-called deepfake photos and videos are proliferating almost as fast as people can download the software that generates them. In addition to creating fictionalized products that don’t resemble particular people, deepfakes can take the face, voice or partial image of a real person and meld it with other imagery so that it looks or sounds as if it depicts that person. Last spring, a deepfake photo of Pope Francis wearing a stylish Balenciaga puffer coat went viral. The pope might be fashionable, but that wasn’t him in the image.
In 2019, researchers concluded that 96% of the 14,000 online deepfake videos they found were pornographic. Genevieve Oh, an independent internet researcher who has tracked the rise of deepfakes, told The Washington Post that 143,000 videos on 40 popular websites for fakes had been viewed 4.2 billion times.
From high school students to YouTube influencers and celebrities, the list of victims continues to grow, and the FBI warns that its “sextortion” caseload is expanding.
Across the internet, especially in the rank recesses of the dark web, pedophiles are having a field day with AI technology. Importantly, AI-generated child pornography includes some images borrowed from known, identifiable children, as well as wholly concocted fabrications based on thousands of digitized images strewn across the web. While the U.S. Justice Department claims this kind of content would be prosecutable under existing federal child pornography laws that cover drawings and cartoons depicting minors engaged in explicit sex, it can’t point to a single prosecution for AI child porn under this theory. Even if the Justice Department were to pursue someone for producing or possessing such material, it would have to prove that the images met the required definition of child porn as depicting minors engaged in explicit sex acts.
Federal child porn laws also wouldn’t help the high school students in New Jersey if they were portrayed as nude but not engaging in graphic conduct. Nor does that legal distinction offer much solace if the nonconsensual images of these victims live forever online to haunt them as they apply to colleges and jobs. We need a federal law to fill this widening gap between outdated legislation and high-tech reality.
That’s why the attorneys general of all 50 states are calling on Congress to do something. It’s also why President Joe Biden, in his Oct. 30 executive order addressing the need to understand and regulate AI, directed his administration to seek ways to prevent generative AI from producing child sexual abuse material or nonconsensual intimate imagery of real people, including intimate digital depictions of the body or body parts of an identifiable person.
In June, Rep. Joe Morelle, D-N.Y., introduced the Preventing Deepfakes of Intimate Images Act in an attempt to address the continued use of AI to produce nude depictions of real victims. The legislation is a step in the right direction toward reining in the use of AI to hurt people.
Nine states, including California, Texas and Virginia, have laws that offer some legal recourse for victims of AI-generated images. Some states allow civil lawsuits by victims, others have criminalized such conduct, and still others are considering mandating digital watermarks that would help trace images back to their producers. Some internet platforms also facilitate takedown requests and say they remove content when a verified victim asks.
All of these options, including a requirement that manufacturers of AI image-making software automatically insert a “deepfake” label on fabricated content, must be explored now by Congress and federal agencies before additional victims are faced with the horror of having this technology used against them.
AI-generated porn doesn’t victimize its targets just once. These victims are violated every time someone clicks on the images. If nine states have figured this out, then our federal government should be able to, as well. This is a rare bipartisan issue that lawmakers should seize upon to do some good before bad actors use AI to wreak more havoc on more innocent people’s lives.