A new Louisiana law set to go into effect on August 1 will criminalize the production and possession of deepfakes depicting the sexual abuse of children.
Senate Bill 175 (SB175), signed into law by Governor John Bel Edwards, stipulates that anyone convicted of creating, distributing, or possessing unlawful deepfake images depicting minors faces a mandatory five to 20 years in prison, a fine of up to $10,000, or both.
Deepfakes are AI-generated images or videos that convincingly fabricate people, places, and events. They pose a growing challenge for cybersecurity and law enforcement, as advances in AI make them increasingly difficult to detect.
Louisiana—which ranks 49th out of 50 states in child well-being and second in poverty—joins several other U.S. states, including California, Texas, and Virginia, that have regulated or outlawed deepfakes.
Another section of SB175, aimed at online platforms and sites that host so-called “revenge porn,” states that anyone who knowingly advertises, distributes, or sells sexual deepfakes made without a person’s consent, or that feature a minor, could face a mandatory 10 to 30 years in prison, a fine of up to $50,000, or both.
If time behind bars and a hefty fine were not enough, Louisiana lawmakers made sure to specify that any sentence issued under the new law must be served at “hard labor.”
In May, deepfakes of child murder victims went viral on social media after being uploaded to TikTok. One of the clips featured an AI-generated recreation of Royalty Marie Floyd, whose grandmother was charged with her murder in 2018.
In June, UN Secretary-General António Guterres warned about the use of AI and deepfakes on social media to fuel hate and violence in conflict zones.
“We’re getting into an era where we can no longer believe what we see,” Marko Jak, co-founder and CEO of Austin-based Secta Labs, told Decrypt in an interview. “Right now, it’s easier because the deepfakes are not that good yet, and sometimes you can see it’s obvious.”
Law enforcement agencies have already sounded the alarm on criminals using deepfakes for scams and extortion. Last month, the U.S. Federal Bureau of Investigation said it continues to receive reports from victims, including minors, whose photos and videos had been used to create explicit content.
Citing the potential for misuse of Voicebox, its latest AI voice-generation platform, Meta said it would not release the model to the public.
“While we believe it is important to be open with the AI community and to share our research to advance the state of the art in AI,” a Meta spokesperson told Decrypt in an email, “it’s also necessary to strike the right balance between openness with responsibility.”