The U.S. Federal Election Commission has moved forward a petition to prohibit the use of artificial intelligence in campaign ads leading into the 2024 election season. The agency is now seeking public comments on the petition before proceeding with “full rulemaking.”
In July, the FEC said it received a petition asking for new rules surrounding the use of AI-generated content during elections. The petition specifically called upon the FEC to amend regulations regarding fraudulent misrepresentation of “campaign authority” and clarify that the prohibition applies to deliberately deceptive AI campaign advertisements.
“The deepfake is fraudulent because the deepfaked candidate, in fact, did not say or do what is depicted by the deepfake and because the deepfake aims to deceive the public,” the petition said.
A deepfake is an increasingly common type of video or audio content, created with artificial intelligence, that convincingly depicts false events and can be very difficult to identify as fake.
An FEC spokesperson told Decrypt that more comments are being sought, and referenced two prior hearings on July 13 and August 10, where members of the commission discussed and heard testimony on the petition.
While the petition was advanced unanimously, some commissioners voiced concern over the precedent it might set.
“There are serious First Amendment concerns lurking in the background of this effort,” FEC Commissioner Allen Dickerson said during an open FEC meeting last week. “Precision of regulation is a requirement in our work. And if the commission has authority to act in this area, I hope that commentators will also demonstrate that it is possible to tailor a regulation to truly fraudulent activity without slowing protected expression.”
The petition, the FEC explained, claims that generative AI and deepfake technology are being “used to create convincing images, audio, and video hoaxes.” Recent examples of AI-generated deepfakes emerging online appear to support the petition’s claim.
The FEC’s statutes, Dickerson said, prohibit a person from fraudulently misrepresenting themselves as acting for or on behalf of another candidate, but not from misrepresenting the candidate themselves.
“The statute is carefully limited and is directed at fraudulent agency,” Dickerson said. “In other words, it’s directed at fraudulently pretending that you yourself represent or work for another candidate. It does not reach fraudulently claiming that your opponent said or did something that he or she did not do.”
In May, a GOP campaign video released on YouTube used AI-generated images to depict the aftermath of a potential reelection of U.S. President Joe Biden. That video came after the campaign of Donald Trump used AI deepfakes to troll Florida Governor and rival for the GOP nomination Ron DeSantis following a rocky start to his presidential campaign.
In June, the United Nations sounded the alarm on the potential use of AI-generated deepfakes on social media, particularly in conflict zones where the deceptive images could fuel hate and violence.
The threat of AI-generated deepfakes has even prompted Pope Francis to address the technology in an upcoming message for World Peace Day.
Last month, a Los Angeles-based political satirist, Justin Brown, came under fire for posting AI-generated images that showed Donald Trump, Barack Obama, and Joe Biden cheating on their spouses. The images were all fake but showed the power of generative AI to create lifelike replicas of prominent people.
The Federal Election Commission is asking for public comments on the petition before moving forward with any changes to campaign rules. The FEC says comments must be submitted within 60 days of the petition’s publication in the Federal Register.