Bipartisan Bill Seeks to Combat AI Deepfake Misuse with Watermarking and Provenance

Bipartisan Senators introduce the COPIED Act to regulate AI deepfake content with watermarking and provenance requirements.

A bipartisan bill introduced by Senators Maria Cantwell (D-WA), Marsha Blackburn (R-TN), and Martin Heinrich (D-NM) aims to address the growing misuse of artificial intelligence (AI) deepfakes. The bill would set new standards for AI-generated content, requiring watermarking and provenance tracking that give users clear information about a work’s origin and help original creators protect their material.

COPIED Act: A Step Towards Transparency

The proposed law, titled the Content Origin Protection and Integrity from Edited and Deepfaked Media (COPIED) Act, aims to establish a clear standard for watermarking AI-generated content. According to Senator Cantwell, the bill will “guarantee the much-needed transparency” of AI-generated content and allow “creators, including local journalists, artists, and musicians, to have control over their content”.

Enforcement and Oversight

Should the bill become law, providers of artificial intelligence services will be required to attach provenance information to content in a machine-readable form, and will be barred from using AI-based tools to remove or tamper with that information. The Federal Trade Commission (FTC) will be in charge of enforcing the COPIED Act, meaning violations will be treated like other breaches of the FTC Act, as unfair or deceptive acts.
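To make the idea of machine-readable, tamper-evident provenance concrete, here is a minimal sketch in Python. Everything in it (the record fields, the key, the function names) is hypothetical and illustrative; the bill does not specify a format, and real systems such as C2PA use far more elaborate signed manifests.

```python
import hashlib
import hmac
import json

# Illustrative signing key only; a real provider would use proper key management.
SECRET_KEY = b"demo-provenance-key"

def attach_provenance(content: bytes, origin: str) -> dict:
    """Build a hypothetical machine-readable provenance record for content."""
    record = {
        "origin": origin,                                  # who generated the content
        "sha256": hashlib.sha256(content).hexdigest(),     # fingerprint of the content
    }
    payload = json.dumps(record, sort_keys=True).encode()
    # Sign the record so later tampering with it can be detected.
    record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance(content: bytes, record: dict) -> bool:
    """Check that the content matches the record and the record is unaltered."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, record.get("signature", ""))
            and hashlib.sha256(content).hexdigest() == record["sha256"])

media = b"AI-generated image data"
rec = attach_provenance(media, "ExampleAI v1")
print(verify_provenance(media, rec))             # True: content and record intact
print(verify_provenance(b"edited data", rec))    # False: content no longer matches
```

The point of the sketch is the enforcement hook: because the record is both machine-readable and signed, any tool that strips or alters the provenance breaks verification, which is the kind of tampering the bill would prohibit.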

Rising Concerns Over AI Misuse

Artificial intelligence has sparked heated debate over its ethical implications, particularly the technology’s capacity to scrape vast volumes of data from the Internet. Among other flashpoints, these concerns were underscored when Microsoft gave up its observer seat on OpenAI’s board amid growing scrutiny.

Sen. Blackburn called out these risks, saying, “The malicious use of artificial intelligence has provided individuals with the ability to make deepfakes of literally anybody without their consent, in the process of which the actors are able to make an illegal profit off their content”.

Deepfake Fraud on the Rise

The bill arrives amid a surge in frauds and scams built on deepfake content. One report from Bitget estimates that losses from these criminal schemes could reach $10 billion by 2025. Notably, the crypto sector appears to be the most frequent target: scammers use deepfake videos of figures such as Elon Musk and Vitalik Buterin to lure victims onto fraudulent platforms.

OKX, a Hong Kong-based crypto exchange, had one of its clients swindled out of $2.5 million in a deepfake attack. Furthermore, Hong Kong authorities recently busted a scam platform that used Elon Musk’s image to deceive its investors.

Calls for Better Preventive Measures

Stronger safeguards against deepfake fraud are now urgently needed. Michael Marcotte, founder of the National Cybersecurity Center (NCC), recently criticized Google for its weak preventive measures against crypto-related deepfakes. The COPIED Act is the government’s response, one that would force the industry to make AI-generated content visible, transparent, and traceable.

Bipartisan backing for the bill marks a turning point in the regulation of AI deepfakes, signaling that lawmakers see an urgent need for strong measures against counterfeit content. As the technology accelerates, both creators and consumers face a growing risk of falling victim to these schemes.
