New bill aims to force AI firms to disclose use of copyrighted material
Lawmakers in the U.S. Congress introduced a new bill on Tuesday that would force artificial intelligence companies such as OpenAI and Google LLC to reveal exactly what copyrighted material they use to train their large language models, which power generative AI chatbots such as ChatGPT and Bard.
The legislation is the latest in a series of efforts by lawmakers and other groups to compel AI firms to reveal how they use creative works, such as songs, books, art and movies, to train their software.
The bill, known as the Generative AI Copyright Disclosure Act, was introduced by California Democratic Rep. Adam Schiff. If passed, it would require AI creators to submit information on any copyrighted works used in their training datasets to the Register of Copyrights, at least 30 days before they publicly release any new models trained on copyrighted material. Failure to do so could result in significant financial penalties.
AI companies such as OpenAI typically use enormous amounts of data encompassing billions of lines of text, millions of images and millions of hours of music and movies to train their models.
“AI has the disruptive potential of changing our economy, our political system, and our day-to-day lives,” Schiff said in a statement to the press. “We must balance the immense potential of AI with the crucial need for ethical guidelines and protections.”
Schiff’s bill, first reported by Billboard, comes at a time of rising complaints from artists, authors and other content creators who believe that generative AI models are being trained on their copyrighted material without obtaining permission first. Some artists and authors have already filed lawsuits against AI makers in an effort to assert their rights.
The copyright issue is a tricky one for AI companies to navigate, as there is no clear answer as to how generative AI models use or mimic protected content. AI developers insist that their models are trained on “publicly available data,” but the sheer amount of information they use in their training datasets makes it difficult to know exactly what data is copyrighted. Firms such as OpenAI argue that any copyrighted materials used in AI training fall under the legal doctrine of “fair use,” which allows for unlicensed use of such content under certain conditions.
This legal strategy by AI firms poses a key test for copyright law, and whatever outcome arises could have serious implications, either for content creators or AI companies. In a submission to a U.K. government committee earlier this year, OpenAI’s lawyers argued that “legally, copyright law does not forbid training.” The company also stressed the importance of copyrighted material for training, saying in the same submission that its models would likely cease to function without access to such content.
Constellation Research Inc. analyst Holger Mueller said AI needs data, but the problem is that lots of the essential data it requires is protected as intellectual property. “The question over fair use pertains to exactly how AI model providers are using that IP-protected content and if they’re monetizing it,” he said. “As with all regulation, it’s going to be tricky to find the right balance and not stymie the AI industry’s progress.”
Artists have gotten a lot of support from entertainment industry organizations and unions, with the Recording Industry Association of America, Professional Photographers of America, Directors Guild of America and the Screen Actors Guild-American Federation of Television and Radio Artists all backing them.
“Everything generated by AI ultimately originates from a human creative source,” said Duncan Crabtree-Ireland, SAG-AFTRA’s national executive director and chief negotiator. “That’s why human creative content – intellectual property – must be protected.”
The American standup comedian, actress and writer Sarah Silverman and the New York Times have both filed copyright infringement lawsuits against OpenAI. In response, the AI company has hired an army of lawyers to try to fend off these legal challenges.
AI firms are also facing intense public pressure. Last week, a group of more than 200 high-profile musicians signed an open letter calling for better protections against AI and urging companies to avoid developing tools that could undermine or eventually replace them.
Image: DC Studio/Freepik