Generative artificial intelligence (AI) has transformed how many businesses operate. As its adoption has grown, though, so has scrutiny of its unique legal and ethical concerns. A rising number of AI copyright infringement cases stands out as particularly noteworthy.
As AI-generated images and other content become more common, organizations should consider how their use of AI might affect their copyright standing. Here’s a closer look at the relationship between generative AI and copyright law to help you navigate it.
Does Generative AI Infringe Copyright Laws?
Many content creators have spoken out against generative AI, claiming it infringes their copyright protections. AI-generated outputs do not automatically violate copyright law (the U.S. Copyright Office even registers some AI-assisted works), but they can still face uncertain legal hurdles.
The big issue here is that many AI models learn from copyrighted material without the original creators’ knowledge or consent. As a result, their output may be little more than a reworking of existing copyrighted works. Depending on how people profit from those outputs and how that use affects the original copyright holders, it could constitute infringement.
Legislation on AI has lagged behind people’s use of the technology. Consequently, there’s no overarching law on where AI-generated content falls in terms of copyright infringement, so courts tackle it on a case-by-case basis.
Are AI-Generated Images Copyrighted?
While not all AI-generated images violate copyright law, many of them don’t receive the same protections as entirely human-made works. The use of AI doesn’t outright bar a work from being copyrighted, but it does introduce limits.
U.S. copyright law only recognizes human authorship, so an entirely AI-generated image cannot be copyrighted. However, an artist can use AI as a tool in the creative process and still secure copyright if their own contribution is substantial enough. Where that line lies is a little vague, but it does leave the door open to protecting AI-assisted work in creative workflows.
A company can’t copyright something produced by simply prompting a generative AI model. It may be able to secure such protection for something it generates and then extensively edits and modifies manually, though. Creating an outline or rough draft with AI but using manual workflows to complete the work is a good example.
AI Copyright Infringement Court Cases
Because the laws leave so much to interpretation, it may be more helpful to see how actual AI copyright infringement cases have played out. Organizations can review these trials and decisions to estimate how their AI usage may fare in the eyes of the law.
One of the most prominent recent cases involved an accusation by news giant Thomson Reuters against an AI startup. The court found that the startup’s use of Reuters data in training did not qualify as fair use, meaning the AI model violated Reuters’s copyright. The key takeaway is that simply training a model on copyrighted material has been enough to establish liability for copyright infringement.
Dozens of AI copyright infringement cases are currently ongoing, too, many dealing with the use of copyrighted material in model training. These cases target the AI companies themselves rather than the clients using their services, but they are still worth watching.
While many of these suits are still underway, judges have dismissed some, adding to the uncertainty. The sheer number of cases is telling, though, as it highlights how risky it can be to try to profit from generative AI where copyrighted material is involved.
How Can You Prevent AI Copyright Infringement?
In light of this uncertainty and significant legal action, businesses should approach generative AI with caution. Avoiding AI copyright issues is possible with attention to a few best practices.
Reconsider Generative AI Use Cases
The most important step is to reconsider how employees use generative AI. Given how much this technology has stirred up in legal circles, it’s safest to avoid using generative models to produce content the business sells or profits from directly. However, using AI as a tool within a larger, more manual creative workflow is acceptable.
For example, writers can use AI to create outlines for pieces they then write themselves. Similarly, artists can generate images for inspiration before designing their own works manually. In these situations, humans still perform most of the creative work, reducing reliance on models whose outputs may carry dubious copyright status.
Leadership can employ tools and techniques to detect AI-generated content in employees’ work and hold them accountable. Making this kind of avoidance and responsible use a matter of formal workplace policy will strengthen the company’s standing.
Conduct Thorough Due Diligence Over AI
Next, teams need to be careful about the AI tools they use. Most of the legal issues in this realm stem from the developers behind AI products, so it’s important to ensure a service is reliable and in good legal standing before using it.
Leadership should review any AI company’s legal history before using its services. Just because a model has run into issues in the past doesn’t mean it’s necessarily unsafe, but it may strengthen the argument to avoid using it to produce anything the business will directly profit from.
Any operation building or training its own AI model should avoid using others’ copyrighted works in the process. Previous court rulings show that training does not always qualify as fair use, so it’s best to exclude copyrighted content from training data altogether.
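As a rough illustration of what that exclusion can look like in practice, the sketch below filters a training corpus down to records whose license metadata appears on an explicit allow-list. The record fields ("text", "license") and the allow-list values are assumptions for illustration only; real datasets will have their own metadata conventions and still need legal review.

```python
# Minimal sketch: keep only training records with a clearly permitted license tag.
# Field names and license labels are hypothetical examples, not a standard.

ALLOWED_LICENSES = {"public-domain", "cc0", "company-owned"}

def filter_training_records(records):
    """Return only records whose license tag is on the allow-list."""
    cleaned = []
    for record in records:
        license_tag = (record.get("license") or "").strip().lower()
        if license_tag in ALLOWED_LICENSES:
            cleaned.append(record)
    return cleaned

if __name__ == "__main__":
    sample = [
        {"text": "Internal product FAQ", "license": "company-owned"},
        {"text": "Scraped news article", "license": "all-rights-reserved"},
        {"text": "Old public-domain manual", "license": "public-domain"},
    ]
    kept = filter_training_records(sample)
    print(f"Kept {len(kept)} of {len(sample)} records")  # Kept 2 of 3 records
```

A filter like this is only as good as the license metadata behind it, so documenting where each record came from is just as important as the filtering step itself.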
Review Data Privacy Considerations
Similarly, an AI model’s privacy protections deserve attention. Laws like the GDPR and CCPA impose requirements on the data AI systems handle, introducing legal concerns beyond copyright infringement.
Third-party AI solutions should undergo review to ensure they comply with any applicable privacy and cybersecurity regulations. Any service that cannot prove it meets such standards is too risky to justify using.
Businesses should also consider how their own AI usage affects privacy. Data that feeds AI systems should contain minimal, if any, personal information and sit behind strict protections like encryption and real-time monitoring. Specific requirements vary between regulations, so a use-case-specific audit may be necessary.
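One small, practical step toward data minimization is stripping obvious personal identifiers from text before it ever reaches a third-party AI service. The sketch below is a minimal example using simple regular expressions for emails and US-style phone numbers; the patterns are illustrative assumptions, and production use would call for a dedicated PII detection tool plus the regulation-specific audit mentioned above.

```python
import re

# Minimal sketch: redact obvious identifiers before sending text to an AI service.
# These patterns only catch simple cases and are not a substitute for a full PII tool.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"(?:\+?1[\s.-]?)?(?:\(\d{3}\)|\d{3})[\s.-]?\d{3}[\s.-]?\d{4}")

def redact_pii(text: str) -> str:
    """Replace matched identifiers with placeholder tokens."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text

if __name__ == "__main__":
    prompt = "Follow up with jane.doe@example.com or call (555) 123-4567."
    print(redact_pii(prompt))  # Follow up with [EMAIL] or call [PHONE].
```

Running prompts through a step like this keeps casual mistakes, such as pasting a customer’s contact details into a prompt, from turning into a compliance problem.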
Stay Up to Date on Legal Changes
Finally, organizations should keep an eye on developing AI copyright infringement cases to see how laws may change. As this technology grows, the law will adapt to include clearer, firmer language on what constitutes acceptable use and what’s copyright infringement. The best way to stay safe is to keep up with these shifts.
Even before any larger legal changes take place, new court decisions can reveal how judges are handling AI copyright cases. Precedent is a good indicator of what a business can expect from a legal proceeding.
Any ruling that finds users or creators of generative AI liable is worth noting. Learning from these cases and avoiding similar situations is key to staying safe until the letter of the law catches up.
AI Copyright Infringement Issues Demand Attention
Generative AI is still relatively new, so where it stands under the law remains unclear in many cases. Given this uncertainty, businesses must approach it with caution. Failing to address the technology’s copyright infringement controversies can result in severe consequences.
About The Author
Eleanor Hecks is the Editor-in-Chief of Designerly Magazine, an online publication dedicated to providing in-depth content from the design and marketing industries. When she's not designing or writing code, you can find her exploring the outdoors with her husband and dog in their RV, burning calories at a local Zumba class, or curled up with a good book with her cats Gem and Cali.
You can find more of Eleanor's work at www.eleanorhecks.com.