Introduction
In an era where machines learn, create, and innovate at an unprecedented pace, the foundations of intellectual property (IP) law are being challenged and reshaped by the rise of Artificial Intelligence (AI). AI is straining traditional concepts of intellectual property law, such as human authorship, and forcing a re-examination of who bears liability for intellectual property infringement. This article examines ownership and liability considerations in the evolving landscape of intellectual property in the age of AI.
Who Owns AI Generated Content?
AI tools such as ChatGPT (text) and Udio (music) are increasingly used to generate text, music, videos, and images. These systems produce high-quality content quickly from user prompts, saving both time and effort. However, this raises a significant legal question: who owns the copyright in content generated by AI? Can the author of the prompt, the developer of the AI tool, or the AI itself claim authorship?
The standard was established by the U.S. Supreme Court in 1989 in Community for Creative Non-Violence v. Reid, 490 U.S. 730 (1989): “the author [of a copyrighted work] is . . . the person who translates an idea into a fixed, tangible expression entitled to copyright protection.” The U.S. Copyright Office observes that, to date, “no court has recognized copyright in material created by non-humans, and those that have spoken on the issue have rejected the possibility.”
Under the copyright laws of various jurisdictions, such as the Nigerian Copyright Act 2022 and the Copyright Law of the United States, ownership of copyright is granted to the author of the original work. The Nigerian Copyright Act is clear that authorship requires legal personality, that is, either a human being or an incorporated entity. Copyright can therefore be granted only to the author of the prompt or to the developer; it cannot, under Nigerian law, be granted to the AI tool itself, which is an inanimate object.
Similarly, in the current edition of the Compendium of U.S. Copyright Office Practices, the Office states that “to qualify as a work of ‘authorship’ a work must be created by a human being” and that it “will not register works produced by a machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author.”
Liability in the Face of IP Infringement
IP rights in the age of AI give rise to complex questions about liability. The issue of liability in the face of IP infringement presents itself on multiple fronts, raising important questions that need to be addressed, such as: At what point does IP infringement occur to give rise to liability? On which party in the chain does liability lie? How is such liability addressed?
Infringement of intellectual property rights occurs when those rights are exercised without the owner’s authorization. For instance, where a copyrighted work is reproduced, published, or broadcast without authorization, an infringement of the owner’s exclusive rights occurs. This excludes situations of fair dealing. In traditional IP, the point of liability is usually clear, such as when a book is published or music is released to the market without authorization. In the age of Generative AI (GenAI), however, the lines are shifting, particularly in infringement claims, which have largely been founded on copyright. In such cases, liability is said to arise on either of two fronts: (1) Direct infringement, where the claimant argues that their copyrighted work was used without permission to train AI models, as in Andersen v. Stability AI Ltd. (No. 23-cv-00201-WHO (N.D. Cal. Aug. 12, 2024)); or (2) Infringement through derivative works, where infringement is claimed in the AI model’s outputs, as in Kadrey v. Meta Platforms Inc. (No. 3:23-cv-03417). Thus, infringement may be claimed at the training stage or at the output stage, and with GenAI, distinct parties may bear liability at each stage.
There is therefore the need to ascertain which party in the chain of AI development and deployment will be liable for infringing works produced by GenAI. Should the AI developer, the user, or the AI technology itself be held responsible for infringing works?
One perspective is to place liability on the user who provides the prompt for the generated content. If the generated content closely resembles a work protected by IP rights and the user exploits it in a way that infringes the original owner’s rights, such as by selling the work, should they not be held liable for infringement? IP infringement, such as copyright violation, occurs when an act is performed or threatened that violates the exclusive rights of the owner, as outlined in the Nigerian Copyright Act 2022. However, the rise of GenAI has introduced a new way of creating content and products, which end users have eagerly adopted, often without any intent to replicate existing works. If IP infringement is judged solely on the act itself, without considering intent, it could discourage the use of GenAI tools and hinder the development of these technologies. Legislators may therefore want to consider revising current laws to incorporate intent as an element in IP infringement cases.
How should intent be determined? In the case of GenAI, one approach is to assess intent at the point the prompt is given. A user may specifically request content that closely resembles or directly copies IP-protected material, or they may issue a generic prompt that inadvertently produces output resembling existing works.
One possible solution is to require users to sign in to GenAI tools. This would allow for monitoring of prompts and ensure traceability in case of infringement. However, this approach could raise privacy concerns.
Another option is for developers to integrate features that block prompts likely to request content similar to copyrighted or patented material. This is similar to how ChatGPT blocks prompts that violate its usage policies, such as those related to illegal activities like theft. However, developers may struggle to predict and address every possible prompt that could lead to IP infringement.
The second perspective is to place the liability on the developers, as they are responsible for providing AI technology with training data, some of which could be protected by existing IP rights. Should developers be primarily liable in IP infringement claims? A growing number of copyright infringement cases across various jurisdictions involve claimants seeking compensation for works used as training data or for generated content that closely resembles their own. These rising claims could deter new entrants into the GenAI development market.
Developers can address this issue in several ways. One approach is to obtain licenses for using IP-protected works as training data. However, this may be impractical, given the vast datasets used to train AI models. The cost and operational challenges of acquiring such licenses could hinder innovation. Another solution is for developers to use more open-source data to reduce their liability. Developers could also attribute the source of generated content, much like how Google credits websites in its AI-generated responses. Additionally, developers have attempted to address liability concerns through the Terms of Use for their GenAI tools, outlining user responsibilities, intellectual property rights, data usage, and limitations of liability.
The third perspective is to impute liability to the GenAI tool itself. Here, proponents argue for a review of IP laws to recognize AI tools as legal entities capable of owning rights. It should be noted, however, that AI tools are already owned by companies with legal personality, such as OpenAI, which owns ChatGPT and DALL·E. Recognizing the tools themselves raises potential concerns about duplication of liability and could lead to legal uncertainty.
Policy and Legislative Review for Liability Claims
While developers are working to address liability issues through industry practices, the creation of policies and laws is essential to ensure widespread compliance and legal certainty. These policies can mandate adherence to ethical industry standards, such as requiring developers to block prompts that request outputs in the style of well-known creators. For example, DALL·E 3 is trained to decline prompts requesting works in the style of certain well-known artists. Additionally, policies and laws could establish guidelines to minimize the risk of infringing outputs, such as requiring transparency about the sources of training data and mandating attribution for generated content.
On the user side, a review of the laws may be necessary to incorporate intent as a factor in IP infringement, which would extend liability to users in cases of infringement.
These policies could also mandate periodic audits of AI systems to ensure compliance with ethical standards. As AI technology and its legal framework evolve, liability cases will likely be addressed on a case-by-case basis. The presence of clear policies and laws will provide direction and certainty, allowing courts to assess compliance by each party in the liability chain and determine at which point an infringement occurred due to a failure to adhere to established guidelines.
Policymakers include national IP offices, such as the U.S. Copyright Office, and AI policy actors, such as Nigeria’s Federal Ministry of Communications, Innovation and Digital Economy and the National Information Technology Development Agency (NITDA). Policy development and legislative review regarding liability in the context of AI-driven IP infringement should strike a balance between safeguarding the IP rights of creators and fostering an environment that encourages innovation.
Conclusion
IP rights remain a major point of discussion in the development and deployment of AI. AI’s rapid development challenges the traditional understanding of intellectual property. As AI evolves and as more IP-related issues come to the fore, it provides an opportunity for policy development, legislative review, and jurisprudential resources on the subject. Policymakers and global leaders must take a proactive role in addressing the legal challenges that arise in order to create a stable environment for investment and innovation.
CONTRIBUTORS
Uche Wigwe – Managing Partner at Wigwe & Partners
Tilewa Oyefeso – Partner
Emaediong Lawrence – Associate
More Information: [email protected]
DISCLAIMER
This article is for informational purposes only and does not constitute legal advice or establish a lawyer-client relationship.