AI Risks Persist Despite Advances in Guidance and Policy – What Businesses Need to Know in 2024

By: Andrew J. Costa, Esq., Associate

Artificial intelligence (AI) took center stage in 2023. From the Copyright Office's Notice of Inquiry and Registration Guidance, congressional hearings with AI leaders, and President Biden's AI Executive Order, to the avalanche of AI-related lawsuits (including NYT v. OpenAI) and the daily blast of AI-derived content on our social media feeds, the AI craze shows no sign of fading in 2024. Indeed, interest only grows as more businesses harness AI's capabilities and lawmakers grapple with how AI may influence the 2024 Presidential Election – a concern made all the more pressing by OpenAI's new AI video-generation service, Sora. In this environment, it may seem necessary to jump on the AI bandwagon just to stay competitive, or risk being lost in the torrent. Yet despite the exuberance around AI, important risks persist from both an intellectual property (IP) and a broader legal perspective that businesses should consider before incorporating AI technology into their operations. Some of these risks may need to be mitigated by Congress or by clearer guidance from the USPTO and Copyright Office; others may simply require a fresh look at internal operations and some strategic planning. In either case, the first step for any business is to become aware of the key risks, so it can seek the advice of counsel when necessary and make educated decisions about AI. Below are some of the risks that have captured our attention in the new year and that we believe businesses should weigh when deciding whether to use AI technology.

Copyright for AI-Derived Works Remains Elusive, and a Minefield

First, and perhaps most obviously, the degree of copyright protection available for works incorporating AI-derived content remains unclear, despite guidance published by the Copyright Office last year. In 2023, the first federal court to rule on the central question found that an entirely AI-generated work was not entitled to copyright protection, holding that human authorship, and in particular human creativity, is the "sine qua non at the core of copyrightability." (Thaler v. Perlmutter et al., D.D.C. (Aug. 18, 2023)). The Copyright Office therefore will refuse to register works made entirely by AI, and it requires authors to disclose (and essentially disclaim) any AI-derived portions of a work in a registration application. (Copyright Registration Guidance, pp. 5-6). Even then, in such "mixed-authorship" cases, how much human authorship (as opposed to AI-derived content) is needed to support a registration remains unclear and the subject of ongoing litigation. For compilations of wholly AI-derived works, such as the Zarya of the Dawn comic book, copyright protection remained available for the compilation itself, but not for the individual images. Given these complications, some businesses may want to pursue copyright in the prompts themselves, but the copyrightability of prompts remains an open question; to date, we've uncovered no guidance on the point. And even assuming a prompt is sufficiently original and expressive to be copyrightable, it's unclear how useful that protection would be if the output itself isn't protectable.

Given these unknowns, it's important for businesses that depend upon copyright protection (such as publishers, content creators, artists, and even software developers and programmers) to understand the current limits of copyright protection for works that include AI-derived content, and to weigh the risks of using that content – especially because they will need to disclose AI-derived content on a copyright application. For example, businesses may want to consider carefully which aspects of their business (such as branding or marketing) require air-tight copyright protection and which do not, and strategize around those needs. Businesses may also want to establish clear employee guidelines, policies, and procedures for AI use. Indeed, some 56% of US workers report using AI at work. Without clear guidelines on AI use, an employer may not even know whether its employees' work products contain AI-derived content, which can later frustrate efforts to protect those work products through copyright. From a risk-management standpoint, companies should also assume that employees, especially those creating content under high pressure, may occasionally resort to AI tools even when guidelines prohibit it, and plan accordingly. (For example, it may be worth running sensitive content through plagiarism-detection tools and/or incorporating a process for fact-checking written materials.) Finally, Congress and the Copyright Office continue to investigate how existing copyright law should accommodate AI technology (if at all) and whether legislation or executive action is necessary; today's guidance may be stale tomorrow, and businesses that have embraced AI should be prepared to meet that change.

Second, it’s important for businesses to consider the extent to which their AI use could expose them to copyright infringement claims. AI tools like ChatGPT, Midjourney, DALL-E, and now Sora are each trained on large datasets that include copyrighted material, and users have successfully prompted these tools to produce “plagiaristic outputs” that are nearly identical to copyrighted works in their training data. Using those outputs could expose a business to copyright infringement liability, even though (for now) companies like OpenAI are indemnifying certain users. Indeed, this is exactly what the New York Times alleges ChatGPT does in its lawsuit against OpenAI (see pages 30-37 of the Complaint for examples of such outputs). While some plagiaristic outputs consist of frames from blockbuster films like Dune or Iron Man – which a user would likely recognize as potentially infringing – others are much harder to spot, especially when the output is text. Moreover, prompts need not be complicated to return potentially infringing outputs (see here for various examples of prompts that return potentially infringing content); thus businesses should be exceedingly careful not to reproduce AI outputs verbatim and, again, should attempt to verify the integrity of any output before using it in a work product or commercial activity.

Potential Loss of Trade Secrets

Another key concern is whether disclosing information to an AI tool amounts to a disclosure that would destroy trade secret protection. Trade secrets, by definition, must remain secret to be protected (although absolute secrecy is generally not required). See Defend Trade Secrets Act, 18 U.S.C. § 1839(3).

While it may seem obvious that one shouldn’t put a trade secret like the Coca-Cola secret formula into an AI tool, inadvertent disclosure by an unsuspecting user can occur all too easily. For example, because AI is extremely helpful at iterating on or improving the material it ingests, employees who work with trade secrets (or with other confidential information that could qualify as a trade secret) may innocently provide part or all of a trade secret to an AI tool (like ChatGPT) simply to aid or improve their work. They might not know their work is a trade secret (which is itself a separate and challenging issue), or they might mistakenly believe that the AI tool keeps inputs confidential. Further, because AI is effective at condensing and “improving” writing, employees may upload documents containing trade secret information into an AI tool for summarizing, report generation, or even simple editing or proofreading. In doing so, an employee may unknowingly disclose a trade secret to the AI system.

What’s even more concerning is that many AI tools continue “learning” from the material they ingest, turning user inputs into part of their training data. This is particularly worrying given how easily users can prompt verbatim reproductions of ingested material, such as those alleged in the New York Times v. OpenAI case. Indeed, AI systems are colloquially known as “black boxes”: the exact relationship among inputs, training data, and outputs is not precisely known. If a business has disclosed a trade secret to an AI system, a nefarious actor or competitor may be able, through creative prompting, to learn that trade secret from the system itself. It remains unknown whether this kind of behavior would constitute an “improper means” of acquiring a trade secret under the Defend Trade Secrets Act, 18 U.S.C. § 1839(6), but it’s wise not to assume that information fed to an AI system remains confidential and inaccessible.

While the best practice may be to refrain entirely from inputting trade secrets into an AI tool, it’s equally important for businesses to design, develop, and deploy an overall trade secrets strategy that establishes clear guidelines for employee use of AI and prevents unintended disclosures. An obvious first step is making sure that employees know when they’re working with trade secrets or confidential information and understand their obligations to protect secrecy.

Existing Contracts

Lastly, businesses should review their existing contracts and templates, including those with customers, to identify AI risks, better understand their obligations, and mitigate unwanted exposure. In particular, businesses may want to assess whether their existing customer contracts permit them to use AI in performing the contract; just because a contract is silent on AI does not mean it allows AI use. Liability could arise from representations and warranties or indemnity provisions (among others), especially where the originality of a work product is concerned. Conversely, companies that are more leery of AI should review their agreements with vendors, suppliers, and independent contractors to ensure that AI expectations are clear. This could mean prohibiting AI use by independent contractors, vendors, or other third parties, or requiring disclosure or consent before such tools are used. Vendors who may be actively leveraging AI to provide services include marketing firms, graphic designers, illustrators, website and UX/UI designers, copywriters and editors, content creators, and software developers and programmers. The important thing is to know where exposure may lie and address it before it becomes a problem.

Closing

These are just a few of the key risks that have captured our attention in the new year and that we think businesses should consider before adopting AI. There are, of course, many more, such as algorithmic disgorgement (an FTC remedy used against businesses that improperly use customer data to train or develop AI systems – more on that here) and algorithmic bias, both of which are beyond the scope of this article; but copyright and trade secret concerns offer a good starting place for businesses considering AI. None of these risks alone is a reason not to use AI, but each is an important reason to think deeply before embracing it in your business. Then again, perhaps it’s simply easier to ask AI itself about its own risks – why don’t you give it a try? ChatGPT.
