Document Type

Article

Media Type

text

Publication Title

Northern Illinois University Law Review

Abstract

As Chief Justice John Roberts noted in his 2023 Year-End Report on the Federal Judiciary, artificial intelligence has not only had a seismic effect on society and the legal profession but is also presenting courts with novel questions to resolve. To date, most legal scholarship on generative AI has focused on areas like the ethical dimensions of its use, copyright infringement implications, AI governance, and the evidentiary questions the technology raises. A void remains in the scholarship, however, on the question of who should be liable for AI-generated content.

This is an increasingly important issue: a major airline (Air Canada) recently and unsuccessfully argued that its AI chatbot should be responsible for its own actions. Old Navy’s chatbots have been accused in a federal lawsuit of illegal wiretapping, and the National Eating Disorder Association’s chatbot gave people medically dangerous advice. In addition, “hallucinations” by AI tools have spread false information, giving rise to multiple defamation lawsuits.

As more and more businesses use and rely on AI chatbots, courts will confront novel legal arguments as they apply traditional legal concepts to disputes involving generative AI. Is a chatbot analogous to an agent, such that vicarious liability principles apply, or should courts take a product liability approach that focuses on the developers who trained and coded the AI model? And should the user who entered the prompt bear any responsibility?

This Article explores these questions and presents potential answers. Currently, ambiguity exists as to whether Section 230 of the Communications Decency Act applies to generative AI output. On the legislative front, countries such as Singapore and states like Utah and Colorado have attempted to create a framework for responsibility. Utah’s Artificial Intelligence Policy Act (which took effect May 1, 2024) updates state consumer protection law to hold companies accountable for their generative AI’s deceptive outputs. Colorado’s AI Act similarly aims to protect consumers from AI-related harms by holding those who develop or deploy AI accountable. This Article considers whether other states should follow this approach and extend it to tort liability.

First Page

340

Last Page

367

Publication Date

6-1-2025

Department

College of Law

Suggested Citation

John G. Browning, Whose Bot Is It Anyway? Determining Liability for AI-Generated Content, 45 N. Ill. U. L. Rev. 340 (2025).
