Document Type
Article
Publication Title
Indiana Journal of Global Legal Studies
Abstract
This article addresses the complex and burgeoning issue of global and national AI regulation, with insights from international business law. Current regulatory efforts around the world form a patchwork of initiatives whose varying approaches do not coalesce for cross-border multinational enterprises (MNEs). Most AI technologies are developed by MNEs, which need a more uniform international regulatory environment and structure for enforcing “responsible AI.” Human rights models of corporate social responsibility, together with the human rights mechanisms that protect fundamental rights and promote “responsible AI” to prevent harms from AI innovation, may provide useful frameworks for uniform legal norms, and human rights guidance for a “responsible AI” regulatory framework can supply a foundational ethical infrastructure for MNEs. The article first surveys the AI legal frameworks emerging in the United States, reviewing federal and state legislative, case law, and policy efforts toward AI regulation; compared to other jurisdictions, the U.S. takes an industry-based approach to AI legal norms. Second, the article provides a comparative overview of other jurisdictions to illuminate worldwide approaches to AI law, including the EU’s risk-based regulatory approach, China’s state-based approach, and the Organization for Economic Cooperation and Development’s (OECD) robust meta-level framework for “responsible AI” governance in international business transactions. Finally, the article shows how traditional international human rights norms governing businesses and corporations that cross international boundaries can inform future AI legal approaches and more uniform regulation. These frameworks include the Ruggie Principles, the Responsibility to Protect doctrine, the basic UN human rights treaties, the doctrine of corporate social responsibility, and norms of state practice and customary international law as applied to AI governance. Together, these international business and humanitarian law norms may offer a creative solution for guiding MNEs through a shifting landscape of AI governance and varying legal norms, responsible AI development, and engagement with other nations in a global arena now being shaped by the impact of generative AI and foreign technologies. The article concludes that the well-established doctrine of international corporate social responsibility and international human rights norms hold promise to work in tandem toward more uniform AI norms. This human rights ethical model can be embraced by corporate executives and companies striving to create a safer AI environment while balancing innovation against the inherent risks these technologies pose for global societies and cross-border business transactions.
First Page
1
Last Page
38
Publication Date
Fall 2024
Recommended Citation
Heidi L. Frostestad, AI Regulation in a ChatGPT Era: Cross-border Cooperation and Hope in a Sudden Storm, 32 Ind. J. Glob. Legal Stud. 1 (2024).
Department
College of Law