
Author: Yash Yadav, a student of B.A. LL.B. (Hons.) at Amity University, Lucknow

Co-Author: Dr. Juhi Saxena, Assistant Professor at Amity University, Lucknow.

ABSTRACT

The rapid advancement and widespread adoption of Artificial Intelligence (AI) across critical sectors such as healthcare, transportation, finance, and defense have prompted an urgent need to re-evaluate existing legal frameworks for liability. This paper investigates the evolving legal responses to AI-related harm across different jurisdictions, focusing mainly on the European Union, the United States, and select Asian countries. These regions present varying approaches shaped by distinct legal traditions, regulatory philosophies, and socio-political priorities.

A central concern explored in this study is the attribution of liability in instances where autonomous AI systems operate with limited human oversight. Traditional liability models centered on concepts of foreseeability, control, and human intent often fall short when applied to systems capable of independent decision-making. Civil and product liability laws face challenges in assigning responsibility when the causal link between the harm and the human actor becomes blurred. Similarly, criminal liability is complicated by the difficulty of proving mens rea in AI-related cases. Through comparative legal analysis, this research highlights how some jurisdictions have begun to adapt by introducing AI-specific regulations or by interpreting existing laws in innovative ways.

The study also examines key policy and legal debates around the need for new liability regimes tailored to the unique characteristics of AI. In light of the global nature of AI development and deployment, the paper underscores the importance of international cooperation in creating harmonized legal standards. Without such alignment, inconsistencies across borders could undermine both accountability and innovation. The paper concludes by recommending a balanced framework that incorporates risk-based regulation, mandatory transparency mechanisms, and clearly defined liability pathways. Such a framework would help ensure that victims of AI-related harm receive redress while also promoting responsible innovation and cross-border technological collaboration.

Keywords: Artificial Intelligence, Liability, Human Intent, Innovation, Technology.
