
AI AND THE LAW: CAN ROBOTS BE HELD LEGALLY RESPONSIBLE?


INTRODUCTION

We are living in a time when machines can do everything from driving cars to composing poems and even holding conversations.[1] Artificial Intelligence (AI) is no longer just an idea from the films; it is already a reality, and it grows more capable every day.[2] As people come to depend on AI for more and more tasks, a big question arises: what do we do when things go wrong? Can we hold a robot, or an AI, legally responsible for its actions?[3]

This question strikes at the heart of how we think about responsibility and fairness. Our laws have always assumed that behind every action is a human mind, capable of intention and understanding. But now, with AI making decisions on its own, we’re left wondering — who should take the blame when things go wrong? It’s not just a legal puzzle, but a human one, because at the end of the day, it’s real people who are affected by these choices.

Let’s look at this more closely.

WHAT DOES “LEGAL RESPONSIBILITY” ACTUALLY MEAN?

Before we start blaming robots for faulty diagnoses, let’s first understand what we mean by legal responsibility.[4] In layman’s terms, legal responsibility means you can be held accountable for your actions by the law.[5] A person who commits a crime or breaches a contract can be punished in a court of law. But here lies the main problem: AI is not a person. It has no sense of right and wrong, no intention, and no conscience.[6]

THE PROBLEM OF “AGENCY”

One of the key ideas in law is something called “agency”: the power to act and to make decisions of one’s own.[7] Humans have agency; we can choose to act or not to act. AI systems, by contrast, can only act within the limits of what they are programmed to do.[8]

Let’s understand this with the help of the following example:

Suppose a self-driving car crashes. Who is to blame? The car? The software? The company that made it? Or the person sitting in the driver’s seat?[9] Right now, our laws have no clear answer. We are trying to fit 21st-century technology into legal rules made for the 20th century.[10]

A CASE IN POINT

In 2018, a self-driving Uber car struck and killed a pedestrian in Arizona.[11] A safety driver was in the car but was not paying attention, and Uber’s software did not react in time. So who was responsible? The safety driver faced charges; Uber did not. And the AI that failed to act? It could not be charged at all because, legally, it is not a “person”.[12]

This case opened up a flood of discussions. If AI makes a fatal mistake, and there’s no human directly to blame, is it fair to let it slide? Or should we start thinking about new legal categories just for AI?[13]

COULD AI BE A “LEGAL PERSON”?

The idea may seem odd, but the law already recognises non-human legal persons. Companies, for example, are treated as “legal persons”: they can sue and be sued, they pay taxes, and they are bound by the law.[14] Could we give AI a similar legal status?

Some experts say yes. If we want to hold AI accountable, we might need to give it legal personhood, not as a human, but as a new category.[15] That way, AI could be held responsible for the damages or harm it causes. But that also raises more questions: Who pays the penalty? The robot? The maker? The owner?[16]

THE ROLE OF THE PROGRAMMER AND MANUFACTURER

Since AI doesn’t have intentions of its own, many argue that responsibility should fall on the humans who built or deployed it.[17] If an AI makes a mistake, maybe the programmer coded it wrong, or maybe the company released it before testing it properly.[18]

But here’s the twist: AI can learn and change its own behaviour over time. That is the entire premise of machine learning. So even if the programmer and the company do everything right, the AI may still act in unexpected ways.[19] Then what? Should the creator still be blamed?[20]

STRICT LIABILITY: A POSSIBLE SOLUTION

One idea that has been put forward is extending “strict liability” to AI.[21] Under strict liability, the creator (or the user) must bear full responsibility if something goes wrong, whether or not they were at fault.[22] The doctrine already applies to certain dangerous activities, such as keeping wild animals or running hazardous factories.[23] If you choose to engage in such an activity, you answer for whatever harm it causes.

This approach may sound harsh, but it would push companies to build safer AI and to put safeguards in place against harm.[24] It would also give victims a straightforward route to justice.[25]

THE ETHICS FACTOR

Beyond the law, there’s an ethical side too. AI decisions can have real consequences—someone getting denied a loan, getting fired, or even ending up in jail because of an algorithm.[26] If there’s no one to hold accountable, it creates a moral vacuum.[27]

People want answers. They want to know who runs the machine. If AI is going to be part of our lives, then we should be concerned about fairness, transparency, and accountability.[28]

LEGAL DEVELOPMENTS AROUND THE WORLD

Countries are slowly catching up. The European Union has proposed the AI Act, which would set rules and responsibilities for high-risk AI systems.[29] It talks about human oversight, transparency, and liability. Meanwhile, in the U.S., different states are working on their own rules, but there’s no national law yet.[30]

In India, discussions are still at an early stage. The government has talked about AI guidelines, but there is a long way to go before we have solid laws on AI responsibility.[31]

WHAT ABOUT CRIMINAL RESPONSIBILITY?

This is a tough question. A robot cannot go to jail. But could it be punished in some other way, perhaps by being deactivated, recalled, or destroyed?[32] Even then, that is not the same as legal accountability.[33]

Some argue that we need new forms of punishment for non-human agents. Others say we should stick to punishing the humans behind the AI.[34]

For now, the existing legal system simply has no answer to this kind of problem.[35]

WHAT IS THE ANSWER THEN?

To be honest, there’s no easy answer. AI is evolving faster than the law can keep up. However, one thing is clear: we need new legal frameworks that take into account the distinct features of AI. Maybe we create special laws for high-risk AI. Maybe we treat AI systems like corporations. Or maybe we invent an entirely new legal category.[36]

Whatever the answer is, the main goal should be the same: fairness and safety.[37]

CONCLUSION

AI is already shaping our world in ways we could hardly have imagined.[38] But great power brings great responsibility, and we cannot ignore that responsibility just because the “person” in question is made of code and circuits.[39]

Laws exist to protect people, and that includes protecting them from harm caused by machines. So, as we move forward, we need to ask tough questions, have honest conversations, and build legal systems that reflect this new reality.[40]

Because at the end of the day, it’s not just about blaming the robot. It’s about making sure someone, somewhere, is accountable when things go wrong.

Author(s) Name: Pulkit Mittal (Guru Gobind Singh Indraprastha University)

References:

[1] Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach (4th edition, Pearson 2021).

[2] Luciano Floridi and Josh Cowls, ‘A Unified Framework of Five Principles for AI in Society’ (2019) 5 Harvard Data Science Review 1.

[3] Gabriel Hallevy, When Robots Kill: Artificial Intelligence under Criminal Law (Nijhoff 2013).

[4] HLA Hart, Punishment and Responsibility: Essays in the Philosophy of Law (2nd edition, OUP 2008).

[5] John Gardner, Offences and Defences: Selected Essays in the Philosophy of Criminal Law (OUP 2007).

[6] Gabriel Hallevy, Liability for Crimes Involving Artificial Intelligence Systems (Springer 2015).

[7] Michael A Eisenberg, Agency Law in Legal Theory (Harvard Law School Working Paper 2010).

[8] Thomas Burri, ‘The Law of Artificial Intelligence and the EU’ (2021) 12 Journal of European Law 45.

[9] Patrick Lin, ‘Why Ethics Matters for Autonomous Cars’ in Markus Maurer and others (eds), Autonomes Fahren (Springer 2016).

[10] Jack Balkin, ‘The Path of Robotics Law’ (2015) 6 California Law Review Circuit 45.

[11] Niraj Chokshi, ‘Self-Driving Uber Car Kills Pedestrian in Arizona, Where Robots Roam’ The New York Times (New York, 19 March 2018) https://www.nytimes.com/2018/03/19/technology/uber-driverless-fatality.html accessed 15 July 2025.

[12] Gabriel Hallevy, When Robots Kill: Artificial Intelligence under Criminal Law (Nijhoff 2013).

[13] Ugo Pagallo, The Laws of Robots: Crimes, Contracts, and Torts (Springer 2013).

[14] Salomon v A Salomon & Co Ltd [1897] AC 22 (HL).

[15] European Parliament, ‘Civil Law Rules on Robotics’ (2017) 2015/2103(INL) https://www.europarl.europa.eu/doceo/document/TA-8-2017-0051_EN.html accessed 15 July 2025.

[16] Shawn Bayern, Autonomous Organizations (Cambridge University Press 2021).

[17] Joanna Bryson, Mihailis Diamantis and Thomas Grant, ‘Of, for, and by the People: The Legal Lacuna of Synthetic Persons’ (2017) 25 Artificial Intelligence and Law 273.

[18] Matthias Uhl, ‘Ethical and Legal Challenges of AI in Safety-Critical Systems’ (2019) 10 AI & Ethics Review 12.

[19] Ian Goodfellow, Yoshua Bengio and Aaron Courville, Deep Learning (MIT Press 2016).

[20] Thomas Burri, ‘Liability for Artificial Intelligence and EU Law’ (2021) 59 Common Market Law Review 1527.

[21] Ugo Pagallo, ‘Robots of Just War: A Legal Perspective’ (2013) 3 Ethics and Information Technology 219.

[22] Rylands v Fletcher (1868) LR 3 HL 330.

[23] John G Fleming, The Law of Torts (9th edition, LBC Information Services 1998).

[24] Thomas Burri, ‘Liability for Artificial Intelligence and EU Law’ (2021) 59 Common Market Law Review 1527.

[25] Mark A Lemley and Bryan Casey, ‘Remedies for Robots’ (2019) 86 University of Chicago Law Review 1311.

[26] Cathy O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (Crown 2016).

[27] Luciano Floridi, ‘What the Near Future of Artificial Intelligence Could Be’ (2019) 29 Philosophy & Technology 1.

[28] Virginia Dignum, Responsible Artificial Intelligence: How to Develop and Use AI Responsibly (Springer 2019).

[29] European Commission, ‘Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act)’ COM (2021) 206 final.

[30] Alex Engler, ‘The Rapidly Evolving Landscape of US AI Regulation’ (Brookings, 2022) https://www.brookings.edu/research/the-rapidly-evolving-landscape-of-us-ai-regulation/ accessed 15 July 2025.

[31] NITI Aayog, ‘National Strategy for Artificial Intelligence #AIforAll’ (Government of India, 2018) https://www.niti.gov.in accessed 15 July 2025.

[32] Gabriel Hallevy, When Robots Kill: Artificial Intelligence under Criminal Law (Nijhoff 2013).

[33] Joanna Bryson, ‘The Artificial Intelligence of the Ethics of Artificial Intelligence’ in Markus Dubber, Frank Pasquale and Sunit Das (eds), The Oxford Handbook of Ethics of AI (OUP 2020).

[34] Thomas Burri, ‘Liability for Artificial Intelligence and EU Law’ (2021) 59 Common Market Law Review 1527.

[35] Roger Brownsword, Law, Technology, and Society: Re-imagining the Regulatory Environment (Routledge 2019).

[36] Shawn Bayern, Autonomous Organizations (Cambridge University Press 2021).

[37] Virginia Dignum, Responsible Artificial Intelligence: How to Develop and Use AI Responsibly (Springer 2019).

[38] Cathy O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (Crown 2016).

[39] Luciano Floridi, ‘What the Near Future of Artificial Intelligence Could Be’ (2019) 29 Philosophy & Technology 1.

[40] European Commission, ‘Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act)’ COM (2021) 206 final.
