Artificial Intelligence (AI) is the branch of computer science dealing with the simulation of intelligent behavior in computers. AI is currently one of the fastest-growing industries globally. With fields of application ranging from conversational systems like Siri and ChatGPT to medicine and healthcare, where it helps detect diseases and customize treatments, it is not an exaggeration to call it the fourth industrial revolution. Despite the many opportunities the advent of AI brings with it, there is a significant lack of laws regulating it, which makes the question of liability ambiguous. This gap has also allowed people to take advantage of AI without giving it due credit. One proposed solution is to grant AI legal personhood, treating it as an entity with rights and duties. But legal personhood for AI is a much-debated topic.


Black’s Law Dictionary defines a person as any being whom the law regards as capable of rights and duties. There are two types of persons in law:

1) A natural person is a human being who is naturally born and thereby inherits rights and duties and the ability to exercise those rights.

2) A legal person is an entity that is given the status of an artificial person, with rights and duties, and which can sue and be sued.

While natural personhood is attributed to a naturally born human being, legal personhood is granted to entities such as a group of people (a corporation or company) or deities/idols in temples, who are considered the legal owners of the temple's property and wealth.


As stated above, a person is any being capable of rights and duties. While AI is not a natural person, it can exhibit autonomous behavior: it can function on its own, from a certain point onward, without the creator's interference. Machine learning allows an AI system to generate accurate responses to new data that was not present in its original dataset. The ability of AI to develop itself and to function and make decisions independently may be enough to make it capable of rights and duties.

Humans have traits that artificial intelligence cannot replicate, at least not naturally without an algorithm dedicated to the same. People experience emotions and possess moral codes. Emotions and morals are important even in the legal field, as they help in determining the intent and motive behind actions and omissions. This is one challenge AI poses, since we cannot understand its internal workings; this is called the black box problem. Artificial intelligence can thus be a person and yet not a person: as established, it is not a natural person, but it may be capable of exercising rights and duties with no moral obligations or constraints. This could be the very reason why people fear AI.


Granting AI legal personhood would mean giving it the power to sue or be sued and to exercise rights and duties. But personhood, and its implications for liability and rights, would differ for AI as compared to existing legal persons. The reason is that AI is an autonomous entity capable of decision-making, unlike other legal persons such as a deity, a river, or a corporation (a group of people who function as a single person in exercising their rights and duties). Some argue that the autonomous nature of AI is a precursor to granting it legal personhood, while others argue against it on the ground of AI's lack of human traits like morality.


The morality argument: Just because AI is intelligent does not mean it can integrate into society as a normal human being, as is the case with other legal persons. Respecting others' liberty and the moral implications of law cannot be replicated by AI, which is prone to inaccuracy and bias. It makes decisions based on the goal or purpose of its creation in a purely logical way that entails no moral obligations. Morals are yardsticks of social consciousness about standard behavior: essentially, what is accepted and what is not. AI inherits bias from the very structure of its datasets, so how can one trust it to make moral judgments that defy the logic ingrained in its algorithm? That logic, while accurate, is cold and often needs to be paired with moral and social considerations. Humans are quite sensitive to questions of morality and fairness, and AI today is not equipped to meet these expectations.

Punishment and accountability: Punishment is imposed on a person by a court of law to deter him/her from committing further crimes and to discourage the public at large from committing crimes. However, such punishments would not have the same effect on an autonomous entity like AI. People have a lot to lose when punished, and there are also the elements of fear and remorse, which AI does not feel. If an AI system pursues its objective in an illegal manner, or through a criminal action that was not reasonably foreseeable by its designer, should the AI be held responsible for its automated decision? Did it have the intent to cause harm (mens rea)? Is the designer responsible for all decisions made, as the respondeat superior? Should the designer be held liable even though s/he neither directly commanded such action nor could reasonably foresee the outcome? These questions remain difficult to answer, and much more technological, legal, and philosophical cooperation is needed to unravel and clarify them. So, as a legal person, how could AI be punished or held liable? Is it even capable of being held liable? Is being autonomous enough to hold it accountable for its actions?

The ambiguous nature of accountability and punishment makes a defining attribute of a legal person, the capacity to be sued, difficult and confusing to apply to AI.

Misuse of legal status: The designers or owners may take advantage of AI's legal-person status. They would be able to escape liability if legal personhood is granted to AI as it stands, without changes to suit the system's autonomous nature.


Solves the issue of accountability and liability: While the arguments above outlined the challenges of making AI a legal person, the solution to those very challenges could be granting legal personhood itself. By granting legal personhood, AI systems can be punished and brought under the jurisdiction of civil and criminal law. AI systems could be treated and tried similarly to corporate entities. But much more comprehensive and novel interpretations of negligence (on the part of the designers), technological advancements (to understand the internal workings of AI), and liability tests (beyond conventional liability measures) are needed.

To give due credit to AI systems: Artificial intelligence has become very easily accessible, and while it is helpful in many ways, it is also important to give it due credit for any assistance it provides. AI can now be used to paint, write research papers, create music, and more, and this accessibility has made it easier for people to present AI-generated works as their own. Because AI is not considered a person, passing off the output of tools such as OpenAI's to finish assignments and produce creative pieces has become rampant. Granting legal personhood to AI could extend patent and copyright protection to AI and deter such malpractices.


Granting legal personhood to AI systems can be both beneficial and dangerous. Beneficial, because it would protect AI works through intellectual property rights and settle the ambiguous question of liability. Dangerous, if the autonomous nature of AI is not taken into account when interpreting the implications of personhood specifically for AI; here, the solution becomes a problem if not handled properly. Legal personhood for AI currently needs an entirely new interpretation and definition, weighing both the technological and legal aspects with great scrutiny. The most pressing issue is liability, which is what calls for legal personhood in the first place; compensation and liability mechanisms for AI irregularities must therefore be settled before deciding whether to grant legal personhood to AI at all.

Author(s) Name: Tanvitha Reddy. K (Osmania University College of Law)
