



Artificial Intelligence (AI) has become part of our lives without many of us realizing it. From personalized shopping experiences on e-commerce sites to video recommendations based on our search and watch history on YouTube and Netflix, AI technologies cater to our tastes and preferences. They are also capable of doing our jobs: Kerala inducted a robot into its police force, while restaurants in Chennai and Hyderabad have had robots serving and interacting with customers. From detecting diseases and performing surgeries to precision agriculture through crop classification and area estimation, AI technologies can be involved in every field. They have even been deployed to tackle social issues, such as identifying and curbing school dropouts in Andhra Pradesh. But, as with every other field, AI too requires regulation.


To develop regulations and laws, there must be a clear understanding of what constitutes artificial intelligence, whether it is a legal entity, and who bears liability in case of damage, among other things.

One view is that AI can be treated as an individual legal entity under the law because, after a certain stage, AI learns from human activity and performs its functions on its own; the creator or programmer no longer plays any role. As a legal entity, AI would have rights and duties and would be subject to the laws of the country. This raises questions about how laws are to be enforced against such technologies and how they are to be punished. Most importantly, the problem of liability arises. Since AI is an inanimate thing, liability would fall on the programmer alone; yet the programmer could claim that the AI acted on the basis of machine learning, without any interference on the programmer's part. If AI technologies are granted rights or citizenship, as in the case of Sophia the robot, further questions arise: whether the technology can buy property or be allowed to vote, and whether the state should permit AI to enter into contracts and be bound by them.

There is a gap between AI-generated works and copyright law. Under the Copyright Act, copyright can be granted only to an author who is a natural person, a human being, not an artificial person. Consequently, when something is produced by AI without any human assistance or input, authorship is vested in the person who developed the AI, on the assumption that the program or code written by that person produced the output.


Data security and privacy is another major area of concern with AI, which uses large amounts of its users' data to generate responses and perform its functions. In K.S. Puttaswamy v. Union of India, the right to privacy in cyberspace was held to be a fundamental right. This underscores the need to pass the Personal Data Protection Bill, 2019, and calls on government and non-government actors alike to strike a balance between AI learning and the right to privacy.

The European Union has the General Data Protection Regulation (GDPR) for data privacy and usage.


No country has yet been able to fully understand or define AI technologies, so it would be unwise to eliminate human oversight of their functioning entirely. At the same time, if humans can interfere with these technologies, people may feed them biased data and skew their results, and courts and executive bodies will have a hard time detecting a third party's involvement in offenses committed through AI.

Moreover, however well a technology may grasp human thinking and actions, it can neither understand nor follow human values and ethics. This is a problem especially where such technologies are used in place of lawyers or civil servants.


Given the problems outlined above, most countries have taken steps to define AI and to formulate legislation regulating it. The UK has formed a committee in the House of Lords that publishes reports and recommendations. The European Union has adopted the EU Resolution on Robotics, which defines AI, covers liability issues, and lays down basic rules and ethics to be followed by developers, operators, and manufacturers in the robotics field, drawing on Asimov's Three Laws of Robotics. Under the resolution, a robot cannot be held liable; responsibility falls on the user or manufacturer. But this leaves open the question of liability for a decision the robot makes itself, or for actions and inactions that cannot be predicted.

Regulations in EU countries are otherwise focused on automated and semi-automated vehicles. Germany, for example, has amended its Road Traffic Act to impose responsibility on the owner of an automated or semi-automated vehicle for its operation.

Russia is drafting a law known as the Grishin Law, similar to the EU resolution, which places all responsibility on the robot's developer, operator, or manufacturer and also deals with such nuances as the robot's representation in court and supervisory agencies. Russia also has a draft Model Convention on Robotics and AI covering the creation and use of robots and AI.

The US does not grant AI legal status as an individual. Section 3 of its Artificial Intelligence Initiative Act defines AI to include artificial systems that perform tasks without human presence and can learn from experience; computer software or physical hardware that can solve tasks and think or act like humans; and systems that act rationally to achieve goals.

Countries are thus not in favor of granting legal status to AI technologies or robots and want owners to be liable for every action or inaction of an AI machine. Such an approach could significantly hamper AI development.


Given AI's scope and possibilities in various fields, it would be unwise, and indeed impossible, to resist its advance. What is needed to ensure the best application and use of AI in the country is adequate regulation. That, in turn, requires a clear understanding and definition of AI, its legal position, and its liability, keeping in mind that the ultimate end of AI is to make human life easier. The definition and the laws should therefore be formulated in the best interests of the larger community.

Author(s) Name: Anugra Anna Shaju (National University of Advanced Legal Studies, Kochi)



