Artificial Intelligence (AI) is becoming increasingly prevalent in modern society, with a growing number of industries and sectors using AI-powered tools to streamline processes, improve efficiency, and enhance decision-making. However, as the use of AI becomes more widespread, so too do concerns about the potential for harm or damage caused by AI. As a result, liability for harm or damage caused by AI has become a critical topic of discussion in legal and policy circles. The question of liability in the context of AI is complex and multifaceted, involving a range of legal, ethical, and practical considerations. One key challenge is that AI systems can operate autonomously and make decisions independently, raising questions about who should be held responsible in the event of harm or damage. Additionally, AI systems are often trained on large datasets, which can contain biases and inaccuracies that can result in discriminatory or harmful outcomes.
INDIAN LIABILITY FRAMEWORK FOR AI HARM OR DAMAGE
In India, the legal framework for AI harm or damage is governed by tort law. Liability can be imposed under the principles of strict liability, negligence, and vicarious liability. Strict liability applies when harm is caused by a defect in the AI system; the injured party does not need to prove fault. Negligence applies when harm is caused by the failure to exercise reasonable care in the development, design, or operation of the AI system. Vicarious liability applies when one party, such as the owner or operator of the AI system, is held liable for harm caused by an AI system deployed on its behalf. The Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011, and the Personal Data Protection Bill, 2019, are two regulations relevant to AI harm or damage in India. These regulations establish the obligation of organizations to implement reasonable security practices and procedures to safeguard sensitive personal data or information from unauthorized access, use, disclosure, and destruction.
COMPARISON OF INDIAN LIABILITY FRAMEWORK WITH OTHER COUNTRIES
The liability framework for AI harm or damage varies across countries. In the United States, the legal framework is similar to India’s: the principles of strict liability, negligence, and vicarious liability apply. The European Union, however, has proposed a more stringent framework that would establish a mandatory insurance requirement for AI systems. This would ensure that compensation is available in the event of AI harm or damage, even if the injured party is unable to prove fault or negligence. In India, the principles of proportionality and reasonableness also apply to AI harm or damage. The principle of proportionality suggests that the level of responsibility should correspond to the level of control the AI system has over the harm or damage, while the principle of reasonableness suggests that the standard of care should be determined by what is reasonable under the circumstances. The Indian case of K.R. Purushothaman v. Union of India and Others is an example of harm arising from negligence: “In this case, a glitch in the computerized evaluation system resulted in an incorrect valuation of answer sheets. The court held the authorities liable.” Another relevant principle in India is the doctrine of ‘res ipsa loquitur,’ meaning ‘the thing speaks for itself.’ “This principle applies when the harm or damage caused by the AI system is so obvious that it can be inferred that the defendant was negligent. For example, if an autonomous vehicle crashes into a pedestrian, it can be inferred that the manufacturer was negligent, as the vehicle is designed to prevent such accidents.” Another Indian case relevant to AI harm or damage is the Aadhaar data breach: “In this case, the government and UIDAI were held responsible for the data breach and were ordered to take steps to improve data security.”
This case highlights the importance of implementing reasonable security practices and procedures to prevent harm or damage caused by AI systems. The Supreme Court of India has also recognized the right to privacy as a fundamental right under the Indian Constitution, which may be violated by the use of AI systems. In comparison to other countries, India’s liability framework for AI harm or damage is relatively similar to that of the United States, with principles of strict liability, negligence, and vicarious liability applying. However, the European Union’s proposed mandatory insurance requirement for AI systems goes beyond India’s liability framework and may provide greater protection for injured parties.
INTERNATIONAL AGREEMENTS ON AI LIABILITY
The legal frameworks for liability for AI-related harm may vary across countries and regions, which could create challenges for the development and use of AI systems across international borders. The following are a few relevant international bodies and instruments:
- United Nations Guiding Principles on Business and Human Rights: These principles were endorsed by the UN Human Rights Council in 2011 and provide a framework for businesses to respect human rights, including the right to remedy for harms caused by their activities, products, or services.
- The Organisation for Economic Cooperation and Development (OECD) Principles on Artificial Intelligence: These principles were adopted in May 2019 and outline five principles for the responsible development and deployment of AI, including the principle of accountability, which states that those responsible for developing, deploying, or operating AI systems should be accountable for their proper functioning and any harm caused.
- The European Union’s General Data Protection Regulation (GDPR): The GDPR, which came into effect in May 2018, sets out rules for the protection of personal data and imposes liability on data controllers and processors for any harm caused by their processing activities.
- The Council of Europe’s Convention on Cybercrime: This convention, adopted in 2001, aims to harmonize national laws related to cybercrime, including the criminalization of certain acts related to computer systems, and includes provisions on liability for damage caused by cybercrime.

Harmonizing liability laws across international borders may be particularly challenging.
EFFICIENT SOLUTIONS AGAINST AI HARM
The intersection of artificial intelligence (AI) and law presents both opportunities and challenges. On the one hand, AI has the potential to transform the legal industry by making legal research and analysis more efficient, improving access to justice, and even helping to predict legal outcomes. On the other hand, AI raises significant ethical and legal questions, such as who is liable when an AI system causes harm or makes a mistake, and how to ensure that AI systems are transparent, unbiased, and trustworthy. To move forward in this area, several steps can be taken. One is to continue to develop and refine AI systems that can assist legal professionals in their work. For example, AI can help lawyers review large amounts of data, identify relevant cases and statutes, and even predict legal outcomes. However, these systems must be transparent and explainable so that legal professionals can understand how they arrived at their recommendations. Another important step is to establish legal frameworks and regulations that address the unique challenges posed by AI. For example, liability laws may need to be updated to account for the fact that AI systems can cause harm or make mistakes, and regulations may need to be put in place to ensure that AI systems are transparent, fair, and trustworthy. Finally, it is important to engage in public debate and education around AI and the law. This includes not only educating legal professionals and policymakers about the potential benefits and risks of AI but also involving the public in discussions about how AI should be regulated and deployed in the legal industry.
The increasing use of AI systems across industries in India and other countries raises the potential for harm or damage caused by these systems. The liability framework for AI harm or damage in India is governed by the principles of strict liability, negligence, and vicarious liability. Indian regulations, such as the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011, and the Personal Data Protection Bill, 2019, impose obligations on organizations to implement reasonable security practices and procedures to safeguard sensitive personal data or information from unauthorized access, use, disclosure, and destruction. Principles such as proportionality and reasonableness are relevant to AI harm or damage in India, and the Aadhaar data breach case highlights the importance of implementing reasonable security practices and procedures.
Author(s) Name: Utsav Biswas (Gujarat National Law University, Silvassa)