INTRODUCTION:
Technology has developed to a scale that was unimaginable only a few decades ago. Artificial intelligence (AI) has made its way into the modern world and has become a familiar subject to everyone. The legal system has also embraced it for imparting justice, providing data, and other functions. Yet even though AI rests on mathematical computation and has made our lives much easier, it can produce biased results, and such bias undermines the impartial administration of justice.
WHAT DEFINES ARTIFICIAL INTELLIGENCE AND WHAT UNDERLIES ITS FUNCTIONALITY:
Broadly speaking, AI denotes a computerized system possessing intelligence comparable to that of humans, endowed with diverse cognitive abilities, and programmed to execute a variety of tasks. From the moment we wake to the end of the day, our lives are filled with it. Common instances of AI we use daily include smart assistants such as Siri and Alexa, Google Maps, autonomous vehicles, and ChatGPT.
Working of Artificial Intelligence: The science of AI strives to develop computer systems capable of emulating human behaviour, enabling the resolution of intricate problems through human-like thought processes. AI systems function by integrating extensive datasets with intelligent processing algorithms, executing multiple tasks in a remarkably short period. To achieve this, AI leverages key methodologies such as “machine learning” and “deep learning,” executing assigned tasks in a manner akin to human cognition.[1]
UNFOLDING THE TERMS ALGORITHM AND ALGORITHMIC BIAS:
An algorithm is a collection of instructions or a finite sequence of steps that are used to solve a problem or complete a task. Algorithmic bias refers to the systematic and unfair prejudices in AI systems’ outputs, resulting from inherent preconceptions in data or algorithms. This bias is prevalent in various industries, such as banking, recruiting, criminal justice, and healthcare, as AI becomes increasingly used in decision-making processes.
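To make the definition concrete, an everyday algorithm can be written out as a short, finite sequence of steps. The example below is a purely illustrative sketch (the function name and data are invented), showing the "finite sequence of steps" idea with a routine that finds the largest number in a list:

```python
def largest(numbers):
    """A finite sequence of steps to find the largest value in a list."""
    result = numbers[0]          # Step 1: assume the first value is the largest
    for n in numbers[1:]:        # Step 2: examine each remaining value
        if n > result:           # Step 3: keep it if it beats the current best
            result = n
    return result                # Step 4: report the answer

print(largest([3, 17, 8, 42, 5]))  # → 42
```

Every AI system, however sophisticated, is ultimately built from such step-by-step procedures; bias enters not through the steps themselves but through the data they are applied to.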
WHAT IS PREDICTIVE JUSTICE AND ITS NEED IN THE JUDICIAL SYSTEM:
Predictive justice refers to the use of large amounts of data to reach a judicial outcome. Predicting a case’s outcome has long been important to the profession of law: for legal advice and the rendering of judgments, a reasonable appraisal of the prospective legal repercussions is essential. Artificial intelligence offers a considerably more sophisticated approach to prediction than the traditional instruments lawyers and judges have relied on (professional expertise, empirical knowledge, and the like), and with the advent of modern technology AI has entered this sphere. In law enforcement and criminal justice, predictive justice uses AI and data analytics to allocate resources efficiently, reduce crime, and enhance public safety, employing algorithms and machine learning to identify high-risk areas and predict criminal activity.
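At its simplest, the “high-risk area” logic described above amounts to extrapolating from past incident counts. The sketch below is purely illustrative (the area names, incident log, and threshold are all invented) and shows the crude pattern-matching that underlies such tools — and why skewed historical records produce skewed flags:

```python
from collections import Counter

# Hypothetical incident log: the area where each past incident was recorded
# (all data invented for illustration).
incidents = ["north", "north", "east", "north", "south", "east", "north"]

def high_risk_areas(log, threshold=3):
    """Flag areas whose past incident count meets the threshold --
    a crude stand-in for the extrapolation such tools perform."""
    counts = Counter(log)
    return [area for area, n in counts.items() if n >= threshold]

print(high_risk_areas(incidents))  # → ['north']
```

Note that if past policing concentrated on one area, that area generates more records and is flagged again, illustrating how historical data can feed back into future predictions.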
IMPLEMENTATION OF ARTIFICIAL INTELLIGENCE IN INDIAN JUSTICE SYSTEM:
The Indian government has actively adopted artificial intelligence across its various departments. The judicial system, too, has been an early adopter, drawing on data collected with the help of AI to impart justice and arrive at decisions.
Justice L. Nageswara Rao, who leads the Supreme Court AI Committee, has expressed the intention to employ artificial intelligence in administrative functions and to expedite legal procedures.[2] The judiciary has also attempted to deploy various ICT tools to improve the efficiency of the court system.
- eCourts Mission Mode Project (eCourts project):
The eCourts Integrated Mission Mode Project, launched in 2007, represents a key component of the National e-Governance initiatives implemented across District and Subordinate Courts throughout the country. Its primary focus lies in equipping courts with the necessary hardware and software applications to facilitate the delivery of electronic services. Additionally, the project empowers the judiciary with tools for monitoring and managing court functions. Its ultimate aim is to advance and optimize the Information and Communication Technology system within the judicial domain.[3]
- SUVAAS:
SUVAAS provides translation services for legal judgments and other legal documents, facilitating the conversion between English and nine vernacular languages in both directions.
- SUPACE:
Chief Justice S.A. Bobde inaugurated the Supreme Court Portal for Assistance in Court Efficiency (SUPACE) on April 6, 2021. SUPACE is an artificial intelligence gateway providing relevant information and legislation to courts, aiming to increase efficiency and reduce case pendency.
POSSIBILITY OF BIAS IN THE AI-BASED ALGORITHMS:
There are two types of algorithms: computational algorithms and learning algorithms. A computational algorithm applies previously defined rules to an input to reach an output. A learning algorithm, by contrast, draws on recorded past outcomes and learns from them to reach its conclusions. The possibility of bias is greater in learning algorithms.
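The distinction can be sketched in a few lines. In this hypothetical example (the groups, records, and rules are all invented for illustration), the computational algorithm applies a fixed, predefined rule, while the learning algorithm derives its rule from historical outcomes — so any skew in those outcomes carries straight into future predictions:

```python
# Computational algorithm: the rule is fixed in advance.
def fixed_rule(prior_offences):
    return "high risk" if prior_offences >= 3 else "low risk"

# Learning algorithm: the rule is derived from historical records.
# Hypothetical records: (group, was_flagged_high_risk)
history = [("A", True), ("A", True), ("A", False),
           ("B", False), ("B", False), ("B", True)]

def learned_rule(group):
    # Predict "high risk" if the group was flagged more often than not
    # in the historical data -- faithfully reproducing any past bias.
    flags = [flagged for g, flagged in history if g == group]
    return "high risk" if sum(flags) > len(flags) / 2 else "low risk"

print(fixed_rule(1))       # prints "low risk", regardless of group
print(learned_rule("A"))   # prints "high risk" -- inherited from skewed history
print(learned_rule("B"))   # prints "low risk"
```

No one wrote a biased rule here; the learning algorithm simply reproduced the pattern present in its training records, which is precisely the mechanism the following paragraphs describe.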
A significant concern in discussions about AI-driven judicial systems is the potential for bias. Even if the creators of an algorithm did not intentionally embed bias, the data used to train it often mirrors existing systemic biases.
ProPublica, a non-profit investigative journalism organization, analyzed the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) system in the United States. This AI tool is designed to evaluate the likelihood of recidivism, that is, the probability of a convicted criminal committing further offenses after release. The system is used to inform decisions on pre-trial detention, sentencing, and early release. However, the investigation revealed that COMPAS consistently overestimated the risk of recidivism for African American offenders. In contrast, white offenders who did re-offend within two years were inaccurately identified as low-risk twice as frequently.[4]
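The disparity ProPublica reported can be expressed as a gap in error rates between groups. The sketch below uses invented counts (not ProPublica’s actual figures) to show how a false positive rate — non-reoffenders wrongly labelled high risk — is computed and compared across groups:

```python
# Hypothetical confusion counts per group (invented for illustration):
# (wrongly labelled high risk, total who did NOT reoffend)
groups = {
    "Group 1": (45, 100),  # false positives, actual non-reoffenders
    "Group 2": (23, 100),
}

for name, (false_pos, non_reoffenders) in groups.items():
    fpr = false_pos / non_reoffenders
    print(f"{name}: false positive rate = {fpr:.0%}")
# A large gap between the two groups' rates is the kind of
# disparity the COMPAS investigation highlighted.
```

Measuring error rates per group, rather than overall accuracy, is what exposed the disparity: a tool can look accurate in aggregate while distributing its mistakes very unevenly.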
SCOPE FOR BIAS IN ALGORITHMIC JUDGEMENT IN THE DIVERSE AND GENDERED SOCIETY OF INDIA:
The algorithms that AI-based software uses are often built on pre-existing, historically available data. Because gendered notions, caste-based differences, and differences of sexual orientation run through the veins of society, AI often ends up reaching biased conclusions. Even where no human being has deliberately prejudiced the algorithm, bias may still be present in its results.
The data these algorithms ingest carries neighbourhood-level assumptions shaped by prejudice, and using such AI tools to predict cases may in fact deepen extremism and gender bias in society. These tools should therefore be used with great caution: the devil lies in the data itself, and careless use may entrench caste distortions in society rather than deliver a more efficient justice system.
MEASURES TO DECREASE DATA BIAS:
One approach involves providing cost-effective devices to empower marginalized groups, enabling them to access the internet, express themselves, and contribute to the generation of knowledge about their communities. This initiative aims to enhance the credibility of Indian datasets by averting the distortion of data.[5]
An alternative approach involves educating journalists, activists, and lawyers to foster a technical skill set that enables them to scrutinize AI systems and ensure accountability, similar to practices in Western countries. Examining the AI implementations in the judicial systems of other nations can also provide insights into effective management strategies.
CONCLUSION:
Artificial intelligence and the tools of technology are never bad in themselves, unless they set your own house on fire. Predictive justice becomes important in today’s world, considering the pendency of cases. However, using tools that have the potential to discriminate against sections of society by producing biased results is not an effective way forward. There is no such thing as a perfect system or tool, but we must avoid introducing systems that import racism and unfairness into the criminal justice system. Society must recognize that the effectiveness of computer algorithms depends on the data they are fed, and must encourage algorithmic transparency through regulatory controls. Failing this, we risk abandoning the ideal of a fair society that treats all its inhabitants equitably. The need of the hour is to make judicious use of these tools to move towards a fairer and more equitable society rather than to put it into reverse gear.
Author(s) Name: Krishi Mittal (University Institute of Legal Studies, Panjab University, Chandigarh)
Reference(s):
[1] Rashi Maheshwari, ‘What Is Artificial Intelligence (AI) And How Does It Work?’ (Forbes Advisor India, 3 April 2023) <https://www.forbes.com/advisor/in/business/software/what-is-ai/> accessed 23 July 2023.
[2] Justice L.N. Rao, ‘AI and the Law’ (online webinar of Shyam Padman Associates, 6 August 2020) accessed 23 July 2023.
[3] ‘eCourts Mission Mode Project’ (Department of Justice) <https://doj.gov.in/ecourts-mission-mode-project-2/> accessed 23 July 2023.
[4] Ajoy, ‘Artificial Intelligence and Judicial Bias’ (Centre for Law & Policy Research, 28 August 2021) <https://clpr.org.in/blog/artificial-intelligence-and-the-courts/> accessed 23 July 2023.
[5] Shijith Kunhitty, ‘AI Algorithms Far from Neutral in India’ (Mint, 18 February 2021) <https://www.livemint.com/news/world/ai-algorithms-far-from-neutral-in-india-11613617957200.html> accessed 23 July 2023.