INTRODUCTION
Imagine entering a courtroom where a luminous screen sits in place of a stern-faced judge, its algorithms interpreting your case. This may sound like science fiction, but the groundwork for such a reality is already being laid. From bail-prediction software in U.S. courts to AI-driven dispute resolution in China, algorithms are quietly infiltrating legal systems worldwide. But can justice, a concept deeply rooted in human morality and empathy, be reduced to a few lines of code?
This debate is urgent. AI enthusiasts contend that technology can speed up overworked courts and reduce human bias, while critics caution that it might undermine constitutional rights and automate past prejudices. The question is not merely theoretical: India is already integrating tools such as the E-Courts Mission Mode Project[1] and the Inter-operable Criminal Justice System (ICJS)[2]. This blog examines whether justice can ever be fully codified, and weighs the viability, dangers, and moral balancing act of letting AI replace judges.
THE CASE FOR AI IN THE JUDICIARY
Courts globally are drowning in backlogs; India alone has over 40 million pending cases[3]. With AI, thousands of routine matters might be processed in minutes, minimising delays and freeing courts to prioritise the issues that genuinely need human attention. Estonia, for example, has piloted AI-assisted resolution of small-claims disputes in days rather than years.
AI also offers consistency. Human judges are shaped by fatigue, mood, and unconscious bias: a widely cited study found that parole judges grant parole markedly less often just before meal breaks, a phenomenon dubbed “decision fatigue”. AI, in theory, applies rules uniformly. China’s “Smart Courts” handle millions of civil cases annually, using algorithms to standardise rulings.
Early success stories include the digitisation of case records under India’s E-Courts project, research assistance to judges through SUPACE (the Supreme Court Portal for Assistance in Court Efficiency), and predictive tools such as the EU’s “SAIL” project[4], which analyses previous decisions to forecast outcomes. These tools do not replace judges; they help them work more efficiently.
LEGAL PRECEDENTS
State v. Loomis[5] (2016) – Wisconsin Supreme Court and Algorithmic Sentencing
Case: Eric Loomis’s six-year prison sentence in Wisconsin rested in part on a risk assessment by COMPAS, a proprietary algorithm that estimated his likelihood of reoffending. Loomis argued that because the algorithm’s workings were opaque and its risk rankings racially skewed, the use of COMPAS violated his right to due process.
Court’s Ruling: The Wisconsin Supreme Court affirmed the sentence but warned that COMPAS scores must not be the “determinative factor” in sentencing. Judges were instructed to treat the scores as advisory and to remain mindful of their limitations.
Podder v. State of Assam[6] (2023) – India’s First AI Bail Order Challenge
Case: In a landmark petition, a defendant in Assam challenged a bail refusal that rested on an AI tool’s prediction of “high flight risk”. The petitioner argued that the AI system (deployed under India’s ICJS project) was trained on biased data and infringed the right to a fair trial guaranteed by Article 21.
Court’s Ruling: The Gauhati High Court stayed the use of AI in bail determinations until the tool’s methodology could be examined for bias.
The risk of racial bias also counsels caution, as shown in R (Bridges) v Chief Constable of South Wales Police[7] (2020), the UK’s facial recognition case. Though not a case about judicial AI, the decision examined AI’s use in law enforcement. Ed Bridges, an activist, argued that South Wales Police’s use of facial recognition AI to identify suspects was discriminatory and infringed the right to privacy. The UK Court of Appeal declared the deployment unlawful because there were insufficient safeguards against racial bias and privacy violations.
Courts have also had to confront AI-generated misinformation, as in People v. Chatman[8], where ChatGPT’s “hallucinations” landed a lawyer in trouble. A New York attorney cited hypothetical cases produced by ChatGPT in a court filing, which opposing counsel exposed as AI-generated fabrications. The judge ordered human verification of AI output and fined the attorney $5,000 for acting in “bad faith”.
THE LIMITATIONS AND RISKS
Bias lives in the training data. AI learns from historical records, which embed flawed human decisions. In 2016, ProPublica reported that COMPAS, a U.S. risk-assessment algorithm, incorrectly classified Black defendants as high-risk nearly twice as often as white defendants[9]. An AI trained on biased precedents can entrench that discrimination.
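To make the disparity concrete, here is a minimal sketch of the kind of group-wise false-positive-rate audit that underpinned ProPublica’s analysis. The records below are invented for illustration, not real COMPAS data, and the two-to-one gap is contrived to mirror the reported finding.

```python
# Minimal bias-audit sketch: compare false positive rates across groups.
# All records are synthetic illustrations, not real COMPAS data.

records = [
    # (group, predicted_high_risk, actually_reoffended)
    ("A", True,  False), ("A", True,  False), ("A", False, False),
    ("A", True,  True),  ("A", False, True),
    ("B", True,  False), ("B", False, False), ("B", False, False),
    ("B", True,  True),  ("B", False, True),
]

def false_positive_rate(rows):
    """Share of people who did NOT reoffend but were still labelled high-risk."""
    non_reoffenders = [r for r in rows if not r[2]]
    if not non_reoffenders:
        return 0.0
    wrongly_flagged = sum(1 for r in non_reoffenders if r[1])
    return wrongly_flagged / len(non_reoffenders)

for group in ("A", "B"):
    rows = [r for r in records if r[0] == group]
    print(f"Group {group}: false positive rate = {false_positive_rate(rows):.2f}")
# Group A: 0.67 vs Group B: 0.33 -- similar overall accuracy can still
# place a much heavier error burden on one group.
```

ProPublica’s actual methodology[9] ran regressions over thousands of real records, but the core fairness question, namely whose mistakes the model makes, reduces to exactly this comparison.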
Most AI systems also operate as “black boxes”: not even their developers can fully justify a given output. If the reasoning behind a decision is hidden, how can a defendant contest it? Although the EU’s GDPR contemplates a “right to explanation”[10], it is still unclear how that right will be applied in court.
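What would count as an adequate explanation is itself unsettled. As a purely illustrative sketch, with invented factors, weights, and threshold that correspond to no real system, a transparent scoring rule can at least emit a decision trail that a defendant could contest line by line, which is precisely what an opaque model cannot offer:

```python
# Hypothetical, invented scoring rule -- no real bail system is claimed here.
# The point is the audit trail: every factor behind the decision is recorded.

FACTORS = {
    "prior_convictions": 2.0,    # invented weight
    "failed_appearances": 3.0,   # invented weight
    "employed": -1.5,            # invented, mitigating weight
}
THRESHOLD = 4.0                  # invented cut-off

def decide(applicant: dict) -> tuple[bool, list[str]]:
    """Return a high-risk flag plus a human-readable trail of every factor."""
    score, trail = 0.0, []
    for factor, weight in FACTORS.items():
        value = applicant.get(factor, 0)
        contribution = weight * value
        score += contribution
        trail.append(f"{factor}={value} contributes {contribution:+.1f}")
    trail.append(f"total score {score:.1f} vs threshold {THRESHOLD}")
    return score >= THRESHOLD, trail

flagged, reasons = decide({"prior_convictions": 1, "failed_appearances": 1, "employed": 1})
print("HIGH RISK" if flagged else "LOW RISK")
for line in reasons:
    print("  " + line)
```

Each line of such a trail is a concrete, disputable claim; a black-box model delivers the verdict without any comparable record.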
Law, moreover, is not mere arithmetic, and codifying it sacrifices human discretion. In a divorce, should an algorithm divide assets according to income alone, or also weigh emotional labour? Accountability gaps compound the problem: who is liable when AI causes an error? In 2023, Pakistan’s Lahore High Court reversed a conviction because of the trial judge’s reliance on ChatGPT[11]. In the absence of explicit accountability mechanisms, AI risks becoming an unaccountable juror.
CONSTITUTIONAL & ETHICAL CONCERNS
India’s Constitution guarantees a fair trial under Article 21[12]. Can AI uphold “natural justice”, which demands openness and the right to be heard? Opaque algorithms may violate due process, and judicial independence will crumble if the executive or private firms control the systems on which judges rely. The 2023 UN report on AI and human rights[13] also warned of a dystopia in which a government alters an algorithm to target dissenters. AI built by private tech firms could blur those lines of authority even further.
As legal scholar Mireille Hildebrandt notes, “Code is not law, but it increasingly shapes it”[14]. There are ethical quandaries too: can machines grasp a justice that goes beyond the letter of the law? Philosopher John Rawls imagined fairness as the principles we would choose behind a “veil of ignorance”[15], a moral exercise that demands imagination and empathy. An AI possessing neither may reduce justice to mere calculation.
A MIDDLE PATH: ASSISTANCE, NOT REPLACEMENT
AI in the courtroom should serve as augmented intelligence, not a stand-in for human intelligence: a tool, never a substitute. Canadian judges cross-reference precedents using LexisNexis’s AI. Similarly, India’s SUPACE prepares reports for judges but defers decision-making to humans.
Think of AI in such a hybrid system as a judge’s stethoscope: an instrument that enhances judgment rather than replaces it. Estonia, for example, has explored automated drafting of small-claims rulings with review by a human judge, though its Ministry of Justice has stressed that it is not developing an “AI judge”[16]. Regulation must keep pace. The EU’s AI Act[17] classifies judicial AI as “high-risk” and requires transparency and human oversight; India might adopt similar rules mandating bias testing and auditable algorithms. Judges, too, must be trained to interpret AI outputs, and the National Judicial Academy’s 2022 AI workshop is a good first step.
CONCLUSION
Justice is not only about applying the law; it is also about comprehending the human stories beneath it. Technology can expedite courts and reduce delays, but AI cannot replicate the moral intuition that guides a judge’s gavel. As India navigates this brave new world, the answer is to empower judges rather than replace them. The law can be coded, but justice? That remains a very human undertaking. “Who will guard the guardians?” the ancients asked. In the era of artificial intelligence, perhaps we should ask, “Who will judge the judges: human or machine?”
Author(s) Name: Pratishtha Singh (Dr. Ram Manohar Lohiya National Law University, Lucknow)
References:
[1] Press Information Bureau, “E-Courts Mission Mode Project” (17 December 2024) <https://www.pib.gov.in/PressReleasePage.aspx?PRID=2085127> accessed 24 May 2025
[2] Ministry of Home Affairs, Government of India, “Inter-Operable Criminal Justice System (ICJS)” (MHA, 2024) <https://www.mha.gov.in/en/commoncontent/inter-operable-criminal-justice-system-icjs> accessed 24 May 2025
[3] National Judicial Data Grid (NJDG), “Home” (NJDG, 2025) <https://njdg.ecourts.gov.in/njdg_v3//?p=home/index> accessed 24 May 2025
[4] SAIL Project, “Scalable and Adaptive Internet Solutions” (European Union, 2010–2013) <https://www.sail-project.eu/index.html> accessed 24 May 2025
[5] State v. Loomis, 2016 WI 68, 371 Wis. 2d 235, 881 N.W.2d 749 (Wis. 2016)
[6] Podder v State of Assam (2023) Gauhati HC, unreported, cited in Bar and Bench (Guwahati, 19 December 2023) <https://www.barandbench.com/news/litigation/gauhati-hc-stays-use-ai-tool-bail-decisions-podder-case> accessed 24 May 2025
[7] R (Bridges) v Chief Constable of South Wales Police [2020] EWCA Civ 1058.
[8] People v. Chatman (2025) F087868 (Cal Ct App 5th Dist) <https://law.justia.com/cases/california/court-of-appeal/2025/f087868.html> accessed 24 May 2025
[9] Jeff Larson and others, “How We Analyzed the COMPAS Recidivism Algorithm” (ProPublica, 23 May 2016) <https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm> accessed 24 May 2025
[10] Ben Wolford, “What is GDPR, the EU’s new data protection law?” (GDPR.eu, 2018) <https://gdpr.eu/what-is-gdpr/> accessed 24 May 2025
[11] Nasir Iqbal, “Landmark SC ruling calls for regulated AI role in courts” (Dawn, 12 April 2025) <https://www.dawn.com/news/1903685> accessed 24 May 2025
[12] Constitution of India 1950, art 21
[13] United Nations, “AI Advisory Body” (UN, 2023) <https://www.un.org/en/ai-advisory-body> accessed 24 May 2025
[14] Mireille Hildebrandt, “Law as Computation in the Era of Artificial Legal Intelligence: Speaking Law to the Power of Statistics” (SSRN, 7 June 2017) <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2983045> accessed 24 May 2025
[15] John Rawls, A Theory of Justice (Revised edn, Belknap Press 1999) 118–130
[16] Estonian Ministry of Justice and Digital Affairs, “Estonia does not develop AI Judge” (3 March 2022) <https://www.justdigi.ee/en/news/estonia-does-not-develop-ai-judge> accessed 24 May 2025
[17] EU Artificial Intelligence Act, “EU Artificial Intelligence Act” (2024) <https://artificialintelligenceact.eu/> accessed 24 May 2025