LEGAL AND ETHICAL REMEDIES FOR AI “HALLUCINATIONS” IN LEGAL FILINGS AND RESEARCH: MALPRACTICE & EVIDENTIARY APPROACH

INTRODUCTION

Artificial Intelligence (AI) is rapidly changing legal practice. Generative AI tools such as ChatGPT, Claude, Gemini, and Copilot are increasingly used to draft pleadings, conduct legal research, and summarize judgments. This adds efficiency, but it also carries a very real danger: AI hallucinations, instances where an AI confidently produces fabricated or false facts, statutes, or legal citations.

This phenomenon is no longer hypothetical. In 2023, an American attorney was sanctioned for submitting a brief containing entirely fictional case law generated by ChatGPT.[1] Such incidents raise immediate legal and ethical questions: who is accountable for AI errors in legal submissions, and how can the legal system curb this kind of misuse without rejecting the technology entirely?

This blog discusses legal and ethical responses to AI hallucinations, focusing on malpractice liability, evidentiary concerns, regulatory measures, and preventive best practices.

UNDERSTANDING AI HALLUCINATIONS IN LEGAL CONTEXT

AI tools generate content probabilistically; they do not “know” the law but predict patterns of language. This often results in:

  • Fabricated case citations (non-existent judgments).
  • Incorrect statements of legal rules.
  • Fake statutory references or scholarly sources.
  • Inaccurate summaries of authentic judgments.

The most dramatic instance is Mata v Avianca, Inc, in which lawyers filed a brief citing fabricated cases produced by ChatGPT. The judge imposed sanctions and stressed that lawyers must verify their citations.[2] The case reaffirmed that AI tools are not sources of law and cannot substitute for professional legal research.

PROFESSIONAL RESPONSIBILITY AND MALPRACTICE LIABILITY

Professional Negligence and Duty of Competence

Legal professionals owe their clients a duty of competence, requiring accuracy and diligence.[3] Relying on hallucinated AI content without verification can amount to professional negligence.

In India, the Advocates Act 1961 and the Bar Council of India Rules require advocates to uphold professional standards and avoid misleading the court.[4] A lawyer who files a pleading containing phantom case citations may face complaints and disciplinary action for professional misconduct.

Ethical Responsibilities

Part VI, Chapter II of the Bar Council of India Rules requires advocates to act in good faith, refrain from misleading statements, and uphold the dignity of the profession.[5]

The position is similar under Rule 1.1 of the ABA Model Rules of Professional Conduct, which formally recognizes technological competence as a component of legal competence.[6] In India, where specific AI rules have not yet been framed, this principle can inform emerging standards of professionalism.

EVIDENTIARY ISSUES AND LEGAL FILINGS

Authentication of Legal Authorities

Courts and lawyers depend on authentic, verifiable sources of law. Fabricated citations cannot be authenticated and are liable to be rejected outright. Hallucinated content:

  • Harms the lawyer’s credibility.
  • Can lead to costs or dismissal of claims.
  • Can even result in sanctions.

Best Evidence and Legal Reliability

The Indian Evidence Act 1872 emphasizes reliability and authenticity.[7] Pleading hallucinated citations is tantamount to relying on forged evidence. Even if inadvertent, it can discredit the entire pleading.

Judicial Responses

Courts are already responding. In Mata v Avianca, the court sanctioned the lawyers for failing to fact-check AI outputs.[8] Likewise, standing orders in several US courts now require lawyers to certify that AI-generated content has been verified.[9]

Indian courts have not yet followed suit, but these developments set a precedent that may shape future procedural reform.

REGULATORY AND JUDICIAL DEVELOPMENTS

Court-Level Measures

In May 2023, US District Judge Brantley Starr issued a standing order requiring lawyers to disclose whether AI was used in preparing filings and to certify that citations had been checked.[10] This is an affirmative judicial step aimed at deterring the filing of bogus citations.

In India, initiatives such as the e-Courts Mission Mode Project reflect the growing digitalization of the judiciary.[11] While no formal AI rules have yet been framed, similar procedural guidelines may follow.

Professional Regulation

The Bar Council of India could issue professional advisories requiring:

  • Mandatory verification of AI output.
  • Maintenance of records of AI-enabled work.
  • Disciplinary action for submitting fake content.

Globally, the EU AI Act categorizes some legal applications of AI as “high risk” with more accountability and transparency required.[12] Explainability and reliability are similarly emphasized by the OECD AI Principles.[13] These frameworks provide templates for crafting domestic regulations.

REMEDIES AND ACCOUNTABILITY MECHANISMS

Civil Liability (Malpractice)

A lawyer who acts on hallucinated AI outputs, causing injury to a client, may be sued for malpractice or negligence. Civil liability thus remains a core mechanism of professional accountability.

Disciplinary Action

Section 35 of the Advocates Act 1961 provides that advocates may be suspended or struck off the rolls for misconduct.[14] Submitting hallucinated citations, even without intent, can trigger disciplinary proceedings.

Contempt of Court

Knowingly submitting fabricated legal material may also amount to contempt of court under the Contempt of Courts Act 1971.[15] This underscores the importance of verifying legal material before filing.

PREVENTIVE MEASURES AND BEST PRACTICES

Mandatory Verification Protocol

Counsel should follow an internal verification checklist before filing pleadings prepared with the aid of AI, including:

  • Cross-verifying citations against authentic databases (SCC Online, Manupatra, LexisNexis).
  • Checking statutory references on official government websites.
  • Confirming factual statements for accuracy.

Disclosure of AI Use

Where appropriate, counsel can disclose that AI tools were used in preparation but that all content has been verified manually. This adds transparency and credibility.

Capacity Building and AI Literacy

Bar Councils and law schools should integrate AI literacy and digital legal research training into their curricula. Lawyers need to understand both the capabilities and the limitations of AI.

Institutional AI Policies

 Law firms need to:

  • Limit AI use to approved platforms.
  • Preserve source documentation.
  • Enforce sign-off for verification before submission.
  • Prevent blind copying of AI responses.

WAY FORWARD: RESPONSIBLE INTEGRATION OF AI

AI is here to stay. It can make legal research more efficient and enhance access to justice, but only if used responsibly. The legal profession cannot abdicate its responsibility to a machine.

Judiciaries, bar councils, and practitioners must work together to establish clear ethical, evidentiary, and procedural norms for AI use. India can draw on cross-border judicial and regulatory developments without stifling innovation.

The principle is straightforward:

“AI can assist, but lawyers must verify.”

A robust culture of verification underpinned by professional discipline and judicial oversight will guarantee that AI enhances the rule of law rather than weakening it.

Author(s) Name: Ved Kakade (Maharashtra National Law University, Mumbai)

References:

[1] Mata v Avianca, Inc No 1:22-cv-01461 (SDNY, 22 June 2023)

[2] P Kevin Castel, ‘Order re: Sanctions’ Mata v Avianca (SDNY, 22 June 2023)

[3] American Bar Association, Model Rules of Professional Conduct (2020) r 1.1

[4] Advocates Act 1961 (India)

[5] Bar Council of India Rules, Part VI, ch II

[6] ABA, Model Rule 1.1, Comment 8 (technology competence)

[7] Indian Evidence Act 1872

[8] Mata v Avianca (n 1)

[9] US District Court (ND Tex), ‘Standing Order on AI Filings’ (30 May 2023)

[10] Brantley Starr, ‘Mandatory Certification Regarding Generative AI’ (2023)

[11] Supreme Court of India, e-Courts Mission Mode Project (Phase III)

[12] European Commission, ‘Proposal for a Regulation on Artificial Intelligence (AI Act)’ COM (2021) 206 final

[13] OECD, ‘OECD AI Principles’ (2019)

[14] Advocates Act 1961 (India) s 35

[15] Contempt of Courts Act 1971 (India)