INTRODUCTION
- The Trial Where the Evidence Lies
Imagine this: a courtroom in Delhi. The prosecutor presents a video of the accused admitting to the crime in his own words. The voice matches. The body language looks authentic. The video even contains intimate details that no one else could know. Then the defence stands up and alleges: this video is a deepfake.
Suddenly, the trial changes. It is no longer about innocence or guilt; it is about whether the evidence itself is even real.
Deepfakes are synthetic media created using artificial intelligence, typically deep learning,[1] that can make a person appear to have done or said something they never did. Given enough training material (photos, voice samples, and video clips), AI can produce audiovisual content that is almost indistinguishable from reality.
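For readers curious about the mechanics, here is a minimal sketch of the adversarial training idea behind many deepfake generators, the generative adversarial network described in the paper cited above.[1] It uses PyTorch with toy dimensions and random stand-in data; real face and voice synthesis pipelines are vastly larger, but the core loop, a generator learning to fool a discriminator, has this shape.

```python
# Toy GAN loop: G learns to produce samples that D cannot tell from real ones.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real_batch = torch.rand(32, 784) * 2 - 1   # stand-in for real image data

for step in range(1000):
    # 1. Train the discriminator to separate real samples from forgeries.
    fake = G(torch.randn(32, 64)).detach()
    loss_d = (bce(D(real_batch), torch.ones(32, 1))
              + bce(D(fake), torch.zeros(32, 1)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2. Train the generator to make the discriminator say "real".
    loss_g = bce(D(G(torch.randn(32, 64))), torch.ones(32, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# With enough data and capacity, the forgeries become near-indistinguishable.
```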
What was once cyber pranksterism has developed into a blatant assault on the pillars of justice. A fabricated confession, a cloned voice recording, or a synthetic clip placing a person at the crime scene could condemn the innocent.
India’s criminal justice system, like so many others globally, is not prepared. Our rules of evidence take for granted that if an electronic record fulfils the proper formalities, it is reliable.[2] But deepfakes do not merely circumvent the rules; they rewrite the playbook.
This blog examines how these technologies subvert legal processes, surveys India’s existing legal and technical landscape, and asks how we can guard against spurious justice in the era of AI.
WHY DEEPFAKES ARE A LEGAL NIGHTMARE
- Fiction That Feels Real
What is so disconcerting about deepfakes is just how real they look. Handwritten forgeries and doctored documents can often be detected on inspection; deepfakes smile, express emotion, and carry conviction.[3]
They don’t simply replicate a face; they convey tone, emotion, and nuance. A fabricated confession sounds believable. A fabricated video might show someone screaming, threatening, or confessing, without that individual ever having said a word.
This leaves judges between a rock and a hard place. Judges are trained to evaluate credibility from body language and tone of voice. But if the content itself has been falsified, that instinct becomes a liability.
- The Breakdown of Legal Assumptions
Indian courts proceed on the belief that evidence is sound if it is accompanied by the proper formalities. Under the Bharatiya Sakshya Adhiniyam, 2023, a video accompanied by a valid certificate under Section 63 (the successor to the old Section 65B of the Evidence Act) is accepted as authentic.[4]
But suppose that video was never shot at all, only generated. Deepfakes can tick every procedural box (clean metadata, a verified source, a valid chain of custody) and still be completely fictional.[5]
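To see why procedural verification cannot catch synthesis, it helps to look at what a digital chain of custody actually certifies. The short Python sketch below (the exhibit filename is hypothetical) computes the kind of cryptographic hash a forensic lab records at seizure. The hash proves the file has not changed since it was seized, and nothing more; a wholly AI-generated video passes this check exactly as a genuine recording would.

```python
import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, as a lab might when
    certifying that an exhibit is unaltered."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# A synthetic video hashes just as cleanly as a real one.
# Matching digests prove integrity (the file is unchanged),
# not authenticity (that the scene it depicts ever happened).
digest_at_seizure = sha256_of("exhibit_p1.mp4")   # hypothetical exhibit
digest_in_court = sha256_of("exhibit_p1.mp4")
assert digest_at_seizure == digest_in_court
```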
That places the burden in the wrong place. The defence becomes responsible for proving that something did not occur, rather than merely raising reasonable doubt. And if courts begin to suspect all electronic evidence, genuine recordings will be discarded out of fear. Either outcome destroys trust in the system.
OUR LAWS AREN’T BUILT FOR SYNTHETIC REALITY
India’s law has tried to keep pace with digital evidence, but it lags behind deepfakes. The gap is easy to see: the law contemplates paper documents and computer records, not lies written in code.
- The Missing Legal Framework
The Bharatiya Sakshya Adhiniyam, 2023, was intended to supplant the colonial-era Evidence Act and provide for the realities of digital evidence. It brings the rules of admissibility up to date, requires technical certification, and provides for digital chains of custody.
But it makes no provision for synthetic content. It does not define deepfakes or AI-generated evidence, and it prescribes no procedure for the forensic examination of doctored media. The scheme assumes that if a record is digitally traceable, it must be real.
Then there is the IT Act, 2000, which governs cybercrime, hacking, and electronic records.[6] It predates generative AI and does not even mention AI manipulation of media. Nothing in it criminalises the creation of deepfakes for criminal or judicial ends.
The result? Judges are applying yesterday’s rules to tomorrow’s problems. And when a deepfake enters a courtroom, the law has no answers, only questions it never meant to ask.
INSIDE THE COURTROOM: WHERE THE SYSTEM STARTS TO CRACK
Despite tougher legislation on paper, India’s trial courts are colliding with the hard realities of deepfakes. Understanding the law is not the only problem; technical constraints cut just as deep.
- Detection Gaps and Tactical Abuse
Most courts depend on police-run forensic labs for the analysis of digital evidence.[7] But state-of-the-art deepfake detection tools, particularly those that keep pace with advancing generative models, are missing from most labs, especially outside Tier 1 cities.
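For illustration, here is a rough sketch of what frame-level deepfake screening can look like, written in Python with PyTorch and OpenCV. The model file, its output format, and the exhibit filename are all hypothetical; real forensic pipelines add face tracking, temporal analysis, and model ensembles. The point is only that this kind of tooling, and the expertise to interpret its output, has to exist in the lab at all.

```python
# Sketch of per-frame deepfake screening with a hypothetical
# fine-tuned binary classifier exported as TorchScript ("detector.pt").
import cv2                     # pip install opencv-python
import torch
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

model = torch.jit.load("detector.pt")     # hypothetical trained model
model.eval()

cap = cv2.VideoCapture("exhibit_p1.mp4")  # hypothetical exhibit
scores = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        logit = model(preprocess(rgb).unsqueeze(0))
    scores.append(torch.sigmoid(logit).item())
cap.release()

if scores:
    # One probability per frame; a human expert still has to interpret
    # the distribution before it means anything in court.
    print(f"mean fake-probability over {len(scores)} frames: "
          f"{sum(scores) / len(scores):.3f}")
```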
Judges may effectively be deciding cases on the basis of videos they cannot credibly authenticate. And defence and prosecution lawyers rarely possess the technical expertise to argue for or against the authenticity of such evidence.
This opens the door to both false positives and false negatives. A seasoned defence attorney can cast a genuine video as a deepfake. Conversely, a deepfaked crime-scene clip or confession may pass unnoticed if it is technically clean and emotionally convincing.
Suspects can now invoke the “deepfake defence” against genuine evidence. And actual deepfakes can put innocent people in prison before the truth comes out, if it ever does.
It is worse in high-profile cases. A deepfake of a witness, appearing to show them accepting bribes or retracting their testimony, can topple their credibility before they even step into the courtroom. Public opinion, media coverage, and even judicial deliberation can all be swayed by fake but credible footage.
At that point, the trial is no longer about what happened. It is about what people believe happened. And that is precisely the kind of ambiguity deepfakes feed on.
WHAT INDIA NEEDS TO DO NOW
There is no easy solution, but the path ahead is clear.
India’s evidence law has to specifically address AI-generated evidence. Deepfakes need a statutory definition, and courts need express jurisdiction to order forensic authentication wherever tampering is suspected. The burden of proving authenticity should shift with the circumstances, not rest on certificates alone.
Meanwhile, we also need to invest in infrastructure. Forensic labs should be equipped with deepfake-detection capacity. Judges, lawyers, and police officers should receive dedicated training in recognising and responding to synthetic evidence.
Outside the courts, public education counts too. People must learn that not everything on their screens is true.[8] And technology platforms should be required to watermark or flag suspected synthetic content,[9] particularly where it intersects with criminal trials.
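What would “flagging synthetic content” even look like in practice? Below is a deliberately naive Python sketch, assuming (hypothetically) that generators stamp their name into the standard EXIF “Software” tag. Its fragility is the point: plain metadata can be stripped or forged in seconds, which is why serious provenance proposals rely on cryptographically signed manifests such as C2PA rather than self-reported tags.

```python
# Toy provenance check: does the image self-report an AI generator?
# Real watermarking/provenance schemes are cryptographic, not EXIF-based.
from PIL import Image   # pip install Pillow

SOFTWARE_TAG = 0x0131   # standard EXIF tag for the creating software

def naive_synthetic_flag(path: str) -> bool:
    """Return True if the EXIF Software field names a known generator."""
    software = str(Image.open(path).getexif().get(SOFTWARE_TAG, "")).lower()
    return any(m in software for m in ("stable diffusion", "dall-e", "midjourney"))

print(naive_synthetic_flag("uploaded.jpg"))   # hypothetical upload
```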
The aim is not to be afraid of technology but to ensure that it does not overtake the truth.
CONCLUSION
Let’s be honest, India isn’t ready. Deepfakes are already one step ahead of our laws, our equipment, and our intuition. And if they enter criminal trials unregulated, the stakes are vast.
We’re talking about fabricated confessions that lead to wrongful convictions. Witnesses discredited by fake videos. Legitimate evidence thrown out because no one can tell what’s real anymore.
By then, trials are no longer a matter of justice. They become performances, where the most convincing footage wins.
If we mean business about courts finding the truth, the system must change now. That means amending the Bharatiya Sakshya Adhiniyam, investing in AI-enabled forensics, and building a legal community that knows exactly what it is dealing with.
The longer we wait, the harder this gets, because the technology will not wait, and justice cannot afford to fall behind. This is not merely a matter of law; it is a matter of constitutional importance. In a system built on due process and truth, deepfakes are not only disruptive; they are pernicious.
And if we don’t act now, we’ll be left watching trust in our courts evaporate. In high definition. With perfect sound.
Author(s) Name: Abinesh M (Vinayaka Mission’s Law School)
References:
[1] Ian Goodfellow and others, ‘Generative Adversarial Networks’ (2020) 63(11) Communications of the ACM 139.
[2] Anvar P.V. v P.K. Basheer (2014) 10 SCC 473.
[3] Robert Chesney and Danielle Citron, ‘Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security’ (2019) 107(6) California Law Review 1753.
[4] Bharatiya Sakshya Adhiniyam 2023, s 63.
[5] Anil Kumar, ‘Admissibility of Electronic Records under the Bharatiya Sakshya Adhiniyam’ (2023) 45 Indian Law Journal 23.
[6] Information Technology Act 2000, ss 66C, 66D.
[7] Indian Cyber Crime Coordination Centre (I4C), Ministry of Home Affairs, Government of India, Annual Report 2022.
[8] Deeptrace Labs, The State of Deepfakes: Landscape, Threats, and Impact (2019).
[9] Ministry of Electronics and Information Technology (MeitY), Advisory on Deepfake Content Regulation (2023).