INTRODUCTION
Generative artificial intelligence (AI) — systems that produce text, images, audio or video — has moved from tech-demo novelty to pervasive utility across media, finance, governance and healthcare. Its rapid diffusion has highlighted novel harms (deepfakes, automated disinformation, privacy intrusions, algorithmic bias) while also promising efficiency gains for public services and industry. India’s policy response in early 2026 signals a shift from ad-hoc rules toward a coordinated governance framework: the Government has published principle-based AI governance guidance and amended intermediary rules to address AI-generated content, takedown timelines and labelling obligations.[1]
This blog examines the existing legal architecture, the key challenges posed by generative AI, and short- to medium-term reforms that would help balance innovation with democratic safeguards. The analysis is practical and designed for law students and junior practitioners preparing submissions under internship blog guidelines.
THE LEGAL FRAMEWORK: A PATCHWORK OF LAWS
India has no statute expressly regulating AI; instead, governance rests on an ecosystem of instruments. The Information Technology Act 2000 (and its subordinate Rules) remains the principal instrument governing intermediaries and online content; in February 2026 the IT Rules were amended to introduce obligations specific to AI-generated content, including expedited takedown processes and labelling requirements.[2]
Complementing the IT regime is the Digital Personal Data Protection Act (DPDP Act),[3] which imposes fiduciary obligations and consent principles for processing personal data — a core concern for generative AI systems that train on or output personal data. Concurrently, the Government’s AI Governance Guidelines (a principle-based framework issued after multi-stakeholder consultation) set out risk classification, standards for transparency and safety testing, and institutional recommendations such as an AI Safety Institute.[4]
Notably, commentators and law firms observe that the present approach is intentionally hybrid: soft governance (guidelines and standards) backed by targeted enforceable rules (the IT Rules and intermediary obligations), rather than a single “AI Act”. This creates both flexibility and uncertainty for downstream actors.
KEY LEGAL CHALLENGES POSED BY GENERATIVE AI
IDENTIFICATION AND LABELLING OF AI-GENERATED CONTENT
Generative models can produce highly convincing synthetic media. The 2026 amendments require platforms to label synthetic or AI-generated content and provide faster grievance redressal and takedown timelines (reportedly shortening certain obligations to a three-hour window for unlawful content).[5] These measures are aimed at curbing rapid viral harms (deepfakes, organised disinformation). Implementation raises difficult questions: how to define “AI-generated” in borderline cases, how to verify labels without excessive surveillance, and how to avoid over-broad takedowns that chill legitimate speech.[6]
DATA GOVERNANCE AND TRAINING DATA LIABILITY
Generative systems depend on vast training corpora. When those corpora contain copyrighted, sensitive or personal data, downstream outputs may infringe rights or violate privacy. Under India’s DPDP Act and existing copyright law, the legal liability of model providers (versus downstream deployers) is unsettled.[7] Questions arise about lawful bases for large-scale scraping, requirements for provenance/consent, and whether “model training” itself constitutes a regulated processing activity subject to notification and audit.
ALGORITHMIC TRANSPARENCY, EXPLAINABILITY AND AUDITABILITY
Regulators favour standards for transparency and risk-based testing. But balancing commercial secrecy, the technical limits of explainability for large models, and meaningful auditability for affected individuals is hard. The Government’s governance guidance proposes risk classification and safety testing institutions; operationalising these proposals will require clear metrics, independent auditing bodies, and procedural safeguards to prevent arbitrary enforcement.
HARMFUL USES IN GOVERNANCE AND FINANCE
States and public bodies are deploying AI for welfare delivery, predictive governance and administrative automation. While such uses can improve targeting and reduce fraud, they concentrate decision-making power and may entrench opaque models into rights-affecting state processes. Recent state-level initiatives (for example, state AI sandboxes and governance roadmaps) underline the need for procurement safeguards, algorithmic impact assessments, and statutory oversight when AI substitutes for human judgment in public decision-making.[8][9]
CROSS-BORDER ISSUES AND INTERMEDIARY LIABILITY
Generative models and platforms operate across borders. India’s fast takedown timelines and due-diligence obligations for platforms require international intermediaries to adapt local practices quickly, creating compliance frictions and potential conflicts of law. Additionally, attribution and enforcement across jurisdictions (for synthetic content originating abroad) will remain a practical enforcement challenge.
PRIORITIES FOR REFORM — A PRACTICAL ROADMAP
CLARIFY DEFINITIONS, AND ADOPT A RISK-BASED APPROACH
Legislation or subordinate rules should define “AI-generated content”, “high-risk AI system” and “AI service provider” narrowly and in technology-neutral terms. A calibrated risk framework (low/medium/high) tied to obligations (disclosure, impact assessment, independent audit) will reduce over-regulation of harmless systems while concentrating resources where harms are greatest.
DATA RIGHTS FOR TRAINING AND MODEL-OUTPUTS
Introduce provenance obligations for datasets used to train systems deployed at scale; require documentation (data sheets) and reasonable efforts to secure rights or lawful bases for personal data. For copyrighted material, consider safe-harbour pathways for researchers while preserving remedies for rights-holders.
TRANSPARENCY MEASURES: IMPACT ASSESSMENTS & AUDITS
Mandate algorithmic impact assessments (AIAs) for high-risk uses, publish non-sensitive summaries, and create an independent AI Safety Institute (as recommended in governance guidance) empowered to accredit auditors and set testing standards.
PROCEDURAL SAFEGUARDS IN CONTENT ACTIONS
Fast takedown must be paired with clear notice, proportionate review and remedy mechanisms to prevent arbitrary censorship. A “three-hour” operational standard for clear illegality (e.g., child sexual exploitation, terrorist content) may be defensible, but discretionary content disputes should route to expedited human review and an appeal channel.
INTERNATIONAL COOPERATION AND INTERMEDIARY DIALOGUE
Harmonise labelling and provenance norms through multilateral fora and industry standards (interoperable content provenance frameworks). Encourage platform-government dialogues to build operational capacity for fast response without sacrificing due process.
CONCLUSION
India’s early 2026 policy moves — a principle-based governance framework together with targeted amendments to intermediary rules — mark a decisive moment in the legal regulation of generative AI. These steps create a scaffolding on which enforceable obligations (data provenance, labelling, impact assessments, expedited remedies) can be built. Yet the success of this regulatory turn will depend on careful calibration: precise definitions, proportional remedies, independent technical audit capacity, and mechanisms that protect fundamental rights while permitting beneficial public and private AI uses. For lawyers and law students preparing short analytical pieces (such as internship blogs), the immediate task is twofold: (1) track how the IT Rules 2026 are operationalised through rules, advisories and litigation; and (2) critique draft standards through research and public consultation to ensure a rights-respecting, innovation-friendly outcome.
Author(s) Name: Pranay Sundriyal (Gitarattan International Business School)
References:
[1] Press Information Bureau, ‘AI Governance Guidelines’ (Ministry of Electronics and Information Technology, 15 February 2026) <https://www.pib.gov.in/PressReleasePage.aspx?PRID=2228315> accessed 23 February 2026.
[2] The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules 2026, G.S.R. 120(E), Gazette of India (10 February 2026) <https://egazette.gov.in/WriteReadData/2026/269993.pdf> accessed 26 February 2026.
[3] Digital Personal Data Protection Act 2023.
[4] Ministry of Electronics and Information Technology, ‘AI Governance Guidelines’ (15 February 2026) <https://www.pib.gov.in/PressReleasePage.aspx?PRID=2228315> accessed 23 February 2026.
[5] Reuters, ‘India tightens grip on social media with new three-hour takedown rule’ (10 February 2026) <https://www.reuters.com/world/india-gives-social-media-companies-three-hours-take-down-unlawful-content-2026-02-10/> accessed 26 February 2026.
[6] Shreya Singhal v Union of India (2015) 5 SCC 1 (SC).
[7] Copyright Act 1957.
[8] Government of Telangana, ‘Telangana AI Mission and AI Sandbox Initiative’ (2025) <https://ai.telangana.gov.in/> accessed 26 February 2026.
[9] Maneka Gandhi v Union of India (1978) 1 SCC 248 (SC).

