INTRODUCTION
The use of Artificial Intelligence (AI) has transformed the operational and regulatory environment of the telecommunications industry in India. AI technologies now underpin predictive network maintenance, automated customer service, and broader gains in efficiency and reliability across telecom operations. Nonetheless, AI integration also introduces novel vulnerabilities, such as algorithmic bias and autonomous system failures, and raises unresolved legal and ethical questions with severe consequences. Recognising these dangers, AI incident reporting has come into the limelight as an essential element of digital governance models. This blog discusses the need for a legal framework and provisions on AI incident reporting in Indian telecommunications law, the gaps in the current regulatory framework, and how insights from international best practices can shape a robust domestic framework.
ADOPTION OF AI
The adoption of AI in Indian telecommunications has accelerated, driven by Digital India and the 5G rollout, but the country's regulatory response to AI-related incidents is still in its early stages. In contrast to the European Union, whose AI Act 2024 mandates the reporting of incidents involving AI systems, India has not yet defined what an AI incident is or established a standardised system for reporting such incidents.[1]
UNDERSTANDING AI INCIDENT REPORTING
AI incident reporting refers to the systematised reporting and disclosure of unintended or harmful outcomes associated with the deployment of AI systems. Its aims are accountability, transparency, and the continuous improvement of AI governance. The Organisation for Economic Co-operation and Development (OECD) defines an AI incident as any event where an AI system results in, or might have resulted in, individual, environmental, or societal harm because of malfunction, misuse, or bias.[2]
In telecommunications, such incidents could include data leaks through automated network optimisation software, discriminatory service allocation by an AI-based customer profiling system, or malfunctions in automated emergency response systems. The Telecom Regulatory Authority of India (TRAI) has recognised that as networks become more intelligent, the risks arising from algorithmic decision-making must be managed through transparency and accountability.[3] AI incident reporting therefore serves both as a governance mechanism and as a compliance measure: it enables regulators to evaluate systemic risks while fostering trust among consumers and stakeholders.
REGULATORY FRAMEWORKS IN INDIA
India does not have a specific law or regulatory code governing AI incidents in the telecom industry. Nonetheless, several existing statutes and policy documents indirectly address aspects of AI incident management.
1. Information Technology Act 2000
Under Section 43A of the Information Technology Act 2000, corporate entities handling sensitive personal data are liable for negligence resulting in wrongful loss or gain.[4] Incidents caused by AI-driven data processing may fall within this section. In addition, the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules 2011 mandate that data breaches be reported to those affected, a requirement that could arguably extend to AI-induced data breaches.[5]
2. Telecom Regulatory Authority of India (TRAI) and Department of Telecommunications (DoT)
TRAI has released a consultation paper on leveraging AI and Big Data in the telecom sector (2022), stressing that ethical AI and responsible data use are necessary; however, it does not require disclosure of AI incidents. The Unified License granted by the DoT obliges telecom operators to ensure service continuity and network integrity, but its conditions do not explicitly address the reporting of AI malfunctions.
3. Digital Personal Data Protection Act of 2023
The Digital Personal Data Protection Act (DPDPA) 2023 requires data breaches to be reported to the Data Protection Board of India.[6] Although AI-related harms may involve personal data processing, the Act imposes no comparable requirement for non-personal data, algorithmic bias, or systemic network failures caused by AI. Taken together, these laws leave significant coverage gaps and fall short of establishing a unified system for reporting AI incidents in telecommunications.
INTERSECTION OF AI AND TELECOMMUNICATIONS LAW
The intersection of AI and telecommunications has created new spheres of regulatory complexity. Telecom operators now deploy autonomous systems to manage spectrum, detect fraud, optimise networks, and predict maintenance needs, which makes accountability difficult to trace when an incident occurs.[7]
Without specific AI incident reporting legislation, incidents involving telecom AI systems fall under generic telecom, cyber, or contract law. For example, unintentional interference with emergency communications by an AI-driven system would attract liability under existing service licence agreements rather than a distinct statutory requirement, which undermines prompt incident reporting. Moreover, as India swiftly implements 5G and moves towards future 6G networks, telecommunication infrastructure is becoming increasingly reliant on machine learning models that adapt in real time. The absence of mandatory AI incident reporting mechanisms threatens the integrity of such adaptive systems and public trust in AI-based telecom infrastructure.
GAPS AND CHALLENGES IN AI INCIDENT REPORTING
Lack of a Legal Definition of AI Incident: Neither the IT Act, the DPDPA 2023, nor the TRAI guidelines define an AI incident. Without a legal definition, it is unclear what threshold of harm triggers a reporting obligation, or whether even minor algorithmic errors must be reported.
Divided Jurisdiction and Duplication: Overlapping mandates among the proposed Data Protection Board, TRAI, and the DoT create redundant regulatory powers. Telecom operators may not know to whom they must report, resulting in inconsistent disclosure practices.
Absence of Standardised Reporting Mechanisms: India has no standard reporting portal or template for telecom-related AI failures, in contrast to the European Union, whose AI Act establishes a centralised database for incident reporting.
Confidentiality and Reputational Concerns: Operators may be reluctant to report incidents for fear of reputational damage, litigation, or sanctions. No clear safe-harbour provisions exist to encourage proactive reporting.
Limited Technical Capacity within Regulatory Authorities: Detecting and investigating AI incidents demands expertise in algorithm auditing and explainability, competencies that Indian telecom regulators have yet to develop.
COMPARATIVE GLOBAL PERSPECTIVE
The European Union's AI Act (2024) requires providers of high-risk AI systems to report serious incidents or malfunctions to the competent authorities within 15 days of becoming aware of them.[8] The United States Federal Communications Commission (FCC), through its Public Safety and Homeland Security Bureau, has likewise emphasised AI risk management in critical communications infrastructure.[9] Similarly, Singapore's Model AI Governance Framework (2020) encourages incident disclosure as part of algorithmic accountability.[10]
These frameworks have similar characteristics:
- A definition of AI incidents and harm levels;
- An obligation to report within a time constraint; and
- A central authority empowered to investigate incidents and penalise non-compliance.
India can draw on these experiences to develop a sector-specific, proportionate AI incident reporting system suited to its telecom infrastructure and data protection laws.
RECOMMENDATIONS FOR INDIAN TELECOM REGULATIONS
Statutory Definition of AI Incident: The proposed Digital India Act should include a definition of "AI incident" covering unintended consequences of AI systems that harm individuals, networks, or national security.
Creation of a Centralised AI Incident Reporting System: TRAI, in association with the Data Protection Board of India, should create an online portal through which telecom operators can report AI-related incidents. This database would enable cross-sectoral learning and policy development.
Risk-Based Reporting Obligations: Telecom service providers deploying AI in high-risk applications (e.g., emergency response or network control) should be required to report all material AI failures within a specified time period.
Safe Harbour for Voluntary Reporting: Legal provisions should grant a safe harbour to telecom entities that voluntarily report incidents in good faith, encouraging transparency, as in cybersecurity vulnerability disclosure programmes.
Capacity Building within Regulatory Agencies: For effective enforcement and algorithmic monitoring, TRAI and the DoT should invest in technical training of regulators in algorithmic auditing and AI ethics.
Alignment with International Standards: The Indian approach should be harmonised with the OECD and G20 AI principles to ensure interoperability with global networks and to build cross-border trust.
CONCLUSION
AI incident reporting is set to become a foundational block of India's digital governance architecture. As AI continues to permeate the telecommunications industry, the absence of a standardised reporting system risks systemic weaknesses and regulatory blind spots. Clear definitions, duties, and reporting mechanisms will not only strengthen accountability but also build consumer trust in AI-driven telecom systems. A forward-looking regulatory framework grounded in transparency, proportionality, and technological neutrality would allow India to strike the right balance between innovation and responsibility. By embedding AI incident reporting in telecommunications law, India can become a global leader in the ethical and trustworthy use of AI in communications infrastructure.
Author(s) Name: Aditya Tiwari (Bharati Vidyapeeth University, New Law College, Pune)
References:
[1] European Union, Artificial Intelligence Act 2024 (Regulation (EU) 2024/1689) <https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32024R1689> accessed 16 October 2025
[2] Organisation for Economic Co-operation and Development, OECD Framework for the Classification of AI Systems (OECD Publishing 2022) <https://www.oecd.org/en/publications/oecd-framework-for-the-classification-of-ai-systems_cb6d9eca-en.html> accessed 16 October 2025
[3] Telecom Regulatory Authority of India, Leveraging Artificial Intelligence and Big Data in the Telecom Sector (Consultation Paper, 2022) <https://www.trai.gov.in/consultation-paper-leveraging-artificial-intelligence-and-big-data-telecommunication-sector> accessed 16 October 2025
[4] Information Technology Act 2000, s 43A
[5] Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules 2011 <https://www.meity.gov.in/content/information-technology-reasonable-security-practices-and-procedures-and-sensitive-personal-data> accessed 16 October 2025
[6] Digital Personal Data Protection Act 2023 (Act No. 22 of 2023)
[7] Department of Telecommunications, National Digital Communications Policy (2018) <https://dot.gov.in/sites/default/files/Final%20NDCP-2018_0.pdf> accessed 16 October 2025
[8] European Union, Artificial Intelligence Act 2024 (Regulation (EU) 2024/1689) <https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32024R1689> accessed 16 October 2025
[9] Federal Communications Commission (US), AI Risk Management in Telecommunications (Public Safety and Homeland Security Bureau Report, 2023) <https://www.fcc.gov/public-safety-homeland-security-bureau> accessed 16 October 2025
[10] Infocomm Media Development Authority, Model AI Governance Framework (2nd edn, 2020) <https://www.imda.gov.sg/how-we-can-help/ai/Model-AI-Gov-Framework> accessed 16 October 2025

