AI IN HEALTHCARE – LEGAL IMPLICATIONS

INTRODUCTION

Innovation, essential to the progress of humankind, has always helped the medical field overcome the disparities it faces across health-related domains. AI is one such catalyst: it has bolstered the health system by offering tremendous reassurance in diagnosis and treatment, minimizing risks in attending to public health, and much more. Yet the absence of legal regulation to scrutinize these disruptions remains a threat to transparency, resulting in misdiagnosis and in discrepancies in AI-based decisions, predictions, and the biases they propagate.

In healthcare, the term artificial intelligence refers to the use of machine learning algorithms. With artificial intelligence, computers can replicate, rather than merely mimic, human cognition: they can learn, think, act, make decisions and take cognizance of welfare. In the era of big data, the healthcare industry plays a crucial role in an affluent, productive society. When AI is used to analyze healthcare data, it is essentially a question of life or death. Artificial intelligence can assist medical practitioners in their regular work. Improvements in preventive care and quality of life, increased diagnostic accuracy, and better patient outcomes are just a few of the additional advantages of AI in healthcare. Given its potential for global public health, AI will be a vital tool for fighting epidemics, pandemics and other health disasters.

Alongside these unprecedented virtues, however, are serious perils that must be addressed in the social, legal, and ethical domains, raising the question of whether these institutions, and the legal machinery in particular, are adept and well provisioned enough to superintend a technological advancement as convoluted as AI. The threats AI poses cannot go unseen or neglected, and there is a clear mandate to regulate them through legal constraints.

CONCERNS ABOUT AI IN HEALTHCARE

The comprehensible hazards include damage to doctor-patient relationships, de-skilling of healthcare professionals, compromised transparency, patients being misdiagnosed or treated inappropriately due to errors in AI decision-making, intensified racial or societal biases, and the introduction of algorithmic bias that is difficult to detect, since no system is impregnable. This acute problem has never received enough attention. Bringing AI into clinical practice will present several immediate challenges, and it will also be a point of ethical and legal debate, the major issue being informed consent: determining when, and under what conditions (if ever), informed consent regarding AI should be obtained throughout the therapeutic setting.

The safe and effective functioning of AI is crucial. A real-world example is Watson for Oncology, which uses artificial intelligence algorithms to analyze patient medical records and suggest cancer treatments to doctors, and which has been criticized for suggesting “unsafe and incorrect” treatments. A flawed or manipulated data set can lead AI to create new groupings unrelated to the protected characteristics defined in equality laws, and can bias or discriminate in decision-making. Even without artificial intelligence, humans are likely to make mental health decisions that are biased and prejudiced.

In drug discovery, robotics and genetic target models can be used to reduce labor costs while increasing capital and data costs. An Artificial Intelligent System (AIS) may conceal the reasoning behind its output because of modern computing techniques, which make meaningful inspection difficult; as a result, an AIS generates its outputs in an “opaque” manner.

AI apps are being used ever more frequently in health-related applications, including improving adherence to medications and analyzing wearable sensor data, as well as diet counselling, health evaluations and assistance. When distorted AI handles phenotype- and sometimes genotype-related information in the health sector, the effect can be inaccurate diagnoses, ineffective treatments, and even safety concerns.

Although the potential value of health data can surpass thousands of dollars, statistics suggest that the public is hesitant about businesses or authorities selling patient data for profit, as was the case with Sprinklr recently. AI apps also present significant issues by revealing patient data to third parties beyond physicians and family members. Unlike the doctor, who is subject to confidentiality responsibilities as outlined by applicable statutes or case law, family members or friends are typically not bound by any legally binding obligations.

REALM OF AI AND LAW IN HEALTHCARE

Offering services and processes based on artificial intelligence and big data is costly and risky, which makes it more crucial than ever to create and commercialize data-driven healthcare and life science products responsibly. Several problems arise when complex data sets and intellectual property rights are combined, including ownership of and access rights to data analytics. When players such as enterprises act inequitably and conspire to completely control a market in which access to healthcare intersects with competition, more sophisticated competition and antitrust laws are intended to intervene; however, this remains a distant possibility. Algorithms are specifically excluded from the list of “inventions” that qualify for patent protection under the Patent Act. Nonetheless, since algorithms are developed from human-created material, Indian law may grant them copyright protection, making their human authors the sole owners of the work.

The “WannaCry” cyberattack demonstrated the need for improved cybersecurity: a global ransomware attack using sophisticated hacking tools, it targeted FedEx, damaged the National Health Service (NHS) in the UK, and infected over 300,000 machines across 150 countries. Recent regulatory developments in India should help ensure the safety of AI-driven healthcare goods, services, and processes, but malware attacks remain a global issue, with data privacy at the top of the list of concerns.

Computers cannot be held responsible for errors or incorrect diagnoses, even in the unlikely event that they occur; the involvement of a human being is therefore essential, and not only as the copyright owner of the source code. Along with the duty of creating legislation that assures rigorous governance and security procedures for stored data, legislators will also be responsible for the expanding cybersecurity concerns that healthcare institutions will have to deal with. For now, cases involving AI in healthcare may be governed by other laws or acts such as the COPRA (Consumer Protection Act, 2019), given that patients are consumers using the services supplied by AI systems and may pursue any remedy under COPRA in the event of a default. Moreover, in the case of a data breach, the Indian landscape still lacks a dedicated Data Protection statute.

CONCLUSION

For digital health to grow, the policy environment must change. Healthcare organizations, especially in developing nations like India, are generally hindered by the failures and misalignments of digital healthcare projects, which stem from faulty policy design, a lack of coordination among the numerous players, and a lack of sustainable financing. Although significant segments of the Indian healthcare industry, such as hospitals, pharmaceuticals, diagnostics, medical equipment, medical insurance, and telemedicine, have adopted AI, there is still a sizable market for artificial intelligence solutions to raise operational efficiency and healthcare quality. The Indian AI Healthcare Market 2019–2025 report forecasts a CAGR of 50.9% for AI in the Indian healthcare sector over the forecast period, making it imperative to overcome the attendant legal and social challenges. In India, AI solutions are currently not governed by any legislation or overseen by any organization that can guarantee their security, quality, and privacy, and this gap prevents AI-based healthcare from being widely adopted. Implementing AI in healthcare is one of the biggest challenges for low-resource countries like India: training workers in the field and deploying the necessary logistics are essential to implement AI-based healthcare and to ensure that sensitive health information is managed safely, data is protected from theft, and AI systems are used effectively, a capability India still lacks.

Author(s) Name: Aathira Pillai (Maharashtra National Law University, Mumbai)