



Artificial Intelligence (AI) is the future of our world. With its roots already established in sectors like education, healthcare, and entertainment, AI is most definitely the next big revolution after capitalism, and many have said it will bring forth the fourth industrial revolution. AI is projected to become a $100-billion-plus industry by 2025. But every coin has two sides: AI brings with it new, more advanced, and more complicated problems, the most severe of which is data privacy and protection. Nearly all industries collect and use user data through AI processes and machine learning. The data collected by any application or website we access is used to tailor our experience to our actions.

The simplest example of this is the Instagram search section. Although it seems a bit random at first, as we like, interact, and explore, the search section changes too: we see posts and content related to what we have already viewed. OTT platforms like Netflix, Amazon Prime, and Hotstar use similar processes. Even though this data collection may not stand out as an issue, it threatens our privacy. Our data can be extracted from these apps by exploiting loopholes in the program, and some applications and websites collect data for which the user has not granted permission. In this article, we will discuss these issues along with possible safeguards against them.


We have accepted terms and conditions on countless apps, but have we ever read them? Honestly, many of us have not. In these terms and conditions, apps ask for permission to collect our personal data: our location, contact information, storage access, and so on. Through these permissions, they monitor what we do on our devices and collect that data. For example, when we search for something on Google, Instagram and other social media apps start showing us ads or promoted pages related to our searches. Across apps and websites, this data is used to train AI algorithms in the name of a smooth, user-friendly experience.

Even though this is done for the user's benefit, it comes at the cost of data privacy. Sometimes the data is collected only to be sold to third parties, and it is always at risk of being stolen through loopholes, as in the recent Domino's incident, in which hackers gained access to 13 TB of Domino's India's internal data, including the details of over 250 employees as well as user data.[1] This shows that however far AI has developed, it still lacks the most important thing: data protection.

Now let us talk about the most famous scandal, the one that sparked the great privacy awakening: Cambridge Analytica. In 2012, Mark Zuckerberg realised that data privacy should be a concern and wrote an email about it to his director of product development, who replied, “I just can’t think of any instances where that data has leaked from developer to developer and caused a real issue for us.”[2] In March 2018, it all came to light when Christopher Wylie told The New York Times and The Guardian/Observer about a firm called Cambridge Analytica.[3] It was a data firm that bought the user data of millions of Americans from Facebook without their permission and used it to build a “psychological warfare tool” that played a crucial role in the 2016 US presidential election. The worst part was that users had blindly granted all the permissions without knowing they were also exposing their friends’ data.

Nowadays, AI-based facial recognition systems are booming. China is using AI-based facial recognition for mass surveillance of its citizens, keeping a constant watch on its people and recognising and storing their behavioural data without permission. Such technology may be acceptable for public safety and catching criminals, but it should be used only when necessary, not to examine and store the behavioural data of every citizen and score them on a credit system. Why should a government determine the trustworthiness of its citizens with AI-based software? It is an absolute breach of citizens' privacy.

Similarly, a company named Clearview developed an AI-based face recognition system to help police officers catch criminals, and the software did help law enforcement catch many of them. But in January 2020, it was found that Clearview had made a total mockery of data privacy by scraping billions of user photos from social media platforms like Instagram, Facebook, Twitter, and YouTube.[4] The images you upload online may end up inside such AI software without your knowledge.

These incidents have raised concern about the protection of data privacy in a world of AI. People are becoming aware that their data is not safe online, but most still do not grasp the gravity of the situation or see how AI-based systems use their data unethically. Unethical data processing via AI systems poses a real threat to data privacy and can have dangerous consequences.


The concern now is how to keep your data safe and protected, and who should be held liable for any breach or misuse of data even after due safety measures are ensured: the company, the user, or the government? Before discussing this, we will look at some data protection measures. Apart from the data protection rules that bind an application or website, the user must ensure a few things for better safety. Such measures include:

  • Staying away from apps not available on the Play Store or App Store (iOS).
  • Not entering information without checking the credibility of the app or site.
  • Clearing cache and cookies regularly.
  • Inspecting what permissions we grant apps to access and process our data.

These are some of the things that a user can do from their end for data protection. But actual data protection lies in the hands of the government and the developers. 

It is the primary duty of a government to protect its people and keep their personal, and even public, data safe. Different nations have different policies regarding data privacy and protection; in the USA, for example, each state has its own data protection rules. Apart from abiding by government rules and regulations, developers also take measures to protect a user's data. The most common technique is data masking, in which sensitive values are replaced with obscured or coded values, so that the real data cannot be read by anyone who cannot reverse the masking. Even this is not a foolproof plan, however; it blew apart when US media tracked down one of its mayors from supposedly masked data. Ensuring transparency is of utmost importance: a developer or company should be transparent in its data usage and ask users for proper permission rather than hiding it in fine print.
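To make the idea of data masking concrete, here is a minimal sketch of one common approach, pseudonymisation via salted hashing. Everything in it (the field names, the salt, the token length) is illustrative, not taken from any particular company's practice:

```python
import hashlib

def mask(value: str, salt: str = "app-specific-secret") -> str:
    """Replace a direct identifier with an irreversible pseudonym.

    The salt (a hypothetical app-level secret) prevents simple
    dictionary attacks; without it, common values like phone
    numbers could be guessed by hashing candidates.
    """
    digest = hashlib.sha256((salt + value).encode()).hexdigest()
    return digest[:12]  # a short token is enough to link records

record = {"name": "Jane Doe", "phone": "+1-555-0100", "city": "Austin"}

# Mask the direct identifiers; keep only the fields analytics needs.
masked = {
    "name": mask(record["name"]),
    "phone": mask(record["phone"]),
    "city": record["city"],
}

# The same input always yields the same token, so records can still
# be joined across datasets without exposing the underlying identity.
assert mask("Jane Doe") == masked["name"]
```

The catch, as the mayor incident above illustrates, is that masking direct identifiers is not enough: the fields left in the clear (here, the city) can sometimes be combined with outside information to re-identify a person.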

Furthermore, users should be able to stop sharing their data: an option to opt out of such policies should be provided, so that a user who has stopped using a particular app can stop sharing data with it. These companies should also conduct regular audits of their data protection practices. New research from Carnegie Mellon University suggests that privacy assistant apps could help: they let users create policies, and make inferences over time, about how and when their data should, or should not, be collected and used. This opens the way for new and better methods of data protection that themselves use artificial intelligence.


While we move towards a brighter future, the shadow it casts is dangerous. Even though AI will indeed make our lives smoother and more comfortable, it puts our privacy at risk. Using personal data for machine learning is the backbone of enhancing our experience, but processing such data without proper permission is unethical, and combined with the lack of a well-constructed compliance program, it can be an Achilles' heel for any business plan.[5]

Some believe that in the future, machine learning will help us protect our data. As of now, our data is unsafe even with tech giants like Facebook and Instagram. No matter what policies, rules, and regulations governments bring in, or what data security these companies use, in the end it is users themselves who have to protect their privacy. Artificial intelligence is still at a nascent stage, and we must make it safe for the generations to come; creating AI by misusing personal information should not become a regular occurrence.[6] Until then, be careful while entering any information on any website or app, and read the terms and conditions before granting access or permissions.

Author(s) Name: Siddharth Mishra (Dr. BR Ambedkar National Law University, Sonipat)

[1] Akarsh Verma, Domino’s India database likely hacked, 1 million credit card details leaked along with mail IDs, cell numbers, India Today (April 18, 2021, 07:11 PM),

[2] Issie Lapowsky, How Cambridge Analytica Sparked the Great Privacy Awakening, Wired (March 17, 2019, 07:00 AM),

[3] Ibid.

[4] Kashmir Hill, The Secretive Company That Might End Privacy as We Know It, The New York Times (January 18, 2020),

[5] Editorial Team, Top Five Data Privacy Issues that Artificial Intelligence and Machine Learning Startups Need to Know, Inside BigData (July 23, 2020),

[6] Shree Das, The Social Impact of Artificial Intelligence and Data Privacy Issues, Redgate (September 8, 2020),
