
AI AND REINFORCEMENT OF SOCIAL INEQUALITIES AND DISCRIMINATION


INTRODUCTION

Artificial Intelligence (AI) plays a crucial role in today’s society, as it is utilised across nearly every sector. However, a significant issue that often goes unnoticed by the general public is that AI can promote inequality and discrimination. This problem is not widely addressed, but it has the potential to lead to serious consequences as the use of AI continues to grow in various fields. Currently, there are no specific laws in place to limit the spread of biased information through AI, which highlights the urgent need for regulation in this area.

WHAT IS ARTIFICIAL INTELLIGENCE?

Artificial Intelligence is a technology that enables computers and machines to perform tasks that typically require human intelligence, such as creativity and problem-solving. Machine Learning, a subset of AI, does not rely on explicit programming for a particular task; instead, the system learns from the data provided to it. A machine learning model uses algorithms to identify patterns within data and continually improve its performance over time, reducing the need for human intervention. Generative AI builds on Machine Learning: it uses the patterns learned from its training data to create new output.[1]

AI works by simulating intelligent behaviour to perform tasks autonomously. Its work involves several steps, including:

  • Data collection,
  • Processing and learning, using algorithms to analyse the data and identify patterns,
  • Model training, using existing data to improve the model’s predictions.
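The steps above can be sketched, in a highly simplified form, as a toy learning loop. Everything here (the data, the single-weight model, the learning rate) is invented purely for illustration of the collect–learn–train cycle, not a description of any real AI system:

```python
# Toy illustration of the AI workflow above: collect data, learn a
# pattern with an algorithm, and improve predictions over iterations.

# 1. Data collection: a toy dataset where output = 2 * input.
data = [(1, 2), (2, 4), (3, 6), (4, 8)]

# 2. Processing and learning: fit a single weight w by gradient descent.
w = 0.0
learning_rate = 0.01

# 3. Model training: repeat many passes, nudging w to reduce the error.
for _ in range(1000):
    for x, y in data:
        error = w * x - y
        w -= learning_rate * error * x

print(round(w, 2))  # the model has learned the pattern: w ≈ 2.0
```

No human tells the model that the rule is “multiply by 2”; it extracts that pattern from the data, which is exactly why biased data produces biased models.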

WHAT IS AI BIAS?

AI bias, also known as algorithmic bias, refers to systematically skewed outputs produced when human biases in the original training data or in the AI algorithm itself distort a system’s results, potentially causing harmful outcomes. It has emerged as a critical issue in the AI sphere, where biased decision-making reinforces social inequalities and discrimination.

AI systems are well-known for their ability to process large amounts of data and make independent decisions. However, they are not immune to the biases present in the data used for training. If the training data contains biases related to race, gender, or other protected attributes, the AI system can perpetuate and even amplify those biases, resulting in discriminatory outcomes.[2]
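A minimal sketch of how this perpetuation happens. The hiring records below are entirely invented for illustration: a naive model “trained” on historical decisions that favoured one group will simply reproduce that preference, regardless of individual merit:

```python
from collections import defaultdict

# Hypothetical historical hiring records: (group, hired). The data is
# biased: one group was hired far more often than the other.
history = ([("male", True)] * 80 + [("male", False)] * 20
           + [("female", True)] * 20 + [("female", False)] * 80)

# "Train" a naive model: estimate the historical hire rate per group.
counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
for group, hired in history:
    counts[group][0] += int(hired)
    counts[group][1] += 1

def predict_hire(group):
    hired, total = counts[group]
    # Recommend hiring whenever the group's historical rate exceeds 50%.
    return hired / total > 0.5

# The model has learned the bias, not merit: it favours one group
# regardless of any individual qualification.
print(predict_hire("male"), predict_hire("female"))  # True False
```

Real hiring models are far more complex, but the failure mode is the same: protected attributes (or proxies for them) correlated with past outcomes become decision rules.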

The connection between AI and inequality is already a complex issue, and it is deeply influenced by the biases found in the design, training, and use of AI systems. These biases stem from the data used to train AI models, the modelling techniques applied, and how AI outputs are interpreted. Together, they significantly impact how AI affects social inequalities.[3]

REAL-WORLD IMPACT: CASES OF AI BIAS

In 2020, a Dutch court ordered the immediate halt of SyRI (“system risk indication”), an AI system used to detect welfare-benefit fraud, on the ground that it violated human rights. The system disproportionately targeted low-income families and immigrants, resulting in wrongful accusations of fraud and severe financial hardship for thousands of families. This is a clear case of AI reinforcing existing social inequalities and discrimination.[4]

In Australia, the infamous “robodebt” program represented a significant failure of public administration: its automated debt-assessment system issued erroneous debt notices that disproportionately affected vulnerable populations.[5] Similarly, the algorithmic grading system used in the UK during the COVID-19 lockdown downgraded students from lower socio-economic backgrounds, largely because it weighted schools’ historical results and thereby penalised able students from historically lower-performing schools. Public outcry led to the system’s immediate revocation.[6]

In an interview with UN Women on AI gender bias and inclusive technology published on 5 February 2025, Zinnya del Villar described how AI algorithms reproduce gender bias, citing voice assistants that default to female voices (reinforcing the stereotype that women are suited for service roles) and language models such as GPT and BERT that often associate jobs like “nurse” with women and “scientist” with men.[7]

In 2018, Amazon discontinued an AI recruitment tool that favoured male candidates. The AI team found that the system had learned to prioritise male resumes over female ones, which reinforced gender bias.[8] AI-based hiring systems demonstrate the potential for algorithmic bias to perpetuate discriminatory practices. When trained on historical employment data, these systems can reflect existing biases that favour certain genders or racial groups. As a result, the AI may learn to associate specific jobs with these groups, thereby prioritising candidates from those demographics and amplifying existing gender biases.[9]

AI algorithms used for content recommendation can reinforce existing biases by creating echo chambers. These algorithms often prioritise content that aligns with users’ current views and interests, which contributes to the reinforcement of biases and stereotypes. This phenomenon, known as the amplification of biases in digital media, can lead to the spread of misinformation and the polarisation of public opinion.[10]
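The echo-chamber mechanism can be sketched in a few lines. The catalogue, category labels, and single-user history below are invented for illustration; the point is only that a recommender which prioritises content matching a user’s past clicks never surfaces anything outside that initial category:

```python
from collections import Counter

# Hypothetical content catalogue: item -> category.
catalogue = {
    "article_a": "politics_left", "article_b": "politics_left",
    "article_c": "politics_right", "article_d": "sports",
}

def recommend(click_history):
    # Prioritise content aligned with the user's dominant category so far.
    top_category = Counter(catalogue[c] for c in click_history).most_common(1)[0][0]
    return [item for item, cat in catalogue.items()
            if cat == top_category and item not in click_history]

history = ["article_a"]
for _ in range(3):
    for item in recommend(history):
        history.append(item)  # the user clicks what is recommended

# Every recommendation stayed inside the user's initial category.
print(sorted({catalogue[c] for c in history}))  # ['politics_left']
```

Production recommenders optimise engagement with far richer signals, but the feedback loop is the same: recommendations shape clicks, and clicks shape the next recommendations.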

LAWS GOVERNING AI BIAS

Most countries do not yet have strong legislation specifically addressing AI bias. However, existing data protection laws and emerging AI regulations do tackle related issues in areas such as the public sector and housing.

  1. The EU AI Act 2024 prohibits ‘unacceptable risk’ systems, including manipulative and social scoring AI. It also establishes strict standards for “high-risk” AI in areas such as human oversight, transparency, and data quality to reduce discrimination.
  2. In India, the Artificial Intelligence (Ethics and Accountability) Bill was introduced in the Lok Sabha on 17 December 2025. The bill aims to establish an AI Ethics Committee and to address discrimination caused by high-risk AI systems on the basis of race, caste, and gender.
  3. In Canada, the Artificial Intelligence and Data Act (AIDA) explicitly defines “biased output” as an unjustified and adverse differential impact based on grounds protected under the Canadian Human Rights Act.
  4. Both South Korea and Japan passed comprehensive legislation governing AI in 2025, focusing on preventing discrimination caused by AI. They mandate transparency and require developers to avoid using biased training data.

WAYS TO MITIGATE AI BIAS AND ENFORCE EQUALITY

  • A key method for promoting equality and reducing AI bias is to adopt an interdisciplinary approach in AI development. This approach ensures that AI technologies are developed with consideration of societal contexts, rather than solely focusing on technical aspects. By integrating insights from various fields, we can address social issues and inequalities that might otherwise be overlooked. This helps prevent the amplification of biased content and the reinforcement of existing social inequalities.
  • Regular audits must be conducted to ensure that the datasets and training materials an AI system relies on are free from bias.
  • Establish clear policies and governance frameworks that mandate transparency and non-discrimination in AI systems.
  • Companies can invest in training programs that emphasise inclusive practices and awareness of bias in AI. This may include workshops or collaborations with external organisations to promote best practices.
  • Passing laws that mandate transparency, human oversight, and bias mitigation in AI systems.
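One concrete form a regular audit can take is a disparate-impact check such as the “four-fifths rule” used in US employment practice: the selection rate for any group should be at least 80% of the rate for the most-favoured group. A minimal sketch, with decision records invented for illustration:

```python
# Audit a model's decisions for disparate impact using the four-fifths
# rule: flag any group whose selection rate falls below 80% of the
# highest group's rate. The decisions below are purely illustrative.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

# Compute each group's selection rate.
rates = {}
for group in {g for g, _ in decisions}:
    outcomes = [selected for g, selected in decisions if g == group]
    rates[group] = sum(outcomes) / len(outcomes)

best = max(rates.values())
flagged = {g for g, r in rates.items() if r < 0.8 * best}
print(flagged)  # group_b's 25% rate is far below 80% of group_a's 75%
```

A statistical check like this is only a starting point, since proxies for protected attributes can pass a simple rate comparison while still encoding bias, which is why such audits complement rather than replace the governance and transparency measures above.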

CONCLUSION

Artificial Intelligence holds vast potential to transform various sectors, but it simultaneously poses a threat, which cannot be ignored, of spreading and reinforcing social inequalities and discrimination. AI bias, deeply rooted in prejudiced training data, highlights the need for legislation curbing the amplification of stereotypes and biases. Real-world instances such as SyRI and robodebt demonstrate the severe consequences of a lack of regulation. Immediate and decisive action is required to establish ethical standards and regulations that enforce fairness and accountability in the deployment of AI technologies.

Author(s) Name: Aishwarya Sambasivan (Sastra Deemed University Thanjavur)

References:

[1] ‘What is Artificial Intelligence (AI)’ (GeeksforGeeks, 7 October 2025) https://www.geeksforgeeks.org/artificial-intelligence/what-is-artificial-intelligence-ai/ accessed 17 January 2026.

[2] James Holdsworth, ‘What Is AI Bias?’ (IBM, 21 December 2023) https://www.ibm.com/think/topics/ai-bias accessed 17 January 2026.

[3] Ibid.

[4] T Bircan and MF Özbilgin, ‘Unmasking Inequalities of the Code: Disentangling the Nexus of AI and Inequality’ (2025) 211 Technological Forecasting and Social Change 123925.

[5] Frances Mao, ‘Robodebt: Illegal Australian welfare hunt drove people to despair’ (BBC News, 7 July 2023) https://www.bbc.com/news/world-australia-66130105 accessed 17 January 2026.

[6] ‘UK exams debacle: how did this year’s results end up in chaos?’ (The Guardian, 17 August 2020) https://www.theguardian.com/education/2020/aug/17/uk-exams-debacle-how-did-results-end-up-chaos accessed 17 January 2026.

[7] Zinnya del Villar, ‘How AI reinforces gender bias—and what we can do about it’ (UN Women, 5 February 2025) https://www.unwomen.org/en/news-stories/interview/2025/02/how-ai-reinforces-gender-bias-and-what-we-can-do-about-it accessed 17 January 2026.

[8] ‘Amazon scrapped “sexist AI” tool’ (BBC News, 10 October 2018) https://www.bbc.com/news/technology-45809919 accessed 17 January 2026.

[9] T Bircan and MF Özbilgin, ‘Unmasking Inequalities of the Code: Disentangling the Nexus of AI and Inequality’ (2025) 211 Technological Forecasting and Social Change 123925.

[10] Ibid.