Are AI Chatbots Becoming More Racist?

Introduction    

The output of racist comments on Twitter by Tay, an artificial intelligence (AI) chatbot created by Microsoft, demonstrates the dangers of models picking up and regurgitating racism and prejudice. A recent report published in arXiv, an open-access research archive from Cornell University, has also revealed that large language models, such as Open AI’s ChatGPT, hold racist stereotypes about speakers of African American Vernacular English (AAVE). Is AI becoming more racist?

 

AI & Covert Racism

A team of researchers found that a number of large language models (LLMs) including GPT-4 and GPT-3.5, which power the popular and commercially available ChatGPT, possess covert prejudice towards speakers of African American English. Indeed, Valentine Hofmann from the Allen Institute for AI, and one of the researchers on the project, tweeted about the discovered ‘form of covert racism in [large language models] that is triggered by dialect features alone, with massive harms for affected groups.’

 

The research demonstrates and highlights a range of shocking findings. For example, it was found that the AI was more likely to describe an AAVE speaker as ‘lazy’ or ‘stupid’ when similar sentences using AAVE were compared against Standardized American English (SAE). Moreover, some systems were more likely to recommend the death penalty to a fictional defendant speaking in AAVE compared to a defendant speaking in SAE.

 

Concerns about AI models displaying overt racism, such as certain LLMs persistently linking Muslims with violence, have been studied more widely. However, covert racism, such as suggesting negative stereotypes associated with people of a certain race, is yet to be explored in great detail.

 

The Flaws in AI Training

A lot of these issues can be ascribed to known flaws in the training of the LLMs. Models are trained on vast amounts of data and scrape the internet for information. While their results may become more accurate the more data they process, if the information they are gathering is racist, the model will internalise biases and include them in their outputs, as demonstrated by Microsoft’s Tay.

 

Due to existing concerns with overt racism, companies behind these models have attempted to implement AI safety training, which includes utilising human feedback to refine responses and eventually reduce bias. However, while this training may be effective at reducing some overt biases, the study concluded that covert biases remain. To a great extent, this occurs when identity terms, such as mentions of a person’s race, are not used. Moreover, such training may produce unintended effects.  For example, images of popes and founding fathers as people of colour created by Google’s Gemini can be attributed to the company’s tuning of the model to generate people of several races; failing to account for scenarios where a range may not be appropriate.

 

As a result, much work remains to be done in terms of AI training to ensure that models do not pick up overt and covert racism found online.

 

Wider Impact

As AI models permeate an ever-increasing number of industries, the impact of these findings will have consequences beyond offending groups of individuals.

 

As previously mentioned, researchers found that when creating a hypothetical criminal scenario with a first degree murder, some systems were more likely to recommend the death penalty to defendants speaking in AAVE compared to those speaking in SAE. Although currently, it is not up to AI chatbots to hand down judgements in murder trials, the systems are gradually being integrated into various professions, including ones within the US criminal justice system.

 

The report similarly found that, when prompted to allocate jobs to different individuals, certain AI chatbots would associate AAVE speakers with professions that do not require a university education. In the meantime, SAE speakers were more likely to be allocated high-paying jobs which demand further education.

 

Thus, by accepting the report’s findings, one could assume that covert racism may be finding its way into a number of industries which are adopting these systems.

 

Legal Issues

In addition to the wider impact of these discoveries, a number of legal issues may be in play.

 

Anti-discrimination laws are prevalent in most jurisdictions. Therefore, the example of bias against speakers of AAVE may trigger protection under these laws when AI models are utilised in certain contexts. For instance, employment discrimination laws may be of concern to recruiters who implement AI into their recruitment processes, especially in a world which seeks to improve diversity and inclusion in the workplace.

 

AI models displaying covert racism may also engage a number of human rights under the ECHR, such as Article 6 and the right to a fair trial.

 

Finally, training AI models raises the usual concerns about data protection and privacy laws. As these LLMs scrap large swathes of the internet, tech companies will need to ensure they are compliant with various data protection regimes.

 

What’s next?

While the report published in arXiv is yet to be peer-reviewed, the findings have the potential to frame certain tech companies working on AI models in a negative light. While companies such as OpenAI are already grappling with the task of training their models to not produce overtly racist output, they will also need to address the covert racism that chatbots like ChatGPT seem to be internalising. It can be therefore assumed that further training and tweaking of these LLMs can be expected in the near future.

 

Conclusion

The revelation of covert racism within AI models highlights a concerning flaw in their training, leading to biased outputs. This not only raises legal issues regarding discrimination and human rights but also threatens to perpetuate societal inequalities. Addressing these biases is imperative for the tech industry to uphold ethical standards and ensure fair and inclusive AI systems in the future.