What Do Algorithms Tell Us About Prejudice in Society?
Addressing learnt bias

People are often put off when they hear the word algorithm and dismiss it as too technical to understand. Yet algorithms are all around us. They have a huge impact on society, on democracy, and on how we experience our digital world, so understanding them, at least at some level, is crucial.
I recently wrote part of an essay for my Master’s degree on the idea that algorithms can perpetuate and magnify prejudice, allowing it to continue unchecked. While researching the topic, I came across several facts I found fascinating and wanted to share with you all.
It is often assumed that machine learning techniques are particularly useful because they eliminate human biases. Humans are certainly biased and error-prone, but that does not mean algorithms are necessarily better. It can be argued that “an algorithm is only as good as the data it works with” (Barocas and Selbst, 2016, p.671). To put it simply, the input affects the output. Since algorithms are trained on human data, it is quite possible that they acquire human prejudices in turn. According to Barocas and Selbst (2016), this happens when machine learning algorithms treat prejudiced decisions as valid examples to learn from, or when they draw inferences from a biased sample. If so, this poses a serious threat to equality and raises numerous ethical concerns.
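To make the “input affects the output” point concrete, here is a minimal sketch with made-up data (the feature names and numbers are my own illustrative assumptions, not taken from any study cited here): a simple classifier trained on historically biased hiring decisions reproduces that bias through a proxy feature, even though group membership is never given to it as an input.

```python
# Minimal sketch with invented data: historical bias leaks back in via a proxy feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)                # 0 = group A, 1 = group B (never used as a feature)
skill = rng.normal(0, 1, n)                  # identically distributed in both groups
postcode = group + rng.normal(0, 0.3, n)     # proxy feature correlated with group

# Historical labels: equally skilled people in group B were hired less often.
hired = (skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0

X = np.column_stack([skill, postcode])
model = LogisticRegression().fit(X, hired)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g}: predicted hire rate = {pred[group == g].mean():.2f}")
# The model recovers the historical disparity through the postcode proxy.
```

Nothing about the algorithm itself is “prejudiced”; it simply learns the regularities present in the biased training labels.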
It comes as no surprise that discrimination persists in our society. Its stubborn pervasiveness in employment, credit, housing, and consumer markets has been studied extensively (Pager and Shepherd, 2008). Racism in various justice systems is also widespread and has recently gained long-overdue attention through the Black Lives Matter movement.
Prosecutors in Harris County have been found to be three to four times more likely to seek the death penalty against defendants of color (Paternoster, 2015). Another study found that in New York City, Black and Latino males between the ages of fourteen and twenty-four accounted for 40.6% of police stop-and-frisk checks, despite making up only 4.7% of the city’s population (NYCLU, 2012).
Automated prejudice in artificial intelligence (AI) has also been documented in various contexts, one of the most detrimental being racially biased criminal sentencing (Douglas et al., 2017). The use of algorithms in such cases often amplifies, and even contributes to, existing inequalities in society, such as institutional racism. One example involves COMPAS, a risk assessment tool used in the United States to forecast the likelihood that offenders will reoffend. Angwin et al. (2016) found that when the COMPAS algorithm was wrong, it was wrong in very different ways depending on race: Black defendants were almost twice as likely as white defendants to be labeled high risk and yet not go on to reoffend.
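The disparity Angwin et al. describe concerns the false positive rate: among people who did not reoffend, the share the tool still labeled high risk. A short sketch with invented counts (not the actual ProPublica figures) shows how this rate can differ sharply between groups even when each group contains the same number of reoffenders.

```python
# Hypothetical counts for illustration only, not the ProPublica data.
# Each record is (labelled_high_risk, actually_reoffended) for one defendant.
def false_positive_rate(records):
    """Share of people who did NOT reoffend but were still labelled high risk."""
    non_reoffenders = [flagged for flagged, reoffended in records if not reoffended]
    return sum(non_reoffenders) / len(non_reoffenders)

group_a = [(True, False)] * 20 + [(False, False)] * 80 + [(True, True)] * 35 + [(False, True)] * 15
group_b = [(True, False)] * 40 + [(False, False)] * 60 + [(True, True)] * 35 + [(False, True)] * 15

print(false_positive_rate(group_a))  # 0.2 -- 20% of non-reoffenders wrongly flagged
print(false_positive_rate(group_b))  # 0.4 -- twice the rate of wrongful "high risk" labels
```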
In her book, Weapons of Math Destruction, Cathy O’Neil examines the social ramifications of algorithms and argues that many big data algorithms reinforce and magnify preexisting societal inequalities. Instead of eliminating bias, she points out, we are simply camouflaging it with technology. She describes how algorithms contribute to and perpetuate the wealth gap through student loans in the United States: a lending model deems a poor applicant too risky for a student loan because of his postcode, which in turn cuts him off from the very education that could have alleviated his poverty.

O’Neil (2016) also discusses how algorithmic prediction in policing increases the chance that poorer individuals will be caught for crimes that are committed at comparable rates in wealthier neighborhoods, simply because the algorithms keep sending more police to poor areas. The book is well worth a read, especially for anyone who believes that data doesn’t lie.
Similarly, in her book Hello World, Hannah Fry warns of the problems with algorithmic policing by highlighting the risk of a feedback loop: the algorithm learns from the reports police record, not from actual crime rates. Algorithmic predictions used by police forces in England and Wales have been found to predict future policing rather than future crime (Oswald and Babuta, 2019). The same has been shown for PredPol, an algorithm used in several American states to predict crime (Lum and Isaac, 2016).
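A toy simulation makes this feedback loop concrete. The dynamics below are an assumption for illustration, not how PredPol or any real system works: two areas have the same underlying crime rate, but crime is only recorded where patrols are sent, and patrols are sent wherever past records are highest.

```python
# Toy feedback-loop simulation (assumed dynamics, purely illustrative):
# both areas have the SAME true crime rate, but only patrolled crime gets recorded,
# and the "prediction" sends patrols wherever recorded crime is highest.
import random

random.seed(1)
crimes_per_day = 5                 # identical in both areas
records = [12, 10]                 # area 0 happens to start with slightly more reports

for day in range(200):
    patrolled = 0 if records[0] >= records[1] else 1          # follow the data
    # only crime in the patrolled area is observed; each incident recorded with p=0.5
    records[patrolled] += sum(random.random() < 0.5 for _ in range(crimes_per_day))

print(records)  # roughly [510, 10]: the small head start hardens into a self-confirming pattern
```

Because the second area generates no new records, the system never learns that its predictions are skewed; it simply keeps confirming them, which is exactly what “predicting future policing rather than future crime” means.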
Such algorithmic bias is also found in facial recognition. This can be especially dangerous because facial recognition systems are increasingly used by law enforcement and are a potential source of racial and gender bias (Cossins, 2018). Buolamwini and Gebru (2018) investigated three commercial gender classification AI systems and found that the error rate for light-skinned males was 0.8%, while for darker-skinned females it reached up to 34.7%. An article discussing these findings was aptly titled ‘Face-recognition software is perfect – if you’re a white man’ (Revell, 2018).
These examples illustrate how algorithms can amplify existing human biases, setting off a vicious cycle that affects ever larger populations.
Interestingly, a study by Caliskan et al. (2017) found that algorithms trained on news data quickly learned race- and gender-based biases. This is of particular interest because many people regard the news as an objective source of information. According to the researchers, it is not surprising that bias emerges even when an unbiased algorithm is used to derive regularities from a dataset, because the regularities it discovers are the bias. Importantly, the computational model is exposed to language in much the same way humans are. The results of Caliskan et al. (2017) indicate that language carries imprints of our historic biases, which are frequently problematic. Such findings hold vital implications not only for AI but also for our understanding of humans in the field of psychology, because they suggest that mere exposure to language could account, at least in part, for biases in humans, just as it did in the algorithms.
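The measurement behind this finding is worth seeing in miniature. Below is a rough sketch of the kind of association test Caliskan et al. used (a Word Embedding Association Test). The tiny hand-made vectors are invented placeholders; the actual study measured these associations in embeddings learned from billions of words of real text.

```python
# WEAT-style association sketch. The 3-d "embeddings" below are invented
# placeholders; real word embeddings are learned from large text corpora.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

vec = {  # hypothetical vectors, chosen only to illustrate the measurement
    "engineer": np.array([0.9, 0.2, 0.1]),
    "nurse":    np.array([0.1, 0.9, 0.2]),
    "he":       np.array([0.8, 0.1, 0.3]),
    "she":      np.array([0.2, 0.8, 0.3]),
}

def association(word, attrs_a, attrs_b):
    """How much closer `word` sits to attribute set A than to attribute set B."""
    return (np.mean([cosine(vec[word], vec[a]) for a in attrs_a])
            - np.mean([cosine(vec[word], vec[b]) for b in attrs_b]))

print(association("engineer", ["he"], ["she"]))  # positive: leans toward "he"
print(association("nurse", ["he"], ["she"]))     # negative: leans toward "she"
```

In embeddings trained on real text, occupation words show exactly these kinds of systematic gender and race associations, which is how the “regularities discovered are the bias”.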
I then came across a newer study by DeFranza et al. (2020), which provides further evidence for the link between mere exposure to language and prejudice in humans. Their research indicates that gender prejudice is more common among speakers of gendered languages (i.e. languages in which nouns, and often the words that accompany them, are marked as masculine or feminine), such as Hindi, French, and Spanish. These findings suggest that the ‘genderedness’ of such languages makes gender more salient in the mind, and that language can therefore both shape and communicate human thought, particularly where prejudice is concerned.
Overall, the finding that algorithms can learn bias reflects how widespread prejudice is in society and exposes clear patterns of inequality, which the algorithms then magnify. Recognizing prejudice in seemingly unbiased algorithms deepens our understanding of society, and of how the human mind may acquire biases too. The notion that algorithms eliminate human biases is therefore mistaken: in practice, they can reproduce those biases and entrench our society’s unjust inequalities.
References:
- Angwin, J., Larson, J., Mattu, S. and Kirchner, L., 2016. Machine bias: risk assessments in criminal sentencing. ProPublica, 23 May.
- Barocas, S. and Selbst, A.D., 2016. Big data's disparate impact. California Law Review, 104, p.671.
- Buolamwini, J., and Gebru, T., 2018. Gender shades: Intersectional accuracy disparities in commercial gender classification. In Conference on fairness, accountability and transparency (pp. 77-91).
- Caliskan, A., Bryson, J.J. and Narayanan, A., 2017. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334), pp.183-186.
- Cossins, D., 2018. Discriminating Algorithms: 5 Times AI Showed Prejudice. [online] New Scientist. Available at: <https://www.newscientist.com/article/2166207-discriminating-algorithms-5-times-ai-showed-prejudice/> [Accessed 11 June 2020].
- DeFranza, D., Mishra, H. and Mishra, A., 2020. How language shapes prejudice against women: An examination across 45 world languages. Journal of Personality and Social Psychology.
- Douglas, T., Pugh, J., Singh, I., Savulescu, J. and Fazel, S., 2017. Risk assessment tools in criminal justice and forensic psychiatry: the need for better data. European Psychiatry, 42, pp.134-137.
- Fry, H., 2018. Hello World: How to be Human in the Age of the Machine. Random House.
- Lum, K. and Isaac, W., 2016. To predict and serve? Significance, 13(5), pp.14-19.
- NYCLU, 2012. Stop-and-Frisk 2011. [online] Available at: <https://www.nyclu.org/sites/default/files/publications/NYCLU_2011_Stop-and-Frisk_Report.pdf> [Accessed 31 May 2020].
- O'Neil, C., 2016. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Broadway Books.
- Oswald, M. and Babuta, A., 2019. Data Analytics and Algorithmic Bias in Policing.
- Pager, D. and Shepherd, H., 2008. The sociology of discrimination: Racial discrimination in employment, housing, credit, and consumer markets. Annual Review of Sociology, 34, pp.181-209.
- Paternoster, R., 2015. Racial disparity in the case of Duane Edward Buck. [online] Available at: <https://www.naacpldf.org/wp-content/uploads/Duane-Buck-FINAL-Signed-Paternoster-Report-00032221-1.pdf>
- Revell, T., 2018. Face-Recognition Software Is Perfect – If You’re A White Man. [online] New Scientist. Available at: <https://www.newscientist.com/article/2161028-face-recognition-software-is-perfect-if-youre-a-white-man/> [Accessed 11 June 2020].
About the Creator
Simran Lavanya Saraf
When I'm not oversharing my thoughts on the internet, you will find me devouring chocolate, making good use of my Netflix account, or asking strangers if I can pet their dogs.