Amazon's AI recruiting tool that showed bias against women

By Andrew D · Published 2 years ago · 4 min read

Firstly, one must understand what bias is and how it affects AI systems. The Cambridge English Dictionary defines bias as “the action of supporting or opposing a particular person or thing in an unfair way, because of allowing personal opinions to influence your judgement”. From this definition, the danger of allowing bias into AI systems is clear. It is the programmer’s duty to eliminate, or at least minimise, bias in algorithms: programs are not inherently biased, but the people designing them can be. By keeping personal judgement and prejudice out of AI systems, these systems would generalise better and produce more accurate results. However, removing bias when building AI systems can be difficult, because, as the next section shows, spotting bias in the first place is often a problematic task.

To understand how bias can creep into AI systems, consider one of the world’s biggest companies, Amazon, and the fundamental problem with its hiring tool: it favoured male applicants. As Reuters reported, “Amazon scraps secret AI recruiting tool that showed bias against women”[1]. Amazon had developed a hiring tool that used AI to score job candidates, but it was not doing so in a gender-neutral way: the score given to an applicant had more to do with gender than with qualifications. This bias towards male candidates arose from how the algorithm was trained. “Amazon’s computer models were trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period.” The problem with this approach was that most of those resumes came from men. The system taught itself to penalise resumes that included the word “women’s”, as in “women’s football team”, and it downgraded graduates of two all-women’s colleges. It was also observed that the system favoured resumes containing more masculine language. Given the rising popularity of hiring tools at large companies, it is crucial that the underlying algorithms do not discriminate. As seen above, even though gender and names were not explicit inputs, the AI system still ended up biased in favour of male candidates.

The Reuters article gives an overview of the problem; this section takes a deeper look at how Amazon’s hiring tool was biased. To understand bias in AI, one must know something about the design of the system[2]. This tool was designed to assign a score based on an individual’s resume. Designed this way, it was bound to produce many false negatives, that is, to wrongly disqualify potentially strong candidates: having a different background or gender from past hires does not make a candidate a weak option, yet the system was trained to treat it exactly that way.
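To make the idea of false negatives concrete, here is a minimal sketch in Python using entirely hypothetical screening outcomes (Amazon’s real data and model are not public). It compares the false-negative rate, the share of genuinely strong candidates the tool rejects, between two groups; a large gap between groups is exactly the kind of bias described above.

```python
# A minimal audit sketch with entirely hypothetical screening outcomes:
# it compares false-negative rates between two candidate groups.
def false_negative_rate(y_true, y_pred):
    """Share of genuinely strong candidates (y_true == 1) that were rejected."""
    misses = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    positives = sum(y_true)
    return misses / positives if positives else 0.0

# 1 = strong candidate (truth) or advanced by the tool (prediction), 0 = not.
groups = {
    "men":   ([1, 1, 1, 1, 0], [1, 1, 1, 0, 0]),  # 1 of 4 strong candidates missed
    "women": ([1, 1, 1, 1, 0], [1, 0, 0, 0, 0]),  # 3 of 4 strong candidates missed
}
for group, (truth, predicted) in groups.items():
    print(f"{group}: false-negative rate = {false_negative_rate(truth, predicted):.2f}")
```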

How was it trained to assume this? Machine learning models rely heavily on data to make predictions, so it is crucial that the data presented to an algorithm during training is relevant and representative enough to support intelligent predictions. In this case, Amazon used resumes submitted by past candidates. For the algorithm, ‘good’ resumes were those that had led to an offer from a hiring manager, and those candidates were mostly male, resulting in a class imbalance. With only this dataset, the algorithm failed to generalise accurately, because the data was not a representative sample of the population of software-engineering resumes. On top of that, Amazon used a ‘bag of words’ of keywords to score the resumes. Clearly this NLP component did not work as intended, since the algorithm was found to promote masculine words while penalising feminine ones.
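The sketch below illustrates this mechanism. It is not Amazon’s model or data; it is a toy using scikit-learn’s CountVectorizer (a bag of words) and a logistic regression on a handful of made-up resume snippets with an imbalanced “hired” label, showing how a token like “women” can end up with a negative weight simply because it correlates with rejection in the training set.

```python
# Toy sketch (not Amazon's model or data): a bag-of-words classifier
# trained on an imbalanced set of made-up resume snippets, where the
# "hired" label happens to correlate with gendered wording.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "captain of men's chess club, executed product launch",
    "led engineering team, executed migration to cloud",
    "men's football team, exceeded aggressive growth targets",
    "built distributed systems, executed performance tuning",
    "captain of women's chess club, executed product launch",
    "women's football team, led fundraising initiative",
]
hired = [1, 1, 1, 1, 0, 0]  # class imbalance: the hired examples skew masculine

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Inspect the learned weights: a token that only appears in rejected
# resumes (here "women") ends up with a negative coefficient.
weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
for token in ("women", "men", "executed", "led"):
    print(f"{token!r}: {weights.get(token, 0.0):+.3f}")
```

Nothing in this toy tells the model that gender is irrelevant; it simply learns whatever separates the two classes, which is why an unbalanced historical dataset quietly becomes a gendered scoring rule.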

It is shocking that a company as influential in automation as Amazon failed to notice these issues while building the hiring tool. How can this be prevented in the future? First of all, when creating algorithms one must be aware of the bias and influence being taught to the AI. In this specific case, it is well established that diversity in the workplace improves overall team performance, so these considerations must be taken into account when building a hiring tool. A solution might be to promote diversity instead of penalising it: where a company needs, for legal or ethical reasons, to hire from under-represented groups, the algorithm can be made to promote words associated with those groups when necessary, so that the target score aligns closely with the real company goal.
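As an illustration of that idea, the sketch below (again a generic technique, not what Amazon did) applies two simple mitigations to the same toy snippets: scrubbing explicitly gendered tokens before vectorising, and re-balancing the training labels with per-sample weights so the minority class is not drowned out.

```python
# A generic mitigation sketch (not Amazon's approach): strip explicitly
# gendered tokens so they cannot carry signal, and re-balance the
# training labels with per-sample weights.
import re
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.utils.class_weight import compute_sample_weight

GENDERED = re.compile(r"\b(women'?s?|men'?s?|female|male)\b", re.IGNORECASE)

def scrub(text: str) -> str:
    """Remove explicitly gendered words before vectorising."""
    return GENDERED.sub(" ", text)

# Same made-up snippets and labels as the previous sketch.
resumes = [
    "captain of men's chess club, executed product launch",
    "led engineering team, executed migration to cloud",
    "men's football team, exceeded aggressive growth targets",
    "built distributed systems, executed performance tuning",
    "captain of women's chess club, executed product launch",
    "women's football team, led fundraising initiative",
]
hired = [1, 1, 1, 1, 0, 0]

X = CountVectorizer().fit_transform(scrub(r) for r in resumes)
sample_weights = compute_sample_weight(class_weight="balanced", y=hired)
model = LogisticRegression().fit(X, hired, sample_weight=sample_weights)
```

Scrubbing only removes explicit gendered words; proxies such as all-women’s college names or masculine-coded verbs can still carry signal, which is why auditing the model’s outputs by group, as in the false-negative sketch earlier, remains necessary.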

In recent years there has been a rapid rise in Natural Language Processing applications, and many companies have introduced automation to handle parts of their business. As seen above, even Amazon failed to notice the bias in its AI. These NLP systems affect society as a whole, especially when they are used to hire people, so it is crucial that there is no discrimination. Bias can take many forms, and it is up to the programmers and designers to be knowledgeable about it and to minimise its effect. Only then can these AI systems be truly beneficial to society.

[1] https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G

[2] https://becominghuman.ai/amazons-sexist-ai-recruiting-tool-how-did-it-go-so-wrong-e3d14816d98e
