
What is AI Bias? Here's Everything You Need to Know

Bias is not only harmful to minority groups; it can also erode empathy for diversity.

By Prio Danny Kuncoro · 6 min read

What is AI bias? Newcomers to the term may find it confusing, because the problem often goes unnoticed. Yet bias is one of the risks of AI technology, and it can harm particular groups.

As knowledge of artificial intelligence grows and the technology matures, bias in AI has become the subject of considerable research among developers.

In this context, bias describes a situation in which the data analysis inside a machine learning system produces results that disadvantage certain groups. Bias can involve gender, race, age, physical condition, language, skin color, and culture.

The explanation below makes it easier to understand what AI bias is.

What is AI bias?

AI is built on algorithms and data, so AI bias results from errors in the machine learning process. These errors become especially problematic when the context involves race, skin color, or gender.

Although it is considered cutting-edge technology, AI is built on a set of algorithms designed by a group of people, and its outputs mirror the "design mindset" that group taught it.


The advanced technology embedded in AI is not immune to bias. One clear example is the image generator applications that have recently surged in popularity: users only need to type a prompt, and the application generates the desired image.

However, a paper by Federico Bianchi et al. found that the generated images, including human faces, reinforce harmful stereotypes around gender, race, social disparities, and criminality. This is a clear example of bias in AI.

Examples of AI bias in app-based systems

A simple example of AI bias is facial recognition technology, which is designed to recognize human faces automatically. The problem is that facial recognition algorithms are often trained primarily on white faces rather than on people of color.

When facial recognition technology struggles to recognize people of color but succeeds when the subject is white, it crosses into discrimination: minority groups end up without the same opportunities, and the bias persists.
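To see how such a disparity is detected in practice, an audit can be as simple as comparing recognition accuracy across demographic groups. The sketch below uses hypothetical records; in a real audit, the predictions would come from running a face recognition model on a demographically labeled test set.

```python
from collections import defaultdict

# Hypothetical evaluation records; real ones would pair a model's output
# with ground truth for each test image.
records = [
    # (demographic_group, true_identity, predicted_identity)
    ("group_a", "alice", "alice"),
    ("group_a", "bob", "bob"),
    ("group_b", "carol", "dave"),   # a misidentification
    ("group_b", "erin", "erin"),
]

correct, total = defaultdict(int), defaultdict(int)
for group, truth, prediction in records:
    total[group] += 1
    correct[group] += int(truth == prediction)

# A large gap between groups signals exactly the bias described above.
for group in total:
    print(f"{group}: accuracy = {correct[group] / total[group]:.0%}")
```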

In another finding from 2019, a group of researchers discovered that a hospital in the United States used an algorithm that favored white patients over Black patients by a considerable margin when predicting which patients needed additional care.


Another example is the PortraitAI art generator, which produces images with a touch of Baroque and Renaissance painting styles. The app renders portraits that resemble classic European paintings and works well for fair-skinned faces.

But when BIPOC users try it, the results look unsatisfactory. The reason is that PortraitAI's algorithm draws on a database of paintings of white Europeans, who dominated portraiture in that era.

PortraitAI, the developer, acknowledged the problem, saying: "Currently, the AI portrait generator has mostly been trained on portraits of people of European ethnicity. We plan to expand our dataset and improve it in the future."

Examples of common AI biases

1. PredPol algorithm's bias against minorities

PredPol, short for predictive policing, is an algorithm used by US police departments in states such as Florida, Maryland, and California. Its goal is to predict where crimes are likely to occur in the future.

The PredPol algorithm, which uses artificial intelligence, works from crime data collected by police, such as the number of arrests and police calls in a given place. Unfortunately, according to research by several experts, PredPol carries significant bias.

PredPol has been known to repeatedly direct police officers to certain neighborhoods dominated by racial minorities, regardless of how much crime actually occurs in the area.

2. COMPAS algorithm's bias against Black communities

ProPublica, a Pulitzer Prize-winning nonprofit news organization, documented algorithmic bias against certain communities in COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), software used by law enforcement in the United States.

COMPAS was built by the developer Northpointe using artificial intelligence, with the aim of predicting which offenders are likely to commit crimes again.

According to ProPublica's findings, COMPAS tended to rate Black defendants as high risk of re-offending, while white defendants were rated as lower risk.
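ProPublica's core comparison was of error rates: among defendants who did not go on to re-offend, how often was each group labeled high risk? The sketch below illustrates that false positive rate check on hypothetical stand-in records, not ProPublica's actual data.

```python
def false_positive_rate(rows):
    """rows holds (predicted_high_risk, actually_reoffended) pairs."""
    negatives = [r for r in rows if not r[1]]   # people who did not re-offend
    return sum(r[0] for r in negatives) / len(negatives) if negatives else 0.0

# Hypothetical records for two demographic groups.
group_a = [(True, False), (False, False), (True, True), (False, False)]
group_b = [(True, False), (True, False), (False, False), (True, True)]

print("group A FPR:", false_positive_rate(group_a))  # 1 of 3 -> ~0.33
print("group B FPR:", false_positive_rate(group_b))  # 2 of 3 -> ~0.67
```

An imbalance like this, where one group is wrongly flagged far more often, is the pattern ProPublica reported.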

3. Gender-based AI bias in Amazon's hiring process

Amazon's effort to screen job applicants with artificial intelligence created gender bias against female job seekers in its hiring process. This happened in part because, historically, the company's manual hiring was dominated by men.

That inherent bias against female candidates likely entered through the data used to train the AI to analyze candidates' resumes. Amazon's own review of the algorithm yielded some surprising findings.

Amazon found that the algorithm automatically penalized resumes containing the word "women's" and downgraded graduates of two all-women's colleges. Amazon then decided to scrap the algorithm entirely.


How to minimize artificial intelligence bias?

Bias in artificial intelligence is one of the main challenges facing developers, even as AI is hailed as the most advanced technology of the Industry 4.0 era. So, what can be done to eliminate bias in AI?

1. Involve collaboration between artificial intelligence and humans

This approach is known as a human-in-the-loop system: humans step in to handle problems the AI cannot solve on its own. From there, an iterative (in-the-loop) process emerges that feeds back continuously.

Feedback from that loop becomes learning material the artificial intelligence uses to improve its capabilities, and human involvement makes the resulting outputs more accurate and precise.
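A minimal sketch of what such a loop can look like, assuming a hypothetical model_predict function and a confidence threshold below which cases are escalated to a human reviewer:

```python
def model_predict(item):
    # Placeholder model: a real system would call a trained classifier here.
    return ("approve", 0.55)   # (label, confidence)

def human_review(item):
    # Placeholder for a real human review queue.
    return "reject"

feedback = []  # (item, corrected_label) pairs collected for retraining
for item in ["application_1", "application_2"]:
    label, confidence = model_predict(item)
    if confidence < 0.7:               # low confidence: escalate to a human
        label = human_review(item)     # the human decision overrides the model
        feedback.append((item, label)) # and becomes new training material
    print(item, "->", label)
```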

2. Apply preprocessing and post-processing methods to AI systems

Preprocessing filters and sorts out potentially biased data and removes it before the data is used to train the AI. Simply put, it is designing the AI to be unbiased by teaching it with clean, balanced data.
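As a simple illustration, one preprocessing step might drop a sensitive attribute and rebalance the training set before training. The field names and records below are hypothetical:

```python
import random

# Hypothetical training records for a hiring model.
rows = [
    {"gender": "f", "experience": 5, "hired": 1},
    {"gender": "m", "experience": 3, "hired": 0},
    {"gender": "m", "experience": 7, "hired": 1},
]

SENSITIVE = {"gender"}

# Step 1: remove the sensitive column from every record.
cleaned = [{k: v for k, v in row.items() if k not in SENSITIVE} for row in rows]

# Step 2: rebalance so each outcome label is equally represented.
by_label = {}
for row in cleaned:
    by_label.setdefault(row["hired"], []).append(row)
n = min(len(v) for v in by_label.values())
balanced = [r for group in by_label.values() for r in random.sample(group, n)]
print(balanced)
```

Note that dropping a sensitive column alone is rarely enough, since other fields can act as proxies for it; this sketch only shows the general shape of the step.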

The post-processing method, in turn, reviews the results of the trained AI on the given data. If those results look likely to produce bias, system developers can adjust the generated predictions with attention to ethical considerations around race, gender, and ideology.
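One well-known post-processing technique, sketched below, adjusts decision thresholds per group after training so that outcomes come out more even. The scores and thresholds are hypothetical placeholders that would be tuned on validation data; this is one illustrative technique, not the only post-processing option.

```python
# Hypothetical model scores per applicant, tagged by group.
scores = [
    ("group_a", 0.62),
    ("group_a", 0.48),
    ("group_b", 0.55),
    ("group_b", 0.41),
]

# Per-group thresholds, assumed tuned on validation data so that positive
# rates come out roughly equal across groups.
thresholds = {"group_a": 0.60, "group_b": 0.50}

for group, score in scores:
    decision = "positive" if score >= thresholds[group] else "negative"
    print(f"{group} score={score:.2f} -> {decision}")
```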


3. Test algorithms against real-life diversity

Racial and gender bias are among the many biases AI produces, driven by factors ranging from developer teams dominated by certain races to unequal data. The remedy is to look back at real life, which is highly diverse.

AI is a machine trained on the data it takes in, which then shapes its algorithms. Those algorithms must be validated and tested against data that reflects real diversity, does not favor certain races, and upholds a genuinely fair principle of equality, as in the sketch below.
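Such a diversity check can be automated as a simple regression test that flags any group whose accuracy falls well below the overall average. The evaluate function, test data, and tolerance here are hypothetical:

```python
def evaluate(rows):
    # Fraction of (prediction, truth) pairs that match.
    return sum(pred == truth for pred, truth in rows) / len(rows)

# Hypothetical per-group test sets; real ones would pair model predictions
# with ground-truth labels for each demographic group.
test_sets = {
    "group_a": [("yes", "yes"), ("no", "no")],
    "group_b": [("yes", "no"), ("no", "no"), ("yes", "yes"), ("no", "no")],
}

accuracies = {g: evaluate(rows) for g, rows in test_sets.items()}
overall = sum(accuracies.values()) / len(accuracies)

# Flag any group that underperforms the average by more than 10 points.
for group, acc in accuracies.items():
    status = "OK" if acc >= overall - 0.10 else "FLAGGED"
    print(f"{group}: {acc:.0%} ({status})")
```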

4. Increase diversity on the developer side

It is no secret that artificial intelligence is developed mostly by white men. As a result, AI models are designed with biases that range from gender to communities of color.

Increasing developer diversity in demographics, gender, race, and skills reduces the likelihood of bias, because problems are recognized and mitigated quickly, before a model is released into production.

5. Invest more in improving AI quality

High-quality AI exhibits minimal bias, so it is better positioned to avoid conflicts over gender, racial, and other forms of equality. Developers and businesses need to collaborate and invest more in bias research.

Collaboration between business owners and experts in fields such as computing, law, social science, medicine, and ethics will produce a wealth of data for research into AI bias. This multidisciplinary approach will improve the quality of AI in practical applications.


Conclusion

Artificial intelligence is one of modern civilization's steadily growing advances. Yet the technology is a double-edged sword that can become a setback, because AI also creates bias.

Bias in AI arises from the use of data and assumptions that are themselves biased. As a result, artificial intelligence, which is expected to make human life easier, can instead undermine the essence of humanity itself.

People can help prevent bias in AI by playing an active role in recognizing discrimination in everyday life: increasing their knowledge by reading about AI and bias, engaging in discussions, and understanding inclusivity in society.


About the Creator

Prio Danny Kuncoro

A writer covering digital business and technology developments, who believes that technological disruption, such as artificial intelligence, will shape the future.
