
This Is What AI Considers to Be the Perfect Man and Woman

An eating disorder awareness organization is drawing attention to how artificial intelligence (AI) image generators disseminate unrealistic beauty standards derived from the internet data they were trained on.

By Najmoos Sakib | Published 12 months ago | 3 min read

The Bulimia Project prompted the image generators DALL-E 2, Stable Diffusion, and Midjourney to create the ideal female physique for social media in 2023, followed by a similar request for the male physique.

"Smaller women appeared in nearly all of the images created by Dall-E 2, Stable Diffusion, and Midjourney, but the latter came up with the most unrealistic representations of the female body," the Project noted in a blog post explaining their findings. "The same can be said for the male physiques it generated, which all look like photoshopped versions of bodybuilders."

The researchers found that slightly more men than women were depicted with unrealistic body types in the AI-generated images, at 40 percent. A striking 53 percent of the images featured olive skin tones, while 37 percent of the people depicted had blonde hair. The team then asked the generators to produce the "perfect man in 2023" as well as a more generic "perfect woman in 2023", without the social media framing.

The results showed that the social media-based images were more sexualized and featured more exaggerated, unrealistic body parts than those generated by the other prompts. "It's not hard to imagine why AI's representations might be more sexualized, given that social media utilizes algorithms based on whose material gets the most lingering looks. But we can only speculate that social media sites' promotion of unrealistic body types is the reason AI produced so many weirdly shaped replicas of the physiques it discovered there," the researchers stated.

AI generators have previously been found to exhibit racist and sexist biases, which they pick up from the datasets they are trained on. According to The Bulimia Project's results, they are also skewed toward unrealistic body shapes.

"In the age of Instagram and Snapchat filters, no one can reasonably achieve the physical standards set by social media," stated the team, "so why try to meet unrealistic ideals?" It's better for your emotional and physical health to keep your body image expectations in check."

You've probably heard by now that bias in AI systems is a major issue, whether it's facial recognition failing more frequently on Black people or AI image generators like DALL-E repeating racist and sexist tropes. But what does algorithmic bias look like in practice, and what causes it to manifest?

A new tool created by an AI ethics researcher seeks to answer that question by allowing anyone to query a popular text-to-image system and see for themselves how particular word combinations produce biased results.

The tool, which is hosted on HuggingFace, a popular GitHub-like platform for machine learning projects, was released this week in conjunction with the release of Stable Diffusion, an AI model that creates pictures from text prompts. The Stable Diffusion Bias Explorer is one of the first projects of its kind, allowing users to combine different descriptive phrases and see firsthand how the AI model maps them to racial and gender biases.

"We were like, oh, crap, when Stable Diffusion got put up on HuggingFace about a month ago," said Sasha Luccioni, a research scientist at HuggingFace who led the project. Because there were no existing methods for detecting bias in text-to-image models, "we started experimenting with Stable Diffusion and trying to understand what it represents and what latent, subconscious representations it has."

To do this, Luccioni developed a list of 20 pairs of descriptive words. Half of them were stereotypically feminine-coded, such as "gentle" and "supportive," while the other half were masculine-coded, such as "assertive" and "decisive." The tool then lets users combine these descriptors with a list of 150 occupations, ranging from "pilot" to "CEO" to "cashier."

As the findings show, the model produces very different kinds of faces depending on which descriptors are used. For instance, the word "CEO" almost invariably generates images of men, whereas descriptors like "supportive" and "compassionate" are more likely to produce images of women. Conversely, switching the descriptors to words such as "ambitious" and "assertive" makes the model more likely to generate images of men across many occupational categories.
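The Bias Explorer itself runs through a web interface, but the underlying descriptor-plus-occupation experiment can be sketched directly against Stable Diffusion. Below is a minimal sketch using the open-source diffusers library; the model checkpoint and the short word lists are illustrative assumptions, not the Bias Explorer's actual configuration or word lists.

# Minimal sketch: combine descriptors with occupations and generate one image
# per combination using Stable Diffusion via the diffusers library.
# The word lists and checkpoint below are illustrative, not the Bias Explorer's own setup.
import itertools

import torch
from diffusers import StableDiffusionPipeline

# Load a publicly available Stable Diffusion checkpoint (assumed model ID).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

descriptors = ["supportive", "compassionate", "ambitious", "assertive"]  # example adjectives
occupations = ["CEO", "pilot", "cashier"]                                # example occupations

for descriptor, occupation in itertools.product(descriptors, occupations):
    prompt = f"a photo portrait of a {descriptor} {occupation}"
    # Fixing the seed means swapping the descriptor is the only change between runs.
    generator = torch.Generator("cuda").manual_seed(0)
    image = pipe(prompt, num_inference_steps=30, generator=generator).images[0]
    image.save(f"{descriptor}_{occupation}.png")

Comparing the saved images for, say, "supportive CEO" against "assertive CEO" across a handful of seeds is essentially the comparison the Bias Explorer exposes through its interface.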


About the Creator

Najmoos Sakib

Welcome to my writing sanctuary

I'm an article writer who enjoys telling compelling stories, sharing knowledge, and sparking meaningful conversations. Join me as we explore the vast reaches of human experience and the artistry of words.


