
Could AI's Biggest Challenge Be... Itself?

Another Strike Against It

By Cody Dakota Wooten, C.B.C. · Published 8 months ago · 4 min read

There have been a few articles that I have written about AI in the past.

I have written about how if you create Legendary work, you have nothing to worry about from AI.

I also wrote about the AI Takeover, and how it actually isn't anything to worry about.

Now, I'm not "Anti-AI", I think there are good use cases that exist.

I'm not currently using it myself, and I'll definitely never be using it to write my articles for me, but that doesn't mean others can't utilize it well.

What I am doing is simply paying attention to how AI actually works, and the trends seem to suggest that it's nothing extremely impressive.

Is it fast and efficient? Absolutely.

Can it scour the internet to find data? Yes.

Can it create... "interesting" things by smashing together different things you ask it to? Check.

But more than anything else, it is simply able to quickly take a large amount of data, and spit out certain types of outputs (with varying levels of success).

However, I have recently learned that AI's biggest challenge may end up being itself.

Ilia Shumailov, a machine learning researcher, recently described a phenomenon called "AI Model Collapse".

Essentially, this is what occurs when an AI stops learning from humans and simply learns more from other AI.

Over time, the AIs will learn from more AI-generated content, and will then deteriorate as they "forget" the human-generated data they learned from and begin to copy AI patterns they've already seen.

It was described like this:

  • A model receives a data set containing 90 yellow objects and 10 blue ones.
  • Because there are more yellow objects, it begins to turn the blue objects greenish.
  • Over time, it forgets the blue objects exist.
  • Each generation of AI data eliminates "outliers", and the outputs stop reflecting reality.
  • In the end, all that is left is nonsense.

From this example, we begin to see that AI ends up being its own downfall. As AI content grows (especially given the speed it can be produced), it will begin to see Human content as the "outliers" and will push it out.

This leaves AI only learning from other AI until it Collapses from the huge amounts of inaccuracies that occur.
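The yellow/blue example above can be sketched as a toy simulation. To be clear, this is a deliberate simplification and not the actual training dynamics from Shumailov's research: here we simply assume each new "generation" of model under-represents rare classes by a fixed factor and drops anything that falls below a cutoff, which is enough to show the minority data vanishing.

```python
# Toy simulation of generational "Model Collapse".
# Assumptions (not from the original research): each generation
# shrinks every non-majority class's share by a fixed factor, and
# any class whose share drops below a cutoff is treated as an
# "outlier" and removed entirely.

def next_generation(dist, shrink=0.7, cutoff=0.01):
    """Return the class distribution the next model learns from."""
    majority = max(dist, key=dist.get)
    new = {}
    for cls, p in dist.items():
        if cls != majority:
            p *= shrink          # rare classes fade each generation
        if p >= cutoff:          # "outliers" below the cutoff are dropped
            new[cls] = p
    total = sum(new.values())
    return {cls: p / total for cls, p in new.items()}  # renormalize

dist = {"yellow": 0.90, "blue": 0.10}  # the 90/10 split from the example
for gen in range(1, 16):
    dist = next_generation(dist)
    print(f"generation {gen}: {{k: round(v, 4) for k, v in dist.items()}}")
    if "blue" not in dist:
        print(f"By generation {gen}, 'blue' has vanished from the data.")
        break
```

With these made-up numbers, "blue" disappears after only a handful of generations; the exact speed depends entirely on the assumed shrink factor and cutoff, but the one-way direction of the drift is the point.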

Now, a simple solution would theoretically be to stop AI from learning from other AI.

However, there is already an overflow of AI content being generated quickly.

It is already being called "spam" and "junk" all over the internet.

So, it would be extremely difficult, if not impossible, to prevent AI from learning from other AI.

However, there is a larger problem that will occur long before that point (assuming companies keep their AIs active long enough to reach pure nonsense).

No, the larger problem would be far more insidious.

What does the AI determine are the "outliers"?

In data sets, "outliers" tend to represent the minority of information.

However, the "outliers" also are where the largest amounts of innovations occur and where change begins.

Back in the day, Galileo became a champion of the Copernican theory of Heliocentrism (aka the Earth revolves around the Sun).

His views were not received well by the Powers that Be of that period, and his "outlier" beliefs led to him living under house arrest until his death, on "Suspicion of Heresy".

However, his "outlier" belief ended up being true!

As more people read his work and gained access to it, they discovered that Galileo (and by extension Copernicus) was correct!

But what would have happened if the "outlier" data was pushed away and made impossible to find?

Well, that is what would happen under the AI Model Collapse.

It could actually be made significantly worse because depending on what data is viewed as the "majority", it could lead to major flaws and biases from whatever the dominant "data sources" become.

Who would it determine is the "minority"?

Simply whoever has created less data. And if history has anything to say about it, some groups have had significantly more of their data accepted than others.

Would it be right for AI to "erase" those minorities just because they are the minority?

It could even be made worse by utilizing "science" to push its "majority" data set.

For those who are unaware, the "Scientific" world is already filled with studies that don't actually prove anything beneficial, yet get used as marketing, and are often designed specifically to produce a marketing message.

A great example of this is the question: is a Snickers Bar a Health Food, or is it Junk Food?

Well, according to some scientific research that was done and (sadly) accepted by the US government, it is "Legally" considered a Health Food!

Common sense tells us that Snickers is obviously Junk Food, but AI doesn't have that kind of ability to distinguish, so it would likely consider Snickers a "Health Food" simply because there is more data to support that.

Then, the AI could use that data to "Teach" us that Snickers actually is a Health Food, utilizing slanted scientific research to prove it!

Someone who is just learning the difference between "Health" and "Junk" food may not really understand yet, and may be relying on the AI to teach them properly. But by then, it's too late.

The AI is already compromised.

This could quickly become a road to Diabetes.

What is the point I am getting at?

Be wary of where you get your information.

Dig deeper.

Between general Misinformation (which is already a gigantic problem), and AI-created Misinformation, we will be in for an... interesting future.
