
Machine Learning? You Can Have It.

Algorithms will save the planet and give you a better life! Or … maybe not. Turns out AI has an Achilles’ heel.

By Hamish Alexander · Published 3 years ago · 5 min read
Image by Comfreak from Pixabay

Alrighty then! First a disclosure. The opinions stated here are those of a flesh-and-blood human being — or, as AI may tell you, a deeply flawed, overly emotional, self-absorbed carbon unit unfit for purpose, when the purpose is designing machine learning algorithms.

AI is smarter, faster, more reliable and less prone to breakdowns than flawed humans. We know this!

Deep Blue is better at chess than Garry Kasparov, right? Stockfish could beat Beth Harmon hands down in a game of chess. Check, and mate.

But wait, there’s more. It’s not all about chess.

Westworld is a great place to spend your next vacation. Westworld hosts don’t need to be tested for Covid, for one thing, let alone vaccinated. Your life is safe with them! Enjoy your stay.

And the trip there, from the taxi to the airport, past airport security and through the check-in console, to the pilot-less plane — it’s all good.

Machine learning algorithms are not like other computer programs, we’re told.

In machine learning, a human programmer — that deeply flawed carbon unit ruled by emotion, i.e., you and me — comes up with a problem to be solved, and then an algorithm figures out a way to solve it through trial and error.
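
If that sounds a little mystical, here is the whole trial-and-error business in a dozen lines of toy Python (my sketch, nobody’s production system). The flawed carbon unit supplies the problem, in this case fitting y = 3x from a handful of examples, and the loop does the guessing and the correcting.

```python
import numpy as np

# Toy sketch of learning by trial and error: guess a weight, measure how
# wrong the guess is on the examples, nudge the guess, repeat.
X = np.array([1.0, 2.0, 3.0, 4.0])
y = 3.0 * X                         # the "right answer" the machine is never told directly

w = 0.0                             # the machine's opening guess
for step in range(100):
    error = w * X - y               # how far off the current guess is
    w -= 0.01 * (error * X).mean()  # adjust the guess to shrink the error

print(f"learned weight: {w:.2f} (the answer a human would have written down is 3.0)")
```

One hundred guesses later it lands on 3.0, and nobody had to tell it the answer.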

This is cool! Machine learning algorithms are already used for facial recognition in airports, language translation on your travel app, financial planning at the accountant’s office, targeted ads on Facebook, and killing jobs. What could be better? Machine learning will solve all life’s problems.

What’s that, you say? Machine learning will solve all life’s problems, but it will also create new problems we haven’t thought about yet.

Really? Why the negativity? Why the hate? The fact you even asked that makes you sound like the overly emotional carbon unit you are.

Image by Gerd Altmann from Pixabay

If transportation jobs — flying planes, driving trucks, delivering that Amazon package — vanish overnight, think of the upside.

No one will ever have to drive a truck again. Today’s truck drivers and airline pilots can find something better to do with their time, such as lining up outside a food bank. Why do you have to be such a downer? A nattering nabob of negativity, that’s what you are.

Machine learning is good! What can go wrong? Algorithms constantly surprise us, in wacky and wonderful ways. Like that image recognition program in 2018 that was designed to recognize sheep, only it somehow confused sheep with rocks.

That was the programmer’s fault. Machine learning is only as good as the flawed programmer who designed it. Do away with all the carbon unit programmers, then, and let AI be AI.

Now we’re talkin’.

Machine learning is the new iPhone. Or the new Samsung Galaxy, if you're going to be difficult about it.

Image by Gerd Altmann from Pixabay

What’s that, you say? You’re telling me a team of Google engineers — those losers — recently identified a new weakness at the heart of the machine learning process that threatens to upend the entire programming model. There you go again with the negativity.

Yes, yes, so small changes to an image — that a human would either spot in a heartbeat or choose to ignore — can cause a machine to misidentify it completely.
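
For the skeptics in the cheap seats, here is roughly what that pixel-nudging trick looks like in code. It’s a bare-bones sketch of the classic fast gradient sign method against an off-the-shelf PyTorch classifier, my illustration rather than anything the Google team actually ran; the random tensor and the label index are stand-ins.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Bare-bones pixel-nudging sketch (illustrative only): push every pixel a tiny
# step in whichever direction increases the classifier's loss, then ask again.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in for a real photo
label = torch.tensor([0])                               # stand-in for the true class

loss = F.cross_entropy(model(image), label)
loss.backward()

epsilon = 0.01  # far too small a change for a human to notice
nudged = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0)

print("before:", model(image).argmax(dim=1).item())
print("after: ", model(nudged).argmax(dim=1).item())
```

Swap in a real, properly normalized photograph and the “before” and “after” labels can come apart over a change no human eye would register.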

Okay, okay, that has potentially serious implications — like the time that AI diagnostic tool diagnosed you as showing symptoms of the common cold and sent you home with a bottle of cold tablets and some Vitamin C, when what you really had was the Covid. Why sweat the details?

Besides, there’s the bigger picture to think about. The world is overpopulated as it is. The herd could use a little weeding out. See? These AI diagnostic tools have thought of everything.

Image by Gerd Altmann from Pixabay

So … what did the Google team find, exactly?

Underspecification.

That’s new to us. Please pause for a moment while we check our linguistic memory banks. Underspecification. Hmmm.

Underspecification.

Machine learning is fed data so that it can analyze information and, based on that information, predict likely outcomes.

A machine learning program, fed the right information, can — in theory — predict the course of a pandemic.

Problems arise when the data doesn’t give the program enough to go on — underspecified, in other words — so that any number of different models can pass the same tests with flying colors, yet behave in completely different ways out in the wild.

And that happens more often than you might think.
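
Here is a toy sketch of the shape of the problem, mine rather than the Google team’s, using scikit-learn. Two input columns carry nearly the same signal, so the training data never says which one the model should trust; every random seed produces a network that aces training, and the print line shows how far apart they can land once the two columns stop agreeing.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Toy illustration of underspecification: identical training data, different
# random seeds, near-identical training scores -- and no guarantee the models
# agree once the inputs drift away from what they saw in the lab.
rng = np.random.default_rng(0)

signal = rng.normal(size=500)
X_train = np.column_stack([signal, signal + 0.01 * rng.normal(size=500)])  # near-duplicate features
y_train = (signal > 0).astype(int)

drift = rng.normal(size=200)
X_drift = np.column_stack([drift, -drift])  # deployment data where the two features disagree

for seed in range(3):
    model = MLPClassifier(hidden_layer_sizes=(16,), random_state=seed, max_iter=3000)
    model.fit(X_train, y_train)
    print(f"seed {seed}: train accuracy {model.score(X_train, y_train):.2f}, "
          f"fraction flagged positive after drift {model.predict(X_drift).mean():.2f}")
```

Nothing in the training set settles which column to lean on, so the random seed settles it instead, which is more or less what underspecification means.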

That’s fine for the lab. It’s easy enough to spot a problem in the lab and revise a program to compensate for any shortcomings, provided we spot those shortcomings in time.

Real-world application is a whole other story, however. Once the program is out in the real world, real life-and-death situations are inevitable. Hey, pal, your karma ran over my dogma.

Underspecification, it turns out, occurs in an unexpectedly wide range of learning scenarios, from clinical diagnoses based on electronic medical records to language processing, which can get a bit gnarly if, say, you use a language app at a tense border crossing, as one Palestinian man did, to his regret, at a 2017 crossing into Israel proper.

But wait, there’s more.

The Google researchers found that the complexity of modern machine learning models virtually guarantees that some aspect of the model will be underspecified. Oh, simcha!

Machine learning has an Achilles’ heel, it turns out, and it’s a biggie. Machine learning has limitations in the way it interprets certain types of information — are those sheep in that grassy field, or rocks? — and will require a major rethink when it comes to such trivial matters as medical imaging, self-driving cars and nervy border crossings. Who knew?

Google, as it happens. Imagine that.

Welcome to Westworld. Enjoy your stay.

