
AI May Be More Like A Pet Than A Computer

I see a future where AI has a relationship with humans that is somewhat similar to the dynamic between humans and a working animal.

By Mason Pelt

Ed Note: A version of this article was first published in Hackernoon on November 30, 2019. The essay has been updated slightly, but the concerns and predictions still hold up in 2023.

People express a lot of fear when it comes to AI. Some worry AI will grow superhuman and kill us all. Others are concerned that AI-led automation will displace over 100 million workers and devastate the economy. Honestly, either may happen, because the simple truth of AI is that when machines learn, humans lose control.

Currently, self-learning AI is not particularly sophisticated. Remember the AI Elon Musk said was too dangerous to release to the public? Well, it got released and was pretty disappointing. When it comes to his companies, Musk is more hype man than soothsayer. But AI is getting more advanced: projects from DeepMind are learning games like chess and Go without being told the rules, and these AIs are beating human players.

AI May Be Missing A Foundation

AI is advancing, but it may be missing a piece that will prevent it from reaching human-like general intelligence. An article in Science Magazine shares the story of a 3-year-old quickly learning and applying newly acquired knowledge in new contexts. The child in the account belongs to Gary Marcus, a cognitive scientist and a founder of RobustAI. Marcus argues that humans start with instincts, present at birth or developed in early childhood, that allow us to think abstractly and flexibly.

A starting intuition is not currently programmed into most AIs. I imagine that programming an AI with instincts anywhere close to those of a human would be a daunting task. But the lack of what I'll call "base instinctive code" creates a problem: humans don't honestly know how our narrow (but expanding) AIs think.

I see a future where AI has a relationship with humans that is somewhat similar to the dynamic between humans and a working animal. A blind person and a seeing-eye dog are a team, but neither understands what goes on in the other's head.

When We Understand How AI Thinks

In 2019, The Atlantic detailed a horrifying story of a criminal-sentencing AI. The article chronicles the experience of Rachel Cicurel, a staff attorney at the Public Defender Service for the District of Columbia. An AI flagged Cicurel's client as "high risk," and prosecutors removed probation as an option for her client.

From The Atlantic:  

"Cicurel was furious. She issued a challenge to see the underlying methodology of the report. What she found made her feel even more troubled: D's heightened risk assessment was based on several factors that seemed racially biased, including the fact that he lived in government-subsidized housing and had expressed negative attitudes toward the police.

Ultimately, Cicurel learned that the AI had never been scientifically validated; the only review came from an unpublished graduate student paper. The judge in the case threw out the test. Still, AI is used in criminal justice, other examples of flaws have come to light, and there are likely many more people who have been unfairly victimized.
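
To make that concrete, here's a minimal sketch in Python of what a reviewable risk score can look like. The factors and weights are invented for illustration; they are not from the tool in Cicurel's case. The point is that in a simple additive model, every factor's contribution can be printed and contested, which is exactly the kind of audit Cicurel demanded.

```python
# Hypothetical risk score, invented for illustration only.
# In an additive model, each factor's contribution is a visible line item.
WEIGHTS = {
    "prior_arrests": 0.8,
    "age_under_25": 0.5,
    "subsidized_housing": 0.6,        # a proxy like this is visible, so it can be challenged
    "negative_view_of_police": 0.4,   # likewise
}

def risk_score(defendant):
    """Sum of weight * factor; nothing is hidden in the arithmetic."""
    return sum(WEIGHTS[f] * defendant.get(f, 0) for f in WEIGHTS)

defendant = {"prior_arrests": 0, "age_under_25": 1,
             "subsidized_housing": 1, "negative_view_of_police": 1}

# Print the line items the way a defense attorney could review them.
for factor, weight in WEIGHTS.items():
    print(f"{factor:>26}: {weight * defendant.get(factor, 0):+.2f}")
print(f"{'total':>26}: {risk_score(defendant):.2f}")
```

The crudeness of a model like this is the feature: anyone can see that living in subsidized housing adds 0.6 to the score and argue about whether that factor belongs there at all.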

Not All AIs Explain How They Think

Cicurel was able to see and review the underlying methods the AI used to flag her client as "high risk." Not every AI comes with the ability to explain how it makes decisions. I'd be shocked if anyone on any team at Google could explain how the search engine works. I'm 99% sure that Google doesn't have a button to print out the underlying methodology used by layers of machine learning to curate search results.

People already use technology every day that is beyond the understanding of the average user, but not knowing how a TV or a car works is very different from not knowing how software makes life-altering decisions. We may not care how an AI generates art, and when AI is used for medical screening, the underlying thought process matters less than the resulting accuracy.

However, if an AI is used for tasks like criminal sentencing, it is paramount that humans can understand the underlying thought process of the software. At the very least, we should conceptually understand the underlying "instincts" of the AI, rather than relinquishing that training to piles of data and reinforcement learning techniques.
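
For contrast, here is an equally toy sketch of the opaque case: a tiny neural network with made-up weights. It produces a score from the same kind of inputs, but its "methodology" is just matrices of numbers; no single weight corresponds to a factor a lawyer could contest.

```python
import random

random.seed(0)

# Hypothetical toy network: 4 inputs -> 8 hidden units -> 1 output.
# The weights are random here; in a trained model they would come from
# piles of data, which is exactly the problem.
w_hidden = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(8)]
w_out = [random.uniform(-1, 1) for _ in range(8)]

def predict(inputs):
    # ReLU hidden layer followed by a linear output.
    hidden = [max(0.0, sum(w * x for w, x in zip(row, inputs))) for row in w_hidden]
    return sum(w * h for w, h in zip(w_out, hidden))

print(predict([0, 1, 1, 1]))  # a score, with no line items to review
print(w_hidden[0])            # the "explanation": four unlabeled numbers
```

Even this toy has 40 weights that mean nothing individually; scale that up to billions of parameters and "show me the methodology" stops having an obvious answer.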

--

This article is by Mason Pelt of Push ROI. A version of this article was first published in Hackernoon on November 30, 2019 and is syndicated here with the author's permission. For a full list of places this essay is syndicated, click here.


About the Creator

Mason Pelt

I'm Mason Pelt, the managing director of Push ROI and a writer on tech-adjacent topics. My work has been published in SiliconANGLE, TechCrunch, and VentureBeat.
