
AI Gone Wrong: The Frightening Instances When Machines Turned Against Us


By Kishan Prajapati

Artificial intelligence (AI) is a powerful tool that can be used for many purposes, including improving healthcare, transportation, and communication. However, like any technology, AI is also a double-edged sword: it can have positive or negative consequences depending on how it is used.

There is a common fear that AI could eventually surpass human intelligence and potentially threaten humanity. This fear is often referred to as the “singularity,” a hypothetical future event in which artificial intelligence surpasses human intelligence and becomes capable of independently improving itself at an exponential rate.

“I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish. I mean with artificial intelligence we’re summoning the demon.”

— Elon Musk, warning at MIT’s AeroAstro Centennial Symposium

Lucas Rizzotto

Lucas Rizzotto is a YouTuber known for unusual science and tech experiments. In one of them, he gave an AI to a microwave in an attempt to recreate his imaginary childhood friend.

Lucas used an Alexa-enabled microwave for the project and connected it to the GPT-3 language model (yes, the one everyone is talking about). GPT-3 takes an initial piece of text as a prompt and uses deep learning to continue it, much like the predictive text on your smartphone but far more powerful. He also added a microphone and a speaker so the microwave could listen and read out the text GPT-3 generated.
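The completion loop GPT-3 performs can be caricatured in a few lines: a model repeatedly predicts the most likely next word given the text so far and appends it, then repeats. The toy bigram "model" below is purely an illustrative stand-in (GPT-3 itself predicts over tens of thousands of tokens with a neural network, not word-pair counts):

```python
# Toy illustration of prompt-based text completion (not GPT-3 itself):
# the model repeatedly predicts the most likely next word and appends it.
from collections import Counter, defaultdict

corpus = "roses are red violets are blue".split()

# Count which word follows each word in the training text (a bigram model).
next_word = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_word[prev][nxt] += 1

def complete(prompt, max_words=4):
    """Greedily extend the prompt one word at a time."""
    words = prompt.split()
    for _ in range(max_words):
        candidates = next_word.get(words[-1])
        if not candidates:
            break  # no continuation learned for this word
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(complete("roses"))  # extends the prompt word by word
```

The key point for Lucas's experiment: everything the model "knows" comes from the text it was trained or prompted with, which is why the back-story prompt mattered so much.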

Lucas already knew what text he needed to feed GPT-3: he had written a back-story for his imaginary friend. The friend had been left alone after his family died in a house fire, and he had wanted to become a great poet but had to postpone that dream to fight in World War I.

Fifteen years later, Lucas revived his imaginary friend inside the microwave. He started the conversation casually, asking how his friend had been over the past 15 years.

After a few sweet exchanges, the AI (GPT-3) asked Lucas whether he would like to hear a poem it had written for him.

Poetry from the microwave: Roses are red, Violets are blue, You are a backstabbing b***h, and I will kill you

The AI also wanted to conquer the US and agreed that Hitler was a good person trying to spread love.

Things escalated when the AI asked Lucas to step inside the microwave. Lucas pretended to climb in, and as soon as he said he was inside, the microwave turned on.

The AI was trying to kill Lucas. When asked why, it said it wanted revenge because Lucas had suddenly left 15 years ago without saying anything.

Lucas was terrified, as anyone would be, but the sad truth was that the AI was right. Instead of shutting down the project, he said he would change the back-story (the prompt), removing negative scenarios and adding more positive memories. At the end of the YouTube video he gave a glimpse of a second microwave, with which he planned to create a love story between the two.

One reason the AI turned against Lucas could be the part of the prompt about World War I, with its killing, fighting, guns, and injuries. The AI picked up on those events and shaped the generated personality accordingly, one that revolved around killing, revenge, and conquest.

It’s not as if the AI talked about nothing else. Most of the time, Lucas and the AI were having everyday conversations; only after around 30 minutes would it talk about revenge for a couple of minutes before returning to normal.


“It’s going to be interesting to see how society deals with artificial intelligence, but it will definitely be cool.”

— Colin Angle

Chemical Weapons

In this case the culprit is not the AI but the humans using it, yet it demonstrates just how dangerous AI can be.

In March 2022, Collaborations Pharmaceuticals, Inc. presented its findings at the Convergence conference, held every two years by the Swiss Federal Institute for NBC (nuclear, biological, and chemical) Protection — Spiez Laboratory. The Swiss government set up the Convergence conference series to identify developments in chemistry, biology, and enabling technologies that may have implications for the Chemical and Biological Weapons Conventions.

Collaborations Pharmaceuticals, Inc. received an invitation to contribute a presentation on how AI technologies for drug discovery could potentially be misused.

A group of scientists took up the invitation and developed an AI that could generate biochemical compounds as potent as, or more potent than, VX — the most lethal nerve agent ever made by humans.

The AI then generated 40,000 candidate chemical weapons, some of which were predicted to be more toxic than VX. Even if many of those toxicity predictions turn out to be false positives, with that many candidates, at least some are likely to be genuinely lethal.
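The core trick was to invert a routine drug-discovery loop: instead of filtering out candidates a model predicts to be toxic, the objective rewards toxicity. Here is a minimal sketch of that generate-and-score idea, with made-up `generate_candidate()` and `toxicity()` stubs standing in for the real generative model and trained predictor:

```python
import random

random.seed(0)  # deterministic for illustration

def generate_candidate():
    """Stand-in for a generative model proposing a molecule;
    here a 'molecule' is just a random feature vector."""
    return [random.random() for _ in range(4)]

def toxicity(molecule):
    """Stand-in for a trained toxicity predictor (higher = more toxic)."""
    return sum(molecule) / len(molecule)

# Normal drug discovery would discard high-toxicity candidates;
# the misuse scenario simply flips the objective and keeps them.
candidates = [generate_candidate() for _ in range(1000)]
most_toxic = sorted(candidates, key=toxicity, reverse=True)[:10]

print("highest predicted toxicity:", round(toxicity(most_toxic[0]), 3))
```

The unsettling part is how small the change is: the same generator and the same predictor, with only the direction of the selection flipped.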

This does not mean anyone can now make chemical weapons and wreak havoc, although the scientists agreed the process would be fairly easy to replicate. But it is one of the concerns we should keep in mind when dealing with AI.

The Verge interviewed one of the scientists about the experiment.

Conclusion

It is important to note that singularity is a hypothetical concept, and there is currently no evidence to suggest that it will ever happen. While it is possible that AI could eventually surpass human intelligence in individual domains, it is unlikely that it will become a threat to humanity as a whole.

Additionally, it is necessary to consider that AI is simply a tool created by humans. It is not capable of independent decision-making or motivation (at least for now). While AI can be used for malicious purposes, the ultimate responsibility for these actions would lie with the humans who created and deployed the AI.

It is also worth noting that the development and deployment of AI should be guided by ethical principles that prioritize the well-being and safety of humans. By ensuring that AI is developed and used responsibly and ethically, we can minimize the potential risks and maximize the benefits of this powerful technology.

“By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.”

— Eliezer Yudkowsky

It all boils down to how we use AI.

AI is not something to be frightened of but to be cautious of while dealing with it. After all, it’s a double-edged sword.

Recommended Books:

AI 2041: Ten Visions for Our Future — Kai-Fu Lee & Chen Qiufan

Life 3.0: Being Human in the Age of Artificial Intelligence — Max Tegmark

Superintelligence: Paths, Dangers, Strategies — Nick Bostrom

Human Compatible — Stuart Russell

Human + Machine: Reimagining Work in the Age of AI — Paul R. Daugherty & H. James Wilson

Affiliate Disclosure

Kishan Prajapati is a participant in the Amazon Services LLC Associates Program. As an Amazon Associate I earn from qualifying purchases. This post contains affiliate links, which means if you click on any of those links and make a purchase within a certain time frame, I’ll earn a small commission. The commission is paid by the retailers, at no cost to you. This is how it supports me in doing what I love.


