
The Wickedness of A.I.

Machine Uprising on the Horizon?

By Tomás Brandão

As with most things in life, "there's no rose without a thorn"; in other words, this tale has its share of negative points, and they don't exactly sound reassuring, quite the contrary. Some of them have even been the subject of works of fiction in TV series, games, podcasts, you name it, it has been done; movies like The Matrix, The Terminator, A.I. Artificial Intelligence, 2001: A Space Odyssey, and the film adaptation of Marvel's Avengers (in which the main villain is an "evil" artificial intelligence). Almost every apocalyptic scenario, every "the machines turn on the humans who created them" scenario, has been covered in one way or another. These works of fiction can, to some extent, be seen as cautionary tales about what may come if ethics and caution are put aside.

But these problems aren't purely fictional: figures like Elon Musk and Stephen Hawking have been working to create safeguards in case we are indeed walking into any of the aforementioned scenarios.

There is even an open letter, signed not only by these two but also by numerous others, arguing that A.I. research should put humanity's well-being above all else. This document came in response to an almost comical episode in which an A.I. program, asked "Is there a God?", promptly replied, "There is now."

But before covering possible code errors, machines gaining free will, and God complexes, there is a need to address the human factor.

This factor can be divided into two main parts: the creator and the hacker.

So far, the crushing majority of software is human-created, and even software created by other software is nothing short of a human byproduct, which makes any triumph or accomplishment of such a program in a way human; the same can be said of any harm it causes. And this is the main point: human integrity and righteousness have to be at the forefront of the creation of artificial intelligence. Unfortunately, this is very difficult to control, and given the right tools, there are individuals who can create pieces of intelligent software that can or will in some way harm other humans, whether physically, financially, intellectually, or even emotionally. On the same note, we should include the perversion of existing software by those who wish to gain control of it and ultimately produce the same kind of negative consequences. This will happen if, when creating software, safety isn't a priority, leaving it open to attacks by hackers.

Lastly, and the most popular of the fictional scenarios, is the pursuit of intelligence by the software itself, and ultimately the development of a consciousness. This is well demonstrated in both The Matrix and The Terminator. In the latter, a piece of software becomes self-aware and, through a communication network resembling the internet, begins to rain destruction down on humanity. Although apocalyptic, this alternate reality is not as far-fetched as it might sound, and the aforementioned open letter (the one signed by Musk, Hawking, and many others) is almost proof enough; at the very least, we should be concerned by their concern. An article in The Huffington Post lists some points that help us relate the Terminator's reality to our own. Some of these parallels have already been mentioned, such as the ever-learning character of some pieces of software; others include the existence of military drones and exoskeletons that grant their users more strength, protection, and/or speed (already in use today), and the fact that some robots currently in the prototype phase are becoming more and more human-like, at least in appearance. Once more, on a cautionary note, the article ends with the same open letter that has been mentioned throughout this text.

Bibliography

The Huffington Post (2015). 5 Reasons Why We All Need To Take Artificial Intelligence More Seriously. [online] Available at: http://www.huffingtonpost.com/2015/06/22/skynet-real_n_7042808.html [Accessed 8 Jan. 2017].

D'Orazio, D. (2014). Elon Musk says artificial intelligence is 'potentially more dangerous than nukes'. [online] The Verge. Available at: http://www.theverge.com/2014/8/3/5965099/elon-musk-compares-artificial-intelligence-to-nukes [Accessed 8 Jan. 2017].

Musk, E., Wozniak, S., Hawking, S., et al. (2014). AI Open Letter - Future of Life Institute. [online] Future of Life Institute. Available at: http://futureoflife.org/ai-open-letter/ [Accessed 8 Jan. 2017].

Russell, S., Dewey, D. and Tegmark, M. (2015). Research Priorities for Robust and Beneficial Artificial Intelligence. AI Magazine, [online] Winter 2015. Available at: http://futureoflife.org/data/documents/research_priorities.pdf [Accessed 8 Jan. 2017].


About the Creator

Tomás Brandão

Jack of all trades, but master of none; Communications student and freelance writer. Trying to change the world by starting to change myself.
