With more and more of our attention being drawn towards discussions on the emergence of artificial intelligence (AI), everyone is knocking on the doors of our pre-eminent physicists and tech experts (techsperts), and the internet is being scoured for quotes to support one camp or the other. When asked about AI, Elon Musk said that he thinks we “should be very careful… If I were to guess at what our biggest existential threat is, it’s probably that.” Bill Gates and Stephen Hawking agree with Musk, while Neil deGrasse Tyson expresses equal concern, but adds that “as long as we don’t program emotions into robots, there’s no reason to fear them taking over the world.”
AI in Pop Culture
Our movies echo these concerns, as Hollywood banks on our fears and produces dire apocalyptic films such as the Terminator and Matrix series. In both series, robots of both the anthropomorphic and non-anthropomorphic variety decide that humanity is no longer worthy of existence and set out to eradicate us. Although this is our most obvious fear, Elon Musk and Neil deGrasse Tyson once joked during a podcast about humans becoming the pets of AI-endowed robots. This is reminiscent of the 1993 Porno for Pyros song "Pets," or perhaps we can hear the voice of The Simpsons' news reporter Kent Brockman quickly proclaiming, "I, for one, welcome our new robot overlords." (Yes, I am aware the original line was "insect overlords.")
As for the possibility of creating an AI that is superior to humans, the discussion becomes less absolute and less filled with specifics. In an interview on TechTV's The Screen Savers, Michio Kaku talked about the improbability of AI reaching a level comparable with human intelligence. He compared the intelligence of the Mars Rover (and most of our other AI) to that of a mentally impaired cockroach. He said the key to achieving a higher form of AI lies in advancements in quantum computing, but he went on to state that human intelligence is more than just the computing speed of our brains, and that there are therefore still unanswered questions pertaining to independent AI. In "The Emperor's New Mind," mathematical physicist Roger Penrose makes a similar case, arguing that human consciousness is not based on algorithms and therefore cannot be imitated by computers.
In Ex Machina, Nathan (played by Oscar Isaac) points to the AI brain he created and, when asked about the hardware, corrects his guest and calls it wetware instead. Wetware describes a biological brain, and it is a term sometimes tossed about in the AI community as a theoretical means of achieving strong AI: an AI capable of at least human-level intelligence. (Weak AI, on the other hand, is non-sentient computer intelligence, which is all we currently have.) This detail is left unexplained in the movie; instead the film focuses on testing the AI, Ava, by deliberately making her hostile to her creator and watching as she attempts to manipulate a human into assisting her escape. The film depicts tragedy at the hands of an AI on a smaller scale, and considering the circumstances surrounding Ava's creation and testing, it is difficult to see it purely as a tragedy rather than as yet another example of human stupidity and cruelty.
Defusing AI Threats
Biologist Edward O. Wilson does not believe that any of the negative consequences described above will occur, for the simple reason that we will program barriers into our AI creations and will always maintain control over them. This viewpoint exhibits an understanding of our need for control, as well as our capacity for foresight and precautionary measures; however, it also exhibits a naiveté in assuming that humanity is a singular, cohesive unit in which no member will deviate from an agreed-upon course of action. What precautions can we take against our own hostility, our own greed, our own lust for control over other humans, and our own stupidity?
Although I disagree with the scope of Dr. Wilson's argument, he is precisely the type of expert whose advice we should be seeking. Whether we are technologically capable of creating AI is a fascinating question, and its answers truly are in the minds of the physicists and techsperts. The various possible ramifications of creating AI are extensively (albeit darkly) covered by science fiction writers, and although I am personally interested in what else can be written, that territory seems sufficiently covered. However, we never ask whether we should create AI, or whether we are currently ready for such an advancement. Ignoring these questions is the height of arrogance, and if you enjoy science fiction or history or any literature at all, then you know that periods of arrogance are always followed by a horrific fall.
To answer these questions, we can listen to Neil deGrasse Tyson and Elon Musk. For a more in-depth analysis, however, we would be better off interviewing Dr. Wilson, or geneticist Craig Venter, who has already created artificial life. The answers to these questions are in the minds of philosophers like Alain de Botton or Félix Guattari. Perhaps the best person to ask is famed anthropologist and primatologist Jane Goodall. Her work studying the primates of Tanzania, her studies in conservation, and her time on the board of the Nonhuman Rights Project have given her a unique and piercing perspective on human nature. And it is our nature that must be unraveled and improved upon before we can leap into a world where we are the creators of hyper-sophisticated intelligent beings.
Irresponsible Amounts of Power
Developing strong AI at this point in time would be like the South developing the atomic bomb during the Civil War. It would be horrifically irresponsible to grant ourselves such power while we are still so barbaric. We are quite enamored with the gadgets we create to assist our work and entertain ourselves, yet we are no more evolved intellectually or ethically than we were in the 1950s and 1960s. Parts of our world are capable of acknowledging that multiple forms of inequality still exist, but we are powerless or uninspired to do anything about them. We prey on weaker countries and peoples in order to exploit their natural resources, and deep down one set of humans with a moderately homogeneous physical appearance considers itself superior to another set with a slightly different appearance. We create laws banning the killing of humans who reside within a political territory, then kill people outside that border in droves. We pollute the planet as if we could move to another at will, then spend mountains of cash and resources curing some of the diseases caused by our own activities.
We are a species of inconsistencies, with a barbaric nature and a long history of being untrustworthy. With such suspect motives and being always at the whim of our baser instincts and emotions, how can we possibly program an AI that will not look upon us as unworthy of coexistence?
We are in need of further evolution: physical, intellectual, and emotional. To believe that we are already perfected creatures, capable of creating and controlling a power such as AI, is extreme arrogance, the kind that would make for a grand and terrible planetary demise like those we have already seen in so many films.