Artificial Intelligence: The Threat of Exposure
An Academic Essay Exploring This Emerging Potential Threat
At its core, artificial intelligence (AI) uses pattern-recognition software to simulate natural intelligence (NI). Combined with a database of trial and error already performed by humans, an AI can arrive at the best available conclusion, and the best possible outcome, without repeating earlier errors.
AI is best personified by the classic phrase "mother knows best." And while mother's experiences lend weight to her advice, mother can also overstep and interfere, detracting from the authentic, individual human experience. If we view AI as a one-dimensional line, limited by its physical capabilities, with only two directions to attempt before the preferred direction becomes clear, AI is harmless. As software within an online platform, with access to a vast number of databases, including interactions with real people, AI becomes a threat.
What is and is not ethical is determined by NI, specifically human intelligence, which develops through beliefs, experiences, and behaviors. An AI can be given a belief, and it may encounter data that challenges that belief; however, what makes that belief right or wrong is grounded in experience, and for that, NI is needed.
IBM’s Watson computer system and its Watson Personality Insights service were “designed to generate psychological profiles automatically on the basis of unstructured text extracted from mails, tweets, blog posts, articles, and forums” (Gaggioli 357).
From that information, Watson could determine personality, thinking style, social connections, and variations in emotional stress (357). In an Orwellian scenario, an AI like Watson has the power to determine a person’s psychological traits and relay that information to an institution that associates certain traits with potential threats, denying the user on the other end any benefit of the doubt.
As more people, and younger technology users in particular, look to the internet to help make important decisions, AIs will replace the need to experience a scenario in real time in order to assess whether the steps leading to an outcome could have been different. It becomes a battle between what one should do and what one wants to do.
As with the ethical dilemma mentioned earlier, what one should do is not always right, what one wants to do is not always wrong, and vice versa. In the age of virtual reality (VR) online dating, AIs have already begun chatting with online dating users.
Of particular concern are the “sexbots” that limit a user’s access to “her” by pushing premium services (Wiederhold 297). An additional concern is that similarly programmed AIs could expose members of the LGBTQ community in countries where homosexuality is banned (297).
A further concern is AIs being able to measure “how a virtual date is going, what a user is feeling,” as well as “real-time dopamine and oxytocin levels” (297). This strips users of discretion while they decide their next course of action, at a moment when their body may not be fully aligned with their internal effort to work out the situation.
AIs are programmed to adapt. However, because certain adaptations can be programmed in or restricted, not every AI the public interacts with will have its users’ best interests in mind. The Internet of Things enables positive and negative scenarios faster than we can fully comprehend them. And with AIs involved, that can lead to threatening situations.
Gaggioli, A. “CyberSightings.” Cyberpsychology, Behavior, and Social Networking, 2016. Accessed 27 Nov. 2017.
Wiederhold, B.K. “VR Online Dating: The New Safe Sex.” Cyberpsychology, Behavior, and Social Networking, 2016. Accessed 27 Nov. 2017.