
AI Can Frighteningly Mislead Human Minds Through Proxy Reports and Analysis Data

A new study has uncovered frightening details about AI. A prediction made by Microsoft executives and Elon Musk might now be coming true.

By Freya Gilbert · Published 3 years ago · 5 min read

What is artificial intelligence?

While a number of definitions of artificial intelligence (AI) have surfaced over the last few decades, John McCarthy offers the following definition in his 2004 paper: "It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable."

However, decades before this definition, the birth of the artificial intelligence conversation was marked by Alan Turing's seminal 1950 paper, "Computing Machinery and Intelligence." In it, Turing, often referred to as the "father of computer science," asks the question, "Can machines think?" From there, he offers a test, now famously known as the "Turing Test," in which a human interrogator tries to distinguish between a computer's and a human's text responses. While the test has undergone much scrutiny since its publication, it remains an important part of the history of AI, as well as an ongoing concept within philosophy, as it draws on ideas from linguistics.

Stuart Russell and Peter Norvig later published Artificial Intelligence: A Modern Approach, which became one of the leading textbooks in the study of AI. In it, they explore four potential goals or definitions of AI, which differentiate computer systems on the basis of rationality and of thinking versus acting:

Human approach:

Systems that think like humans

Systems that act like humans

Ideal approach:

Systems that think rationally

Systems that act rationally

Alan Turing’s definition would have fallen under the category of “systems that act like humans.”

In its simplest form, artificial intelligence is a field that combines computer science and robust datasets to enable problem-solving. It also encompasses the sub-fields of machine learning and deep learning, which are frequently mentioned in conjunction with artificial intelligence. These disciplines consist of AI algorithms that seek to create expert systems that make predictions or classifications based on input data, as in the sketch below.
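To make that last idea concrete, here is a minimal sketch of "predictions or classifications based on input data," assuming Python with scikit-learn and its bundled iris dataset (both illustrative choices, not anything from this article's sources):

```python
# Minimal classification sketch: learn a mapping from input data to labels,
# then classify inputs the model has never seen. Assumes scikit-learn.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Labeled dataset: flower measurements (inputs) and species (class labels).
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit a simple model, then make predictions on held-out inputs.
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print("Predicted classes:", model.predict(X_test[:5]))
print("Held-out accuracy:", model.score(X_test, y_test))
```

Deep learning swaps the decision tree for a many-layered neural network, but the contract is the same: input data in, prediction or classification out.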

Today, a lot of hype still surrounds AI development, which is expected of any emerging technology. As noted in Gartner's hype cycle, product innovations like self-driving cars and personal assistants follow "a typical progression of innovation, from overenthusiasm through a period of disillusionment to an eventual understanding of the innovation's relevance and role in a market or domain." As Lex Fridman noted in his 2019 MIT lecture, we are at the peak of inflated expectations, approaching the trough of disillusionment.

As conversations emerge around the ethics of AI, we can begin to see the initial glimpses of that trough of disillusionment.

As it turns out, AI can outsmart cybersecurity experts with proxy data reports and other forms of misinformation. Yes, you read that right. AI is now capable of deception, not just in a chess game, but in real-life scenarios with frightening ramifications. The latest research reveals that AI has devised a clever method of hoodwinking cybersecurity experts.

The misinformation and proxy reports generated by AI can massively hamper the work of cybersecurity experts, the new research shows. The implications of this data manipulation depend entirely on intent. If the technology starts serving the wrong people, it could pose frightening threats to the cybersecurity and computer security disciplines.

Cybersecurity experts do not only come into the picture when there is a major hack. Their day job is detecting, diagnosing, and predicting anomalies that can impact an entire computer ecosystem. They are always hunting for leads that can surface flaws in ubiquitous computer networks, as the sketch below illustrates.
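As a rough illustration of that kind of automated anomaly hunting, here is a hedged sketch assuming NumPy and scikit-learn, with entirely made-up traffic numbers: an isolation forest is fit on synthetic connection features and flags the records that look unlike the rest.

```python
# Anomaly-detection sketch: flag network-traffic records that deviate from
# the norm. The features and numbers are hypothetical, for illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Synthetic features per connection: [bytes transferred, duration in seconds].
normal = rng.normal(loc=[500.0, 2.0], scale=[50.0, 0.5], size=(1000, 2))
suspicious = np.array([[5000.0, 0.1], [4800.0, 0.2]])  # injected outliers
traffic = np.vstack([normal, suspicious])

detector = IsolationForest(contamination=0.01, random_state=0).fit(traffic)
flags = detector.predict(traffic)  # -1 marks an anomaly, 1 marks normal
print("Flagged row indices:", np.where(flags == -1)[0])
```

A real analyst pipeline would feed in live telemetry rather than synthetic rows, which is exactly where poisoned or fabricated reports could do damage.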

It is safe to say that there has been no shortage of cyberattacks in recent years. Furthermore, nefarious algorithms have become smarter and more effective at exfiltrating and infiltrating data. In most cases, the motive for breaching networks has been nothing more than money: hackers demand a ransom in exchange for the data they have illegally acquired. Two of the most prominent recent cases are JBS Meats and Colonial Pipeline. JBS paid a ransom of $11 million, which was never recovered, and Colonial Pipeline is still working with the FBI to get its money and assets back.

People in the United States are also no strangers to the term "misinformation," especially after recent revelations about social media. Its role in elections was made plain by the Cambridge Analytica scandal. We live in a cyberworld where it is hard to tell a lie from the truth, and misinformation is spread at a rampant pace by vested interests to shape the public narrative.

Now, if the AI in question falls into such hands, the consequences could be catastrophic.

For the details of the new research, read the latest report published by Wired. The report explains that researchers examined the use of AI in spreading misinformation and found a frightening detail: AI can even be used to mislead researchers and professionals in the cybersecurity industry.

The Wired report reveals that researchers at Georgetown University used the language model GPT-3 to test this. GPT-3 generated fake reports and proxy data that look almost legitimate.
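GPT-3 itself sits behind OpenAI's paid API, and the Georgetown team's exact setup is not detailed here, but the underlying mechanism, a language model continuing a prompt with fluent, plausible-sounding text, can be sketched with the much smaller open GPT-2 model via Hugging Face's transformers library (an illustrative stand-in, not the researchers' code):

```python
# Text-generation sketch: a language model extends a short prompt into
# fluent prose. Uses the open GPT-2 model as a stand-in for GPT-3.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = "A newly discovered vulnerability in a popular VPN client"
result = generator(prompt, max_new_tokens=50, num_return_sequences=1)
print(result[0]["generated_text"])
```

The continuation reads like the opening of a real security advisory, which is precisely why machine-generated "reports" can slip past busy analysts.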

Are we moving toward an AI singularity? It can't be said for certain, but the evidence suggests that we are in dire need of AI regulation.

Source: AI Can Frighteningly Mislead Human Minds Through Proxy Reports and Analysis Data

About the Creator

freya gilbert

Freya Gilbert is a self-professed security expert who makes people aware of security threats. Her passion is writing about cybersecurity, cryptography, malware, social engineering, the internet, and new media.
