
The AI That Never Was

An essay challenging AI scientists to prove that it is possible to achieve computer intelligence through their data-based approaches to AI

By Dr. Randy Kaplan

The idea that computer intelligence can be achieved this way is highly questionable, because the only thing the data-based programs do is copy words through a word-selection procedure. How will computer intelligence come from copying words, except by sheer random chance?
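To make the point concrete, here is a minimal sketch of the kind of word-selection procedure I mean, assuming a toy vocabulary and made-up probabilities (real systems draw from tens of thousands of tokens, but the principle of weighted random selection is the same):

```python
import random

# A toy vocabulary with made-up probabilities for the next word.
# Real language models perform the same kind of weighted draw over
# a vocabulary of tens of thousands of tokens.
next_word_probs = {
    "cat": 0.45,
    "dog": 0.30,
    "car": 0.15,
    "moon": 0.10,
}

def select_next_word(probs):
    """Draw one word at random, weighted by the model's probabilities."""
    words = list(probs)
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

# Each call copies one existing word out of the vocabulary;
# chance decides which one.
print(select_next_word(next_word_probs))
```

Nothing in this sketch composes a thought; it only reproduces words that already exist, weighted by chance.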

I cannot prove this in general; I can only examine today's AI algorithms, and when I do, I see a highly sophisticated copier. Copiers do not create what they reproduce.

The current approach to AI will eventually show its limitations. What is being done today is based on something other than science. It is based on fantasy and on a mission to squeeze every penny out of the effort until the next AI winter sets in and the money being pumped into AI dries up. Various sources have suggested that this approach is already exhibiting limitations. The dreaded winter is beginning to show signs of its coming. As long as the world believes in this fantasy version of AI, money will pour into this folly (neural networks, generative AI). It is just a group of people with lots of data to manipulate, developing novel ways of "playing" with that data. And to get that data, they steal it from the people who created it. OpenAI, to name one company, justifies this crime with excuses about why it is essential, but the theft is not justifiable.

Not one of the results they are achieving is based on well-founded theory. The saying "standing on the shoulders of giants" does not apply to how today's AI was formulated.

Let me bring up one significant problem with this AI. This problem is one of many.

Hallucinations.

Why does the current AI hallucinate? Do any of the current generation of data "scientists" know why this version of AI hallucinates? Can they explain why?

If a person hallucinates, we know they have a problem. At a minimum, when a person hallucinates, we try to "fix" (aka help) the problem. Today, these "fixes" are called interventions. An intervention is based on actual science and experience.

In this case, these problems cannot be fixed, because the people involved in this effort have yet to define what intelligence means. Without a definition, how can a goal possibly be set?

Even what the data scientists call testing does not approach the rigor the software industry has developed for software development. If we are to have any faith in this effort, someone had better show the world the tests and the results of the testing they are supposedly doing. If no such evidence can be shown, the effort has no validity. We would not accept even the most basic accounting system without sufficient testing.
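For contrast, here is a minimal sketch of the kind of test the software industry expects, using a hypothetical accounting routine (the function and figures are invented for illustration): the expected output is fixed in advance, and the test fails the moment the program deviates. Nothing comparable exists for a program whose answers vary from one run to the next.

```python
import unittest

def apply_credit(balance_cents, amount_cents):
    """Credit an account; the result is fully determined by the inputs."""
    return balance_cents + amount_cents

class TestAccounting(unittest.TestCase):
    def test_credit_is_exact_and_repeatable(self):
        # The expected result is specified before the program runs,
        # and it must hold on every run, forever.
        self.assertEqual(apply_credit(10_000, 2_500), 12_500)

if __name__ == "__main__":
    unittest.main()
```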

So here, in the current version of AI, we cannot explain why these computer programs hallucinate. And they are only computer programs.

There is nothing different or unique about these AI programs. They consist of instructions that cause computers to do stuff (a technical term for the result of a series of computer instructions) that we want computers to carry out. Computer scientists worry when a computer program produces a result that cannot be explained, because programmers are supposed to understand what their program's instructions do. We have a term for these erratic and unexpected behaviors: they are called bugs,* or errors in the program.

The term "bug" refers to an error in the program(a list of instructions) written by a programmer or programmers. Bugs in the program translate into instructions that have been specified that are incorrect—programmers (those who write the instructions that computers are meant to follow). Programmers spend a lot of time writing programs for the computer and also use significant amounts of time seeking out and fixing bugs. While fixing, evidence is collected that the system IS FUNCTIONING AS EXPECTED. Companies like

__________________________

*Attribution of the term "bug." The term is often traced to a story about Grace Hopper (the first lady of programming): in 1947, a moth was found in a relay of the Harvard Mark II computer, disrupting its operation, and it was taped into the logbook as the "first actual case of bug being found." The word "bug" for a defect actually predates the incident, but the story, and Hopper's retelling of it, popularized "bug" and "debugging" in computing.

Companies like OpenAI (and other companies that engage in AI software development) should show evidence of sufficient testing and of the requisite correction of program bugs.

Unfortunately for the public, those building applications on this largely untested software do not know the ramifications of relying on it.

Why isn't this true of the programs doing these so-called amazing things? Does the fact that these programs seem to have conquered some of AI's challenges excuse them from any testing? Why don't we consider it a priority to determine the cause of these bugs in neural networks? Why do we instead accept them as a step in the evolution of computer intelligence? What scientific theory supports this belief in faulty programming?

Now, let's consider this last statement. Why do we think the bugs we encounter today are a positive feature and not a negative feature of this AI software? One statement made to justify not testing this software is that no one understands why these problems are occurring. Another dubious statement about the software justifies these bugs by making them special "features." And no one seems to care. If these programs make money for the AI companies, they will avoid the whole testing question. At least, this is what it looks like.

OpenAI and similar organizations think this error (I prefer the term aberration) will lead to AGI, Artificial General Intelligence, the holy grail of AI scientists. The scientists in these organizations believe the intelligence that computers will demonstrate will arise spontaneously from their software.

The furious response to these behaviors is the claim that we will achieve computer intelligence from program anomalies like hallucinations. I wonder how we will reach the spontaneous development of machine-based human intelligence from these error-ridden programs when even their authors seem to have no idea why or when the problems appear.

I have an important saying about bugs: if you find one bug, more will follow; the one that appears is only the first you've recognized. Furthermore, any single bug may indicate a more significant problem with the software. And given the architecture of the present state-of-the-art computer, the behavior of that computer should always be predictable.

From the beginning of this version of data-based intelligence, I have questioned why anyone thinks these programs will magically develop intelligence on their own. They are programs — nothing more than a list of instructions that direct the computer to do certain things, and those things have been fixed (as in static) since the first program was written more than 60 years ago. There is nothing to contradict this essential behavioral imperative. We expect a computer program to carry out the instructions we have written for it.

Unfortunately, data scientists believe they have changed this. Somehow, their software will not follow the precise behavior of computers that we have come to expect. Somehow, their software will evolve and not follow the instructions specified by programmers; instead, it will follow the instructions the computers can "invent." The act of invention is evidence of intelligent behavior. Invention is a kind of creativity. Computers cannot create. They can only follow the instructions they are given.
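The point is easy to demonstrate. Here is a minimal sketch of that determinism, using Python's pseudo-random number generator as a stand-in for the "randomness" inside AI systems: seed it identically and it repeats itself exactly, because it, too, is only following instructions.

```python
import random

# Even the "randomness" in software is a fixed procedure: a pseudo-random
# number generator. Given the same seed, it produces the same sequence,
# run after run. Nothing is invented.
random.seed(42)
first_run = [random.random() for _ in range(3)]

random.seed(42)
second_run = [random.random() for _ in range(3)]

assert first_run == second_run  # identical every time
print(first_run)
```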

So, if this is the case, I challenge the current crop of AI scientists to show me the machine instructions that will produce this desired creative behavior.

For this to happen, there must be code (program instructions) that somehow, in some way, produces spontaneous behavior. But here is a catch-22, a conundrum. Spontaneous behavior of the kind being described will necessarily be new. It would have to be implemented either by changing the fundamental instructions the computer can interpret or by a spontaneous change to the hardware that allows the new behavior. I contest this notion and ask the data scientists to show me how it will happen. As with all computer programs, including these data-science programs, it should be possible to demonstrate these newly created behaviors.

Please prove you have written the program instructions necessary for this to happen, that is, for a computer to be able to modify its own hardware. After all, it should be a requirement that they demonstrate the particular aspect of their data-science programs that will accomplish spontaneous behavior. Isn't this why we are dumping money into data-based AI? Humans can't "change" their hardware, but they can modify their thinking. Of course, thinking is part of HGI (Human General Intelligence).
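Even so-called self-modifying code does not escape this conundrum. Here is a minimal sketch, assuming "self-modification" means generating and executing new code at run time: the "new" behavior below was spelled out in advance by a programmer, so the program invents nothing.

```python
# The "new" function this program creates for itself was written,
# character for character, by a programmer ahead of time.
new_behavior = "def greet():\n    return 'hello'"

namespace = {}
exec(new_behavior, namespace)  # the program "modifies" itself at run time
print(namespace["greet"]())    # yet only as its fixed source dictated
```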

If the AI data scientists cannot demonstrate and show us the necessary code, then there is no way present-day computers can achieve this result. Following the present course of AI, we will head into a frigid AI winter indeed.

Thank you for reading my article. I appreciate your time and attention. Please leave comments and other things to let me know you've been here.

Thanks, Randy
