
Really, let an AI write code for you?

How naive can you be?

By Dr. Randy Kaplan · Published 14 days ago · 9 min read

Yes, I started this story with a direct attack on anyone who thinks it is okay to take code generated by an AI on the Internet and use it.

First, I'll pose some questions that you may not have been thinking about. I have a lot of questions about this relatively new practice. So go get a drink, preferably something that will reduce your immediate pain, or the pain you will be in when you read some of the questions I'd like you to answer for me.

Do you want to know a fundamental problem with the AI that we are trusting? The problem is that the AI software has not been exhaustively tested. If you prefer, it has not been vetted. No one has actually tested it. There are tests, but they are from the outside looking into the AI. They are guesses about what may or may not work. Since they are largely done by the companies that created the AI, one must assume their outside-in tests are BIASED. That's right. From what I've read, none of the folks creating this software are testing it — at least not from the inside out. What do I mean by inside out?

Outside-In/Inside-Out

What's the Difference?

When you read about how this software is being tested, you find they are doing what I call external observational testing. This means that testing is carried out by feeding data to a program and watching what the program produces as its output. Take, for example, the phenomenon of hallucinations. A hallucination occurs when an AI program produces a serious output error. The pathology is that the output has nothing to do with the input or with the prompt. This is a typical garbage-in-garbage-out scenario, except that here the input is not garbage per se but the output is garbage. It is computationally incorrect. Here's the problem: the people who developed the software cannot explain the phenomenon of hallucinations. There is no attempt at identifying the code that causes a hallucination. I asked another AI expert about finding these errors and he dismissed the question as if it weren't important. He dismissed it because he SAID IT CAN'T BE DONE. Seriously, it can't be done???
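
To make "outside-in" concrete, here is a minimal sketch of what external observational testing looks like. The query_model() function is a hypothetical stand-in for a call to some LLM; the names are mine, not anyone's actual test suite. Notice that all the test can do is look at the output text.

    # A minimal sketch of outside-in (black-box) testing of an LLM.
    # query_model() is a hypothetical placeholder, not a real API.

    def query_model(prompt: str) -> str:
        raise NotImplementedError("wire this up to an actual model")

    def test_capital_question() -> bool:
        answer = query_model("What is the capital of France?")
        # From the outside, we can only inspect the output text.
        # Nothing here reveals which internal computation produced it,
        # or why a hallucination would occur.
        return "Paris" in answer

    def test_arithmetic_question() -> bool:
        answer = query_model("What is 17 + 25?")
        return "42" in answer

Benchmarks of this kind grade what comes out; they never touch the code or the parameters that produced the answer.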

An Example of One Place Where Testing Is Taken Very Seriously

For a little while, several years ago, I worked for a public utility. This utility has been around for many years. I won't name it, but it is by no means a small utility. They build, own, and operate nuclear reactors.

Courtesy of WHYY at https://www.pbs.org/wgbh/nova/tech/nuclear-control-room.html

In the control room of a nuclear reactor plant, there are many monitors and many controls. The photo above shows part of the control room of a nuclear reactor. What a great place for a bunch of five-year-olds to play around.

So you may notice, off to the left, there are some computer monitors. These monitors are connected to (you guessed it) computers, hopefully, and these computers are monitoring little things like the reactor core, probably to make sure it doesn't overheat, among other things. On the monitor you can see a diagram that represents the core of the reactor. Now suppose that the software monitoring the nuclear reactor was an AI (no, no, no), and suppose that while monitoring it decided to have a hallucination about skiing. That's right, skiing. Why? Because the AI hallucinated how wonderful it would be at that moment to be skiing someplace really cold. The AI, being an autonomous program, decides not to warn the operators in the control room that the reactor core is overheating.

In reality, there are multiple layers of systems in the control room that are not controlled by the computer, and the operator can confirm the temperature of the reactor core by looking at one of the readouts on the wall. That's the way nuclear reactor control rooms are designed: with lots of layers of backup systems. Wow, those engineers. I wish they had designed our modern-day AIs. They probably wouldn't have designed a system that produced unexplainable hallucinations. At least the engineers don't think a meltdown would be funny OR acceptable.

The people who are creating our AI systems today think hallucinations are, well, not that critical. I see no real concern on their part because no one is trying to fix coding errors like this one. And by the way, if there is ONE error, how many others might there be? They don't even have an idea. They are too concerned with creating larger LLMs and getting trillions of dollars to do so.
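
To illustrate the layered idea, here is a minimal sketch of backup layers. The names and the threshold are invented for illustration; real reactor protection systems are of course vastly more elaborate and are not a few lines of Python.

    # A minimal sketch of "layers of backup," with invented names and
    # an invented threshold. The point: the independent, hard-wired
    # layer does not depend on the monitoring software at all.

    CORE_TEMP_LIMIT_C = 350.0  # illustrative number only

    def software_monitor_alarm(reported_temp_c: float) -> bool:
        # First layer: the monitoring program's own alarm logic.
        return reported_temp_c > CORE_TEMP_LIMIT_C

    def hardwired_alarm(sensor_temp_c: float) -> bool:
        # Independent layer: a separate sensor and readout on the wall.
        return sensor_temp_c > CORE_TEMP_LIMIT_C

    def operator_is_alerted(reported_temp_c: float, sensor_temp_c: float) -> bool:
        # Either layer can raise the alarm, so a failure (or a
        # hallucination) in the software layer cannot silence the alert.
        return software_monitor_alarm(reported_temp_c) or hardwired_alarm(sensor_temp_c)

Even if the software layer wanders off dreaming about skiing, the independent readout still trips.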

Just a comment about Microsoft. When Microsoft Windows and Microsoft Word (the Microsoft word processor) were created, they had errors. How did Microsoft know? One way was that they kept finding them themselves; after all, they wrote the code, so they could inspect it from the inside out. The other way is that the user community REPORTED the errors, and this was documented. All errors were documented. That matters for an operating system, which may in some cases cause harm to people if it doesn't run properly. Problems identified in Microsoft Word, on the other hand, may not harm a person but do cause all kinds of problems for users of the software.

Now I ask a very simple question. Where is the documentation for ChatGPT (or any other LLM-based program, for that matter) that records the types of errors that occur? And why isn't it available to the public? Probably because the information is proprietary. Company management does not want this kind of information divulged.

Return to the nuclear reactor example. If the nuclear reactor software had bugs that were found but not documented, is it conceivable that such a system could result in catastrophic harm to the people who live next to the reactor? For the most part, although there are emergencies (think Three Mile Island), they are few (thank god, and the engineers) and far between. The nuclear industry typically does not use any NEW technology unless it has been tested for 10 years. The going story is that when teletype terminals were long gone from public use, they were still being used in nuclear reactor plants.

Now fast forward to AI software. It is STILL HIGHLY EXPERIMENTAL. That characteristic seems to have been lost. It is not being treated as highly experimental because investors want a return on their investment, and therefore AI companies are going to do all that they can to speed their software out the door. That is how they make money. Thank goodness governments are largely in charge of the use and deployment of nuclear technology. To put that technology into the hands of the business community would have been a disaster.

Back to Outside-In and Inside-Out

Typically, errors are explained by understanding the code well enough to give a reason for the result the program computed. AI programs use a different kind of technology than most present-day programs. In the case of an AI program, it is extremely difficult to locate errors in the code because the way computations are carried out is "NON-DETERMINISTIC." This means that while such a program operates, it is extremely difficult to determine what it is doing. We cannot DETERMINE what will happen next.

There are important reasons why an explanation is difficult to formulate. Because the programs utilize probabilities, they are non-deterministic. The majority of current computer software is DETERMINISTIC: its output can be predicted because the behavior of a deterministic program can be predicted. In my opinion, the developers do not appropriately test the AI programs because it is extremely difficult to characterize their behavior due to their non-deterministic nature. To be facetious, why bother to do the requisite testing? It would be difficult and very time-consuming. My AI-expert friend, who summarily dismissed testing of these programs, does not consider this a major problem of AI programs. I would claim that just because something supposedly CAN'T be done (until it is proven that it can't be done), we should still make it a priority. It should never be the case that any AI is given the job of being a nuclear plant operator.
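
Here is a minimal sketch of the distinction, using a toy next-word sampler in place of a real LLM; the vocabulary and probabilities are invented for illustration.

    # Deterministic vs. non-deterministic, in miniature.
    import random

    def deterministic_add(a: int, b: int) -> int:
        # Same inputs always give the same output; behavior is predictable
        # and therefore testable in the conventional sense.
        return a + b

    def sample_next_word() -> str:
        # An LLM-style step: the next token is drawn from a probability
        # distribution, so identical runs can produce different outputs.
        words = ["reactor", "skiing", "temperature"]
        weights = [0.7, 0.1, 0.2]
        return random.choices(words, weights=weights, k=1)[0]

    print(deterministic_add(2, 3))   # always 5
    print(sample_next_word())        # may differ from run to run

A conventional test pins down deterministic_add exactly; pinning down the sampler requires statistical arguments over many runs, which is exactly the kind of testing that takes time and money.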

The development of AI programs can proceed at breakneck speed because they do not have to be tested. Program testing can take as much as 60% to 80% of total development time. The so-called advances in AI technology are treated as more important than ensuring the programs are functioning correctly. This is a prescription for disaster — one that is just waiting to happen. Think of the TERMINATOR scenario.

If what the AI advocates say is true, we are to expect that machines will become intelligent over time. Intelligence in machines will supposedly arise spontaneously, without a requisite explanation. Perhaps magic will be involved. Needless to say, it is difficult for me to accept this spontaneous intelligence theory. Non-determinism is not a kind of magic. It is more a kind of happenstance — a random, unpredictable event.

Some think that human intelligence evolved randomly. Perhaps so. But evolution is based on a decision process. The system makes decisions based on a demonstrable and provable principle, the one called "survival of the fittest." If a system is not designed well, then it will cease to exist in the natural world because it will not survive. With humans the case is different, because humans can override the principle of survival of the fittest. In the end, though, if another entity appears that can somehow put the principle back into operation, then humans will be at risk. Intelligent machines, even these error-prone AIs, may cause humans to no longer exist. Fortunately, that scenario does not have a high probability of occurring - maybe.

It is very difficult to understand why AI scientists are being so casual with their predictions. Various theories posit that human intelligence took a long time to evolve and that some of the events that led to human intelligence were random events. In the end, though, even if random events were involved, they were all vetted by getting an answer to the question, "Is the new version better capable of surviving than the old version?" Even if we consider that we are assisting computers along the way to creating human intelligence, the idea that intelligent machines will arise spontaneously contradicts what we know about human intelligence and its development. Considering that much of what AI involves now is largely not based on any scientific theory, to expect intelligence to develop in machines is as far-fetched as expecting that one day soon we will have time travel. It just cannot happen.

My point is that machine intelligence arriving one day, without any real reason, is simply not possible.

But I digress from the subject at hand, which is using AI technology to write software.

If the technology of AI today creates hallucinations, we can expect AI-generated software to contain such hallucinations.

Furthermore, since AI-developed software is derived from code collected from the Internet by way of a huge net, how can we possibly ensure the quality of what is collected? How can we be sure that the collected software does not have errors? There is no testing of the correctness of this software. It is one thing to collect small pieces of code that solve simple problems like sorting, but quite a different thing to expect an AI programming system to create, for example, an ERP system. The very idea raises some very important questions. Having been a software developer for almost 60 years and knowing the problems involved in software development, how can we expect an AI system to create systems consisting of millions of lines of code? It would be impossible to guarantee that such a system operates without error. Considering that software created by human beings is being used to control automobiles, and that this software is only marginally reliable, to expect high reliability is far from reality. I wouldn't trust a Tesla to drive me anywhere autonomously. Present-day anecdotal evidence about the long-term performance of such systems validates this belief.
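
For the small, simple case, vetting is at least imaginable. Here is a minimal sketch of checking a collected sort routine, with ai_generated_sort() as a hypothetical stand-in for code pulled off the Internet; the check is illustrative, not exhaustive.

    # A minimal sketch of vetting a small collected snippet.
    # ai_generated_sort() is a hypothetical stand-in.
    import random
    from collections import Counter

    def ai_generated_sort(items):
        # Stand-in for code collected from the Internet or written by an AI.
        return sorted(items)

    def looks_correct(sort_fn, trials: int = 1000) -> bool:
        for _ in range(trials):
            data = [random.randint(-100, 100) for _ in range(random.randint(0, 20))]
            out = sort_fn(list(data))
            ordered = all(a <= b for a, b in zip(out, out[1:]))
            same_items = Counter(out) == Counter(data)
            if not (ordered and same_items):
                return False
        return True

    print(looks_correct(ai_generated_sort))

Even this only samples behavior; it proves nothing. Now scale the problem up to an ERP system with millions of lines of AI-generated code, and there is no comparable check at all.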

So, trust an AI to write software that is correct, reliable, and will not cause damage or harm to human beings? I don't think so. I reiterate: if you are going to let an AI write your code, that would be very stupid. Not testing AI programs EXHAUSTIVELY before deploying them is also STUPID, and the excuse that "it isn't possible" should not be a consideration. Perhaps some lab is working on that very problem, because it seems to be an important one right now.

Photo created by the Author using MidJourney. Do not copy or distribute.


About the Creator

Dr. Randy Kaplan

Welcome to my Vocal page and the stories that are published here. Some time ago I started a migration to Vocal, and I have now decided to move ahead. Over the next couple of weeks, I will move my stories to Vocal.
