
"That AI bill that has Big Tech panicked"

All Innovation in the U.S. Will Stop if This Bill Is Enacted

By Dr. Randy Kaplan · 7 min read
Sam Altman's Appetite

If Big Tech is panicking over a bill that is nothing more than common sense and protects people, then I wonder how much consideration and care these "Big Tech" people have for their fellow human beings.

It strikes me that technologists in business are no better than anyone who has historically sought to exploit people for their own ends. Think about "big business" and the oil companies that, for years, have promoted damage to the Earth and the people who populate it.

Elon Musk may think his electric cars benefit humankind, but what about the accumulation of depleted lithium batteries that will be left to pollute the Earth? Eventually, these depleted batteries will cause even more damage to the planet and its inhabitants. Oh well, Elon is only thinking about ... what is he thinking about? In any event, he will be living on Mars soon. Perhaps he will take his battery trash to Mars as well.

I recently read an article entitled "The AI bill that has Big Tech panicked," which asks why some tech leaders are so worried about a California AI safety bill (a bill, incidentally, that the United States as a whole should also be considering).

The question is, why is Big Tech so worried about this bill? Regardless of what they argue, considering the future implications of what we are doing NOW IS IMPORTANT if we want our planet and our children to survive. Of course, if you do not have kids, why worry? However, do you have trees?

I question the motives of any person "pushing" AI today. Why? Because there is a considerable amount of money to be had by those on the "let us develop this technology" side. The profit motive should not be a consideration. It does not matter that AI Big Tech is making vast amounts of money for the betterment of, well, of what? Their wealth? Some of the wealthiest people are involved in this forward motion of AI. Their view does not go anywhere beyond themselves. If it did, they might have a different goal than promoting AI.

The article I am speaking of appeared in a publication named Vox (Vox, 2024) and a newsletter named Future Perfect Newsletter (Future Perfect, 2024). I wonder if there is a sister publication named "Future Imperfect."

I am critical of this technology (AI technology, that is) because of the people behind this particular instance of it. Twice before, similar technology has failed. The technology did not meet expectations then, yet Mr. Altman, as Principal Snake Oil Salesman, and many others expect that this instance of AI technology will lead to intelligent machines. That would be a great thing. BUT, and it is a LARGE "BUT," this technology does not have anything to do with intelligence. It is simply another application of technology that did not work before. Why should it work now? What does this AI have to do with anything related to the brain, let alone intelligence?

There is something else about which we all should be concerned. The current generation of AI technology has yet to be appropriately tested to ensure it works as expected. As we have seen, it sometimes does not work, and no one wants to pursue why this is or even fix the problems associated with current data-science-based AI.

A quote from the article speaks to the question of whether anyone can be held liable for failing to ensure the safety of these systems.

"If I build a car that is far more dangerous than other cars, don't do any safety testing, release it, and it ultimately leads to people getting killed, I will probably be held liable and have to pay damages if not criminal penalties." (Me: I WOULD CERTAINLY HOPE SO.)

The argument goes on: "If I build a search engine that (unlike Google) [what does this mean in the context of the argument?] has as the first result for 'how can I commit a mass murder' detailed instructions on how best to carry out a killing spree (the original article says 'spree killing' -- was this by chance written by some AI?), and someone uses my search engine and follows the instructions, I likely WON'T be held LIABLE, thanks largely to Section 230 of the Communications Decency Act of 1996." Are you kidding me? Are they kidding you?

Are they correctly characterizing the Communications Decency Act? Because if the characterization is accurate, the act makes no sense. So let us make sure.

It is always interesting to me to examine the arguments made by people supporting an effort they do not want disturbed, regardless of the implications of that support.

After reading the opening section of the act, I concur with the writer's synopsis: no one is liable for any negative result caused by information provided on the Internet. The legal system treats the producer of the information on the web, not the platform that carries it, as the publisher.

Fundamentally, the argument is that having such a law has protected innovation. We should not curtail freedom of speech as it applies to the Internet, nor, for that matter, to any other technology that benefits humankind and the pockets of those who stand to make money from such efforts.

However, I think the following part of the act presents a conflict (see https://www.law.cornell.edu/uscode/text/47/230).

US Code Section 230's Escape Clauses

The act has built into it "escape clauses" that consist of a series of BUTS, such as BUT THIS and BUT THAT.

So, in essence, the federal government's take is to let it happen, but to give other law-making institutions the ability to put conditions in place that protect against adverse outcomes of the act's earlier, less stringent provisions. California is one example of where these exceptions apply.

So here we are, setting up the situation where we need to figure out what we can do. Moreover, I see legal actions and challenges occurring because of the lack of clarity of the act. Those challenges are beginning to happen as I write this.

None of this, therefore, will hold OpenAI, Google, and the like liable for causing damage (at least right now), but a state such as California may very well say, "Hey, Google, you cannot do that."

Calling the Internet a venue for publishing where the speech that occurs is free seems far-fetched.

AI pushes this idea further because who is speaking when an AI creates any statement? It is not one person or entity, because the current generation of AI draws upon trillions and trillions of samples of speech on the Internet. Who, then, is speaking? Today's AI systems do not record the sources of their textual productions, i.e., there is currently no way to trace anything created by an AI back to its original author or authors.

Moreover, this means that no individual is responsible for what an AI writes. The producers of the technology, however, should be held responsible. Who are the producers? OpenAI, Google, and the like.

Of course, the AI community wants to avoid seeing the California bill passed. They predict the demise of California's technology industry. They say that this new California law will kill California's vibrant innovation culture.

I find such an argument extremely brittle. My question to the people claiming this is, how are you making this prediction? What is the data you are calling upon to claim that laws like the California law will curtail innovation? I would like to see the evidence for the argument.

On the other hand, the arguments for such a law are put into catastrophic terms without actual proof.

The regulatory restrictions placed on Pharma caused it to take the safest course. As one who has spent part of his career in Pharma, I have seen how Pharma, on the one hand, acts like a good citizen that would NEVER harm the public; on the other hand, all we have to do is look at Pharma's history to see otherwise.

Take, for example, a pharmaceutical like OxyContin. Here Pharma was operating under regulatory restrictions, and still it caused millions of people to become addicts by circumventing those regulations. There are examples on both sides, and I wish for once that we (people) would at least try to make an informed and balanced decision instead of predicting catastrophic outcomes.

Sometimes, regulations represent a challenge to those who wish to find a way to circumvent those very regulations.

The advocates also argue that we are worrying about nothing, or rather, that the predicted scenarios will never (NEVER???) happen.

I want the advocates (I have asked for this many times before) to demonstrate that these systems have been adequately tested, with appropriate, repeatable test plans, just as most of us would have liked SOMEONE to ask whether the automobile was necessarily a good thing for the Earth.

I do not see this; I do not see any evidence. If these systems (AI) are so safe that we do not have to worry, why do AI systems hallucinate and create nonsensical results? As a computer scientist, a software developer, and an AI researcher, I think this is the epitome of hubris, i.e., "Our systems are perfect." The oil is perfect. Lithium batteries are perfect. The benefits FAR outweigh the disadvantages.

If this is the case, please show me, or better yet, show us.

Thank you for taking your valuable time to read this article. If you have a comment or a question, please also take the time to leave the comment or question. I would appreciate that very much.

Dr. Randy M. Kaplan

June 19, 2024
