
Why ChatGPT Is Even Bigger Than You Think: Understanding What ChatGPT Is

By Harish Kumar · Published about a year ago · 5 min read

Everyone has an opinion about ChatGPT and AI. Engineers and entrepreneurs regard it as a new frontier: a brave new world in which to create goods, services, and solutions. Social scientists and journalists are concerned, with one famous New York Times contributor, Ezra Klein, labelling it an "information warfare machine." What hath God wrought?

Let me say right away that I see great potential here. And, as with any new technology, we cannot yet fully forecast its impact. There will be setbacks and failures along the way, but I believe the end result will be a "hooray!"

What Is ChatGPT?

Simply put, this technology (and many others like it) is a "language machine" that combines statistics, supervised learning, and reinforcement learning to model words, phrases, and sentences. While it lacks true "intelligence" (it doesn't know what a word "means," but it knows how to use one), it can answer questions, write articles, summarise material, and do other things quite well.
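To make the "it knows how to use a word without knowing what it means" point concrete, here is a toy sketch of next-word prediction from pure co-occurrence statistics. This is not how GPT actually works (a real model uses a neural network over tokens trained on billions of words); the corpus and helper names below are invented for illustration only.

```python
import random
from collections import defaultdict

# A tiny toy corpus; a real model trains on billions of words.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count which word follows which: pure statistics, no "meaning".
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start, length=6, seed=0):
    """Pick each next word at random from the words seen after the current one."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        words.append(rng.choice(candidates))
    return " ".join(words)

print(generate("the"))
```

The generator produces grammatical-looking word sequences without any notion of what a "cat" or "mat" is; scaling this statistical idea up (with far better models) is the essence of the technique.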

ChatGPT-style engines are "trained" (programmed and reinforced) to emulate writing styles, avoid certain kinds of conversations, and learn from your queries. In other words, more sophisticated models can refine their answers as more questions are asked, then store what they have learnt for future use.

While this isn't a novel concept (we've had chatbots for over a decade, including Siri, Alexa, Olivia, and others), the degree of performance in GPT-3.5 (the most recent version) is astonishing. I've asked it things like "what are the best practices for hiring" and "how do you develop a corporate training programme," and it gave me good answers. Yes, the answers were fairly basic and somewhat inaccurate, but with further training they will undoubtedly improve.

It also offers a variety of other capabilities. It can answer historical questions (such as who was president of the United States in 1956), write code (Satya Nadella estimates that 80% of code will be generated automatically), and compose news stories, information summaries, and more.

One vendor I spoke with last week is employing a GPT-3 derivative to generate automatic quizzes from courses and act as a "virtual Teaching Assistant." That brings me to the potential use cases.

How Are ChatGPT and Similar Technologies Used?

Before I get into the market, let me explain why I think this will be so massive. The corpus (database) of information that these systems index "trains and educates" them. The GPT-3 system has been trained using the internet and carefully vetted data sets, so it can answer practically any question. That is, in some ways, "dumb," because "the internet" is a mishmash of marketing, self-promotion, news, and opinion. To be honest, I think we all have enough trouble determining what is true (try Googling for health information on your latest ailment; you'll be surprised at what you find).

The Google counterpart to GPT-3 (rumoured to be Sparrow) was designed with "ethical principles" in mind from the outset. According to my sources, it encodes rules like "do not give financial advice," "do not discuss race or discriminate," and "do not give medical advice." I'm not sure whether GPT-3 has this degree of "ethics," but you can bet that OpenAI (the firm developing it) and Microsoft (one of its major partners) are working on it.

So, while "dialogue and language" are vital, some extremely erudite people (I won't name names) are actually kind of jerks. Chatbots like ChatGPT therefore need better, more in-depth material to deliver truly industrial-strength intelligence. A chatbot that works "pretty well" is fine if you're using it to break through writer's block. But if you want it to perform reliably, it must draw on authentic, deep, and growing domain data.

One analogy is Elon Musk's overhyped self-driving software. I, for one, do not want to drive, or simply share the road, with a fleet of cars that are 99% safe. Even 99.9% safety is insufficient. Likewise, a chatbot can become a "disinformation machine" if its information corpus is wrong and its algorithms aren't constantly checked for reliability. And one of the most senior AI engineers I know warned me that ChatGPT will almost certainly be biased because of the data it consumes.

Consider the possibility that the Russians utilised GPT-3 to create a chatbot on "United States Government Policy" and directed it to every conspiracy theory website ever created. This doesn't appear to be a difficult task, and if they put an American flag on it, I'm sure many people would use it. As a result, the source of information is critical.

AI engineers are well aware of this, and their answer tends to be "more data is better." OpenAI CEO Sam Altman believes these systems will "learn" their way past erroneous data as the data set grows larger. While I understand the idea, I believe the opposite: one of the most valuable business applications of OpenAI's technology will be directing this system to refined, smaller, validated, deep datasets that we trust. (As a major investor, Microsoft has its own Ethical Framework for AI, which we must assume will be enforced through the partnership.)
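One common way to "direct" a language model at a small, validated dataset is retrieval: before the model answers, look up the most relevant passages in a trusted corpus and supply only those as context. The snippet below sketches just the retrieval step, using plain keyword overlap as the ranking; the document names, contents, and scoring are invented for the example (real systems typically use vector embeddings instead).

```python
# Minimal retrieval sketch: rank trusted documents by keyword overlap
# with a query. The point is that the model only ever sees vetted text,
# not "the internet".
trusted_docs = {
    "onboarding": "New hires complete compliance training in week one.",
    "benefits": "Employees may enroll in the health plan within 30 days.",
    "leave": "Parental leave is twelve weeks, fully paid.",
}

def retrieve(query, docs, top_k=1):
    """Return the names of the top_k docs sharing the most words with the query."""
    q = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda name: len(q & set(docs[name].lower().split())),
        reverse=True,
    )
    return scored[:top_k]

print(retrieve("how long is parental leave", trusted_docs))
```

The retrieved passage would then be prepended to the user's question before it reaches the model, so answers are grounded in the curated corpus rather than whatever the base model absorbed during training.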

The most impressive solutions I've seen in demos over the years are those that focus on a single domain. Olivia, a Paradox AI chatbot, is intelligent enough to screen, interview, and hire a McDonald's employee with remarkable efficiency. A vendor created a chatbot for bank compliance that functions as a "chief compliance officer," and it works quite well.

As I describe in the podcast, imagine if we built an AI that directed us to all of our HR research and professional growth. It would be a "virtual Josh Bersin," possibly smarter than me. (We are currently prototyping this.)

Last week, I saw a demonstration of a system that took existing courseware in software engineering and data science and automatically generated quizzes, a virtual teaching assistant, course summaries, and even learning objectives. This type of work normally demands significant cognitive effort from instructional designers and subject matter experts. When we "direct" the AI toward our own content, we suddenly unlock its value at scale. And experts or designers can train it behind the scenes.
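The demo described above presumably uses a large language model, but the simplest flavour of automatic quiz generation can be done mechanically: turn each sentence of the course text into a fill-in-the-blank (cloze) question by hiding a content word. The code below is a toy sketch under that assumption; the sample course text and word-length heuristic are invented for illustration.

```python
import random

def cloze_quiz(text, n_questions=2, seed=0):
    """Turn sentences into fill-in-the-blank questions by hiding one long word."""
    rng = random.Random(seed)
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    quiz = []
    for sentence in sentences[:n_questions]:
        # Crude heuristic: long alphabetic words are likely content words.
        candidates = [w for w in sentence.split() if len(w) > 5 and w.isalpha()]
        if not candidates:
            continue
        answer = rng.choice(candidates)
        question = sentence.replace(answer, "_____", 1)
        quiz.append((question, answer))
    return quiz

course_text = (
    "Gradient descent updates parameters in the direction of steepest decrease. "
    "Overfitting happens when a model memorises training data."
)
for q, a in cloze_quiz(course_text):
    print(q, "->", a)
```

A production system would use the language model itself to write genuine comprehension questions, but even this crude version shows why pointing the tooling at your own courseware is the unlock.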

Consider the hundreds of corporate applications: recruiting, onboarding, sales training, manufacturing training, compliance training, leadership development, and even personal and professional mentoring. If the AI is trained on a trustworthy domain of content (which most enterprises have in abundance), it can solve the "expertise delivery" problem at scale.

