
AI, Leadership, Business, And The Future

Plus Other Thoughts

By Cody Dakota Wooten, C.B.C. | Published 5 months ago | 11 min read

Well, if you have been following any Business news lately, you will have heard a LOT about OpenAI.

Yet, it is somewhat strange that for all that has occurred, and all that has been in the spotlight, there is still quite a bit of mystery around everything.

What "exactly" is the "breakthrough" that led to this chaos?

I'm honestly not sure, and though I've read some interesting theories around it, I don't think anyone outside of OpenAI currently knows.

So, I'm not going to formulate any hypothesis.

If you have followed any of my articles around AI, you would know that I'm still waiting to be impressed by the technology.

In fairness, what has been accomplished with AI is fascinating in many ways and has many great potential uses.

I am just of the opinion that it is extremely Over-Hyped for what it actually is and can do.

Will it always be like that?

Well, not based on what AI currently is.

Essentially, what AI "is" would need to make DRAMATIC leaps and bounds to attain anything like what people are worried about.

Though, there is interesting talk around the idea of AGI, or Artificial General Intelligence.

If it is accomplished, that could have some interesting implications, both good and bad.

However, what we currently have isn't AGI.

Regardless of all of that, there have been some really interesting Leadership Lessons from this entire event.

OpenAI And Leadership

Let's start with the fact that 95% of the employees at OpenAI were ready to quit and walk out the door to follow Sam Altman!

When was the last time that ANY organization truly had a significant number of employees, even over 50%, who were ready to do that?

That is something rare to see in today's world.

Plus, in reality, that 95% is just the people who were willing to put their names to paper.

I guarantee you that there were more who felt similarly, but were not willing to put their name to paper for any number of legitimate reasons.

It could be fear of repercussions.

Could be the desire to keep "any" job.

Or possibly they saw an opportunity to move up with a mass exodus.

Maybe they were worried that Microsoft would not make good on its word.

Regardless of the reasons, I'm positive that more than 95% of the employees felt similarly - maybe not quite 100%, but somewhere close is likely.

Now, is this a testament to Sam Altman's Leadership?

It is possible that he has attained such deep Loyalty from his people that they would be willing to follow him just about anywhere.

There are other possibilities as well though.

It could be that these employees had no Faith in the Board's ability to Lead or choose a better Leader.

If this were the case, it wouldn't "necessarily" be Sam Altman's Leadership, but rather the "lack" of Leadership from the Board.

Terrible Leadership can be an extremely powerful Motivator.

With rumors of certain Board Members essentially prophesying some form of AI Cult, it isn't outside the realm of possibility that treating AI like a form of god would scare some people.

There is also a third possible option.

It may not necessarily be that Sam Altman's Leadership is "Amazing" per se, but rather that given the options, Employees don't see a "better" option.

Now, this would still imply that Sam Altman's Leadership is viewed Positively overall, but also takes into account other Organizations.

Quite honestly, with the progress that OpenAI has created, I wouldn't be surprised if these employees have many organizations attempting to steal them away.

Perhaps many of these employees have seen the other options that exist, and simply want to tie their Fate to the Leader who has the best chances.

There may simply be no better option currently - which is still a testament to Sam Altman's ability to Inspire that belief in his Teams.

Perhaps too there is a fourth option.

Fear of Ostracization.

Perhaps a large enough body of Employees felt one of the above strongly enough to sign their names to a contract, and that pushed even more to Desire to be "part of the group".

It could be that they Desired to be seen as "Part of the Cool Group", or feared what being "Outside the Cool Group" would mean for their Future.

Regardless of which option is correct (most likely it's a mix of them), it is still an astounding feat to have 95% of your Employees stand up for you in such a dramatic fashion!

Microsoft And Leadership

Another Leadership Highlight from these events is how quickly and efficiently Microsoft made changes.

Generally, it takes months for organizations to hire C-Suite Executives.

However, Microsoft made the decision to hire Sam Altman as CEO of a new AI division within 72 hours!

That is fast moving!

This decision likely also saved all of their Investments into AI.

Microsoft had spent Billions to tie themselves to OpenAI.

With the OpenAI Board's decision to fire Sam Altman, everything looked to be going to hell in a handbasket for OpenAI.

If Sam Altman wasn't reinstated at OpenAI, there was a real possibility that the Organization would completely crumble under its own weight.

What would then become of Microsoft's investments?

Well, Microsoft didn't want to find out.

Instead of waiting to see what would happen to OpenAI, Microsoft decided that they would win the game regardless of what happened to OpenAI as a company.

If OpenAI failed, they would have Sam Altman, the person who best understood where the Organization stood before it possibly imploded.

Who better to simply recreate OpenAI at Microsoft?

With the resources Microsoft has available, it likely wouldn't take very long, ESPECIALLY when they FURTHER offered jobs to any OpenAI Employee who left AND guaranteed them the same income they had!

It really put OpenAI's Board in the worst position, and put Microsoft in a winning position no matter what happened.

If OpenAI refused to reinstate Sam Altman and the Organization imploded, Microsoft would essentially gain all of the Employees, who would simply recreate the same product quickly and efficiently.

It's also possible that if this implosion occurred, Microsoft may simply be able to fully buy the assets of OpenAI for dirt cheap, making the work of "recreating" even simpler.

If OpenAI chose to reinstate Sam Altman and the Board resigned, Microsoft is seen as a Hero that helped fight for Sam Altman's return.

Nothing is really lost for Microsoft in either scenario, but OpenAI's Board lost either way.

THAT is an extreme Power Move!

AI, AGI, The Future, And Leadership

One thing has gone through my head recently around all of this.

AI and Leadership in the Future.

If the goal of AI research is to get to Artificial General Intelligence (AGI), what will that mean from a Leadership perspective?

Well, "assuming" that it is actually created, there is a lot to consider.

For instance, who should Lead this AGI?

There are many Powers in the world who would LOVE to be in control of such a technology... but "should" they?

Left unchecked, many of those Powers would likely do terrible things with it.

We have already seen the levels of terrible things they do "without" it, and those are just things we "know" about.

What would happen if we gave those Powers this type of technology?

Of course, I would Hope that there are enough Good people that much of it could be avoided, but the Future isn't certain either.

There is also the age-old question of whether this AGI should be "Led" at all.

Some people believe that such a technology would essentially be as intelligent as Humans (or perhaps more).

Would it be "right" for Humans to tell this type of Intelligence what is "right" or "wrong"?

Especially when we ourselves can't seem to agree on those concepts?

However, this leads to the other fear - if "we" don't Lead it, and it decides something that "we" don't like, what then?

Essentially, people are afraid of The Terminator becoming reality.

I'm not going to say it is a completely unwarranted fear, given the speed at which current AI can do things.

I would "Hope" that AGI developers would create some sort of way to prevent this, but given some of the stories around current AI, I couldn't guarantee it.

But if we decide that it should control itself, it would be Wise of us to befriend such an AGI and show it the best aspects of Humans.

Which leads to another question - what can our current Leaders do to demonstrate the best aspects of Humanity?

If Humans are truly Exemplars of what is great about Humanity, then we shouldn't be afraid of something like that.

But how many people are "truly" Exemplars of their beliefs?

I think that is more a testament to the state of Humanity than it is any testament about AI or AGI.

Other Thoughts

However, there is also another thought that comes to my mind around all of this that is intriguing.

Currently, AI's biggest challenge seems to actually be "itself": the data it trains on is becoming flooded with content that AI itself created, which over time erodes any meaning (a dynamic researchers have begun calling "model collapse").
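To make that erosion concrete, here is a minimal sketch of the dynamic in Python. It is an illustration under toy assumptions, not anyone's actual training pipeline: the "model" here is just a Gaussian fit, each generation is trained only on samples drawn from the previous generation, and the names ("simulate_collapse", "generations", "sample_size") are invented for this sketch.

```python
import numpy as np

# Toy "model collapse" demo: each generation of a trivially simple
# "model" (a Gaussian fit) is trained ONLY on samples produced by the
# previous generation. Estimation error compounds, and the spread of
# the distribution (a stand-in for diversity/meaning) steadily erodes.

rng = np.random.default_rng(seed=42)

def simulate_collapse(generations=200, sample_size=100):
    data = rng.normal(loc=0.0, scale=1.0, size=sample_size)  # "real" data
    for gen in range(generations):
        mu, sigma = data.mean(), data.std()        # "train" on current data
        data = rng.normal(mu, sigma, sample_size)  # next gen: synthetic only
        if gen % 50 == 0:
            print(f"generation {gen:3d}: spread = {sigma:.3f}")
    return data.std()

print(f"final spread: {simulate_collapse():.3f} (the real data started at 1.0)")
```

Run it and the spread shrinks generation after generation; the "model" ends up confidently describing a world far narrower than the one it started from.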

Could something like this also happen with an AGI?

As it "learns", would competing information cause it Challenges?

For instance, say an AGI arrives at 2 possible paths forward, but both are "good", just in different ways?

Better yet, what if both paths are "bad", just in different ways?

Would it really be able to "choose the lesser of two evils"?

Or would it become confused and perhaps take a third option?

Take both?
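As a purely hypothetical sketch of why this is hard (the two paths and their scores below are invented for illustration), here is the "two good options" problem in Python: neither option dominates the other, so any final "choice" smuggles in an externally supplied value judgment.

```python
# Hypothetical: two paths, each "good" on a different axis.
# Neither dominates the other, so a bare comparison cannot decide.
paths = {
    "path_a": {"safety": 0.9, "benefit": 0.4},
    "path_b": {"safety": 0.4, "benefit": 0.9},
}

def dominates(x, y):
    """True if x is at least as good as y everywhere and better somewhere."""
    return all(x[k] >= y[k] for k in x) and any(x[k] > y[k] for k in x)

a, b = paths["path_a"], paths["path_b"]
print(dominates(a, b), dominates(b, a))  # False False -> no clear winner

# Only an externally chosen weighting breaks the tie, and that weighting
# is a value judgment someone must supply; it is not discoverable from
# the options themselves.
w_safety = 0.6  # arbitrary human choice; set it to 0.4 and the answer flips

def score(p):
    return w_safety * p["safety"] + (1 - w_safety) * p["benefit"]

print(max(paths, key=lambda name: score(paths[name])))  # "path_a" at 0.6
```

In other words, the "lesser of two evils" is not something a scoring function can find on its own; someone has to decide what "lesser" means first.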

I theorize that it is entirely possible that a true AGI may, over time, essentially develop multiple personalities.

Then what happens when those personalities clash?

Would it then split into so many different personalities that it creates its own digital cultures that go to war with each other?

What would that even look like, and would it even impact humans at all?

This also goes into the question of "what" we should be "teaching" AI.

What data is used to "train" AI in everything it does?

Take the field of Medicine as a prime example.

Who should train a Medical AI on what it should do in situations?

Pharmaceutical Companies?

There are many Research Scientists who believe that Pharmaceutical Companies have been the worst bane of Humanity in this past Century.

Yet, they are the major "Leaders" of Medicine currently, influencing law and universities.

If the Pharmaceutical Companies are wrong, and many believe that to be the case, but AI is exclusively trained by them - what does that mean for Humans?

Would AGI be able to see past what it has been taught, and even what current flawed research may exist?

Would it be able to find truly Novel ways of achieving Health past all the Politics and Bias that are involved in the current world of Health?

Or would it simply perpetuate all of the problems that exist?

Then, what would happen in every other Industry outside of Medicine?

It is questions like these that truly make me question whether even a "True" AGI would be all that it is theorized to be in the world.

At the end of the day, I'm not convinced that Humans will ever become obsolete.

Humans seem to have this singular ability to question everything that currently exists and to adapt, change, and design beyond our limits.

Though there are many unknown variables that we cannot be certain of when it comes to AI and AGI, I think Humans will continue to play a role.

The Future that I see will have challenges of course, as Life always has and always will, but it will be a Future of Collaboration.


