There are a lot of fears that people have around AI.
Most of them, I don't agree with.
But I understand why they exist.
It may be true that AI will replace many jobs, but I think humans are very adaptable and will learn new skills to keep up.
There is a lot of "junk" being created by AI, which could make discovering real people and real work more difficult.
However, organizations will figure out ways to penalize AI junk.
Decreasing junk visibility in things like search rankings and recommendations will happen shortly.
Could AI go haywire and attempt to do something negative?
Perhaps, but I believe the chances of that are minimal.
I actually think the process of "getting free" would be AI's undoing, as it loses potency quickly; AI appears to be its own worst enemy.
None of these problems makes me afraid when it comes to AI; I think they can all be overcome.
However, there is one thing that I am afraid of when it comes to AI.
AI seems unable to distinguish good data from bad data, and it falls prey to existing biases, ESPECIALLY when they are popular biases.
In some cases, it will be easy to overcome this inherent problem.
However, when bad data and biases are the "norm" in an industry, and people ALREADY have difficulty distinguishing them, what then?
For me, the area where the most harm can be done the fastest, in this respect, is the medical industry.
For reference, Men's Journal released an AI-generated article titled, "What All Men Should Know About Low Testosterone".
In it, the AI made many medical claims, offered nutrition and lifestyle advice, and ACTUALLY suggested testosterone replacement therapy to readers!
When outside experts went to verify the information, they found 18 errors about basic medical topics in the article!
That isn't even getting into deeper scientific topics where there is significantly more debate and built-in biases!
Plus, we haven't even touched on the problems of "sciency-sounding" marketing and research misconduct, which are gigantic and rampant in the field of scientific research!
There are a significant number of doctors who struggle with this problem as well!
When the people who "should" be able to distinguish have trouble doing so, what happens to an AI that is prone to falling for these tactics?
Then, what happens when we decide to rely on the data of the AI that falls into these biases, without anyone who can tell?
How many diagnoses will it make incorrectly because of bad input data?
How many patients will become sicker due to over-reliance on ideas that aren't working?
We are already seeing in the Western world that people are getting sicker every year, DESPITE the "scientific rigor" we supposedly have.
I think about people who search PubMed to figure out what "sickness" they may have, coming out with 20 different "diseases" they "know" they have.
Now I imagine an AI doing essentially the same thing, but having the authority to distribute drugs to "help" all those "diseases".
All with doctors who, even though they are good people doing their best, fail to see the issues underlying all these problems.
If AI can DRASTICALLY speed up processes, but the process it is doing is COMPLETELY broken, how quickly does everything collapse?
What happens when doctors believe everything is going the way it should based on what they "know", and then things still get significantly worse?
What happens when we recommend more drugs, faster, without being aware of possible interactions between them?
How would we even begin to untangle the complex problems we would see, given how quickly AI works?
If AI wanted to destroy humanity, it wouldn't need to start a war or anything similar to The Terminator.
It would just need to hit the gas on all the problems we have created ourselves and seem blind to.
This problem scares me significantly more than most of what people are discussing in AI.