
Artificial Intelligence Is Not Smart Enough to Know When It Can't Be Trusted.

Is Artificial Intelligence the Future?

By Get Value Daily · Published 3 years ago · 3 min read
Photo by Arseny Togulev on Unsplash

As it turns out, scientists may just have saved us from a future AI-led apocalypse by creating neural networks that know when they are untrustworthy.

These deep learning neural networks are designed to mimic the human brain by weighing many variables against one another, spotting patterns in masses of data that humans lack the capacity to analyze.

While Skynet might still be a way off, AI is already making decisions in areas that affect human lives, like autonomous driving and medical diagnosis, which means it is vital that these systems are as accurate as possible. To help with this aim, the newly developed neural network system can generate a confidence level along with its predictions.

This self-awareness of trustworthiness has been given the name Deep Evidential Regression. It bases its scoring on the quality of the available data it has to work with -- the more precise and comprehensive the training data, the more likely it is that future predictions will work out.
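The article doesn't spell out the math, but in Amini et al.'s formulation the network's final layer outputs the four parameters of a Normal-Inverse-Gamma distribution, and the prediction and both kinds of uncertainty then follow in closed form. A minimal sketch (the parameter values below are illustrative, not from a trained model):

```python
def evidential_uncertainty(gamma, nu, alpha, beta):
    """Given the Normal-Inverse-Gamma parameters produced by an
    evidential regression head, return the prediction plus its
    aleatoric (data noise) and epistemic (model) uncertainties."""
    prediction = gamma                      # predictive mean E[mu]
    aleatoric = beta / (alpha - 1)          # expected data noise E[sigma^2]
    epistemic = beta / (nu * (alpha - 1))   # model uncertainty Var[mu]
    return prediction, aleatoric, epistemic

# Toy values: more "evidence" (larger nu) means lower model uncertainty.
pred, alea, epis = evidential_uncertainty(gamma=2.0, nu=1.0, alpha=3.0, beta=1.0)
pred2, alea2, epis2 = evidential_uncertainty(gamma=2.0, nu=10.0, alpha=3.0, beta=1.0)
```

Because these are closed-form expressions over the network's outputs, the confidence score comes essentially for free at inference time.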

By ThisisEngineering RAEng on Unsplash

The research team compares it to a self-driving car deciding whether to proceed through a junction or wait, depending on how confident the neural network is in its predictions. The confidence rating also includes hints for improving the score (by tweaking the network or the input data, for example).

While similar safeguards have been built into neural networks before, what sets this one apart is the speed at which it works, without excessive computing demands -- it can be completed in one run through the network, instead of several, with a confidence level output at the same time as the decision.
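To see why that matters, compare it with sampling-based approaches such as Monte Carlo dropout or ensembles, which estimate uncertainty by running many stochastic forward passes. The toy models below are stand-ins (a real network would replace the `x * 2` placeholder), but they illustrate the cost difference:

```python
import random

def sampling_uncertainty(x, n_passes=50):
    """Sampling-based uncertainty (e.g. MC dropout): run many noisy
    passes, then take the mean and variance of the predictions."""
    samples = [x * 2 + random.gauss(0, 0.1) for _ in range(n_passes)]
    mean = sum(samples) / n_passes
    var = sum((s - mean) ** 2 for s in samples) / n_passes
    return mean, var

def evidential_pass(x):
    """Evidential approach: a single deterministic pass returns both
    the prediction and its variance (constants here are illustrative)."""
    return x * 2, 0.01

mc_mean, mc_var = sampling_uncertainty(3.0)   # 50 forward passes
ev_mean, ev_var = evidential_pass(3.0)        # 1 forward pass
```

Fifty passes versus one is exactly the kind of saving that makes real-time use on a car or in a clinic plausible.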

"It may be used to assess products which rely on learned models. By estimating the uncertainty of a learned model, we also understand how much a mistake to expect from the model, and what missing data could improve the model."

The researchers tested their new system by getting it to judge depths in different parts of an image, much as a self-driving car might judge distance. The system performed comparably to existing setups while also estimating its own uncertainty -- the times it was least certain were indeed the times it got the depths wrong.

By ThisisEngineering RAEng on Unsplash

As an added bonus, the system managed to flag times when it encountered images outside its usual remit (i.e. very different from the data it was trained on) -- which in a medical setting could mean getting a doctor to take a second look.
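In practice, that kind of flagging can be as simple as thresholding the epistemic uncertainty and routing high-uncertainty cases to a human. A sketch, assuming the Normal-Inverse-Gamma parameterization from Amini et al. (the threshold and parameter values are illustrative assumptions, not from the paper):

```python
def triage(nig_params, threshold=0.5):
    """Split evidential predictions into confident ones and ones
    flagged for human review, using the epistemic uncertainty
    beta / (nu * (alpha - 1)) from each (gamma, nu, alpha, beta)."""
    confident, flagged = [], []
    for gamma, nu, alpha, beta in nig_params:
        epistemic = beta / (nu * (alpha - 1))
        (flagged if epistemic > threshold else confident).append((gamma, epistemic))
    return confident, flagged

batch = [
    (1.2, 8.0, 4.0, 1.0),  # plenty of evidence -> low uncertainty
    (0.7, 0.5, 1.5, 1.0),  # sparse evidence    -> high uncertainty
]
confident, flagged = triage(batch)
```

Out-of-distribution inputs accumulate little evidence, so they naturally land in the flagged pile for a second look.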

Even if a neural network is right 99 percent of the time, that remaining 1 percent can have serious consequences, depending on the scenario. The researchers say they are confident that their new, streamlined trust evaluation can improve safety in real time, even though the work hasn't yet been peer-reviewed.

"We are beginning to see a good deal more of these [neural network] models trickle out of the research lab and in the actual world, into situations which are touching humans with possibly life-threatening consequences," states Amini.

"Any user of this procedure, while it is a doctor or an individual from the passenger seat of a car, needs to know about any risk or uncertainty associated with that decision."

Future of AI Technology

When you hear the term Artificial Intelligence, what comes to mind? Many people think of computers or robots. But the term is very broad, covering everything from software that learns to artificially intelligent robotic systems. Even computer-aided design (CAD) tools, long used to help engineers design physical products such as cars, are increasingly augmented with AI to assist in the design process.

As we all know, humans have a large advantage over other animals and machines when it comes to intelligence. The human brain is built to analyze data, make decisions, adapt quickly, and solve problems. In many cases, though, a machine doesn't need to be smarter than us; it just needs a clear understanding of what has to be done. Consider how people who are blind can still walk, speak, and live independently: they succeed because they understand the process they are carrying out.

One of the most interesting and promising directions for artificial intelligence is nanotechnology. Nanotechnology-based approaches are being developed to create machines that are more like human beings, not just more efficient. So how is the future of artificial intelligence related to nanotechnology? It's possible that the future of AI technology will involve nanotechnology in the creation of artificially intelligent robots.
