
Can we explain AI? An Introduction to Explainable Artificial Intelligence.

Understanding the effort to make Artificial Intelligence understandable to humans.

By Jair Ribeiro · Published 3 years ago · 10 min read
Photo by Emily Morter on Unsplash

A few years ago, when I was still working for IBM, I managed an AI project for a bank. During the final phase, my team and I went to the steering committee to present the results. Proud as the project leader, I showed that the model had achieved 98 percent accuracy in detecting fraudulent transactions. Everything went well until one of the executives asked:

“Very good, everything looks perfect, but now I want to understand how your AI decides whether a transaction is fraudulent or not. I want to understand the logic behind it…”

I could see general panic in my manager’s eyes when I explained that we had used an artificial neural network, that it worked through a system of artificial synapses and weight adjustments, and that, although very efficient, there was no way to understand its logic objectively.

Accurate as it was, this raw explanation put the project’s continuity in question at that time: it could only move forward if we provided an explanation that the senior executive could understand and trust.

This was my first experience facing the need for a clear explanation of Artificial Intelligence. Luckily, today we have what we call Explainable AI or xAI.

OK, but what exactly is xAI?

Explainable AI, interpretable AI, or transparent AI refers to artificial intelligence (AI) techniques that humans can trust and easily understand. It contrasts with the concept of the “black box” in machine learning, where even the designers cannot fully explain why the AI made a specific decision.

It is a field dedicated to studying methods by which Artificial Intelligence applications produce solutions that can be explained, acting as a counterpoint to entirely black-box models: opaque models in which even the developers do not know how decisions are made.

Humans need to understand their choices.

Consider the argument that we should refuse to use machine learning technology because we cannot explain how the AI makes its decisions. On the other hand, many people cannot really explain how they make their own decisions either.

Imagine explaining a person’s decision at the “model” level: when we describe our biology at the physical and chemical level, we talk about electrical signals passing from one brain cell to another. If that explanation sounds too technical, then tell me: how did you decide to order a cup of coffee the last time you arrived at Starbucks?

Say you and your friends went to Starbucks for a cup of coffee: one of your friends ordered an iced coffee, the other ordered a hot coffee, and you ordered a cup of tea.

Why did they choose iced coffee and hot coffee while you chose tea? Can anyone explain it in terms of the chemistry and synapses of your brains? Can you explain it? Would you even want such an explanation?

Do you know what this is all about? Maybe not, but many people are starting to pay a great deal of attention to how we make decisions.

Why do we need to explain AI?

This is a question with no simple answer. Take the project I mentioned at the beginning: the executives wanted to understand why they should trust our model. It is hard to believe in something we do not understand.

We have a problem when we cannot explain the decisions made by an algorithm. In assessing an AI’s decision, it is crucial to examine the factors that led to it; only then can we audit and challenge decisions, or work to improve those factors.

This is where the importance of xAI, or explainable AI, comes in: it addresses the need to interpret a Machine Learning model.

This is because the formulation of the problems addressed by ML is typically incomplete. Often, a prediction alone is not enough to address a problem; it is essential to know not just the “what,” but also the “why” and the “how.”

It is not enough to know that a teacher has been rated poorly in a given year; it is also essential to know the reason, so there is something concrete to improve.
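
To make the “why” concrete, one common and fairly model-agnostic technique is to measure how much each input feature actually matters to a trained model. The sketch below is only a minimal illustration of that idea, using scikit-learn’s permutation importance on a synthetic dataset (it is not the setup from the bank project mentioned above):

```python
# Minimal, illustrative sketch of permutation importance (not the original project's code).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Hypothetical stand-in for a fraud-detection dataset.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much the test score drops:
# a large drop means the model relies heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature {i}: importance {importance:.3f}")
```

Shuffling one feature at a time and watching the score fall is a crude answer to the “why,” but it already tells a decision-maker which signals the model leans on, which is far more persuasive than a bare accuracy number.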

Although AI is one of the most important and disruptive technologies of the century, it is subject to bias. Good model accuracy can be a trap.

A convolutional neural network, for example, can show high accuracy in pattern recognition, and only later do you discover that the accuracy came from an object that happened to be present whenever pictures of a given class were taken, not from the characteristics of the objects themselves.
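
One common way to catch this kind of shortcut is to ask where in the image the prediction is most sensitive. Below is a minimal gradient-saliency sketch; it assumes PyTorch with a recent torchvision and a pretrained ResNet, purely as an illustration (it is not the model from any project mentioned in this article):

```python
# Minimal gradient-saliency sketch (illustrative assumptions: recent torchvision, pretrained ResNet-18).
import torch
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

# Stand-in for a real photo; in practice you would load and normalize an image.
image = torch.rand(1, 3, 224, 224, requires_grad=True)

score = model(image).max()  # score of the top predicted class
score.backward()            # gradient of that score with respect to the pixels

# Per-pixel importance: how much each pixel influences the top class score.
saliency = image.grad.abs().max(dim=1).values  # shape: (1, 224, 224)
print(saliency.shape)
```

If the brightest region of the saliency map turns out to be a watermark or a piece of background that always co-occurs with one class, the high accuracy is telling you about the dataset, not about the model’s understanding of the objects.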

Speaking about the need for explainable AI, Richard Tibbetts, a Product Manager at Tableau, explained: “Decision-makers are right to remain skeptical about the unexplained reactions of AI mechanisms and machine learning. Analytics and AI should help, but not completely replace human knowledge and understanding.”

Do we have to explain it all?

Does this mean that all ML models need to be explained? Not necessarily. When we are working on well-understood problems that the field has dealt with for many years, there is no pressing need to explain the algorithm.

There is also no need to explain a model when the impact of its wrong decisions is low, such as an AI that learns to dance. But for ML models that directly impact people’s lives, such as an algorithm used to decide whether a teacher is fired, an explanation is essential.

Therefore, the study of xAI is necessary so that models’ wrong decisions can be challenged and corrected, and so that the models can evolve in tandem with the society they affect.

The Challenges of Explaining AI

There are two main challenges linked to xAI. First of all, correctly defining the concept of xAI proves to be quite challenging.

It is one thing to know what we should be told about a model; it is another to know where the limits of that knowledge should lie, and both aspects need to be clarified.

If companies had no choice but to provide detailed explanations for everything, their intellectual property could be severely affected, to mention just one example.

The second challenge is the assessment of the trade-off between performance and explainability in specific tasks.

Should we regulate and standardize specific tasks or industries, forcing them to adopt AI solutions with transparency built in, even if this places a very high burden on those industries’ potential? This trade-off is not yet clear.

Looking inside the “Black Box”

In 2019, the Organization for Economic Cooperation and Development (OECD) presented its AI principles to promote innovation and reliability.

One of the five complementary values-based principles for the responsible stewardship of trustworthy AI is that “transparency and responsible disclosure must take place around AI systems to ensure that people understand and challenge AI-based outcomes.”

Unfortunately, the way AI “thinks” is currently beyond our human reach. Worse still, AI is a terrible teacher: its models are what we, in the world of computer science, call “black boxes.”

Artificial Intelligence presents solutions without giving reasons for the outcome. Computer scientists have been trying to open this “black box” for decades, and recent research suggests that many AI algorithms “think” in ways similar to humans.

For example, a computer trained to recognize animals will learn about different kinds of eyes and ears and combine them to identify the animal correctly.

How can the AI results be explained?

A great effort is underway to open the AI black box. The research group at the AI Institute at the University of South Carolina is interested in developing an Artificial Intelligence that explains its results.

To this end, the Rubik’s Cube was used as the primary source of analysis.

The Rubik’s Cube is essentially a pathfinding problem: find a path from point A (a scrambled Rubik’s Cube) to point B (a solved Rubik’s Cube). Other such goal-directed problems include navigation, theorem proving, and chemical synthesis.
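
As a toy illustration of that framing (my own sketch, not the institute’s actual algorithm), here is a breadth-first search that finds a path from a scrambled state to the solved state on the much smaller 8-puzzle:

```python
# Toy pathfinding sketch: breadth-first search from a scrambled 8-puzzle (point A)
# to the solved puzzle (point B). Illustrative only; not the research group's method.
from collections import deque

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)  # 0 marks the empty square

def neighbors(state):
    """All states reachable by sliding one tile into the empty square."""
    i = state.index(0)
    row, col = divmod(i, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:
            j = r * 3 + c
            s = list(state)
            s[i], s[j] = s[j], s[i]
            yield tuple(s)

def solve(start):
    """Return the shortest sequence of states from start to GOAL."""
    frontier = deque([start])
    parent = {start: None}
    while frontier:
        state = frontier.popleft()
        if state == GOAL:
            path = []
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]
        for nxt in neighbors(state):
            if nxt not in parent:
                parent[nxt] = state
                frontier.append(nxt)
    return None  # unsolvable scramble

print(len(solve((1, 2, 3, 4, 5, 6, 0, 7, 8))) - 1, "moves")  # a lightly scrambled start
```

The path of states that the search returns is exactly the kind of “explanation” such a solver can produce: a sequence of moves, but no human-sized reasons for why each move was chosen.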

The University of South Carolina lab has created a website where anyone can watch its AI algorithm solve the Rubik’s Cube; however, it would be difficult for a person to learn how to solve the cube from it, because the computer is unable to tell you the logic behind its solutions.

Rubik’s Cube solutions can be divided into a few general steps: the first step, for example, might be “forming a cross,” while the second might be “putting the corner pieces in place.”

Although the Rubik’s Cube has roughly 10¹⁹ possible configurations, a generalized step-by-step guide is straightforward to remember and can be applied in many different scenarios.

Conclusion

It is clear that the more AI grows, both in terms of application and performance, the more it needs to be understood. Explaining AI is a new but promising science.

Much of the effort has focused on techniques such as attribute relevance (feature attribution) and “what-if” tools. Neither technique is new to the AI world.

Many institutions worldwide are investing in research in this field, and tech giants like Google are already offering Explainable AI as a platform.

But far beyond satisfying human curiosity, I believe we need to understand AI because it is already part of people’s lives and of organizations’ daily operations.

Furthermore, AI will be increasingly responsible for human lives, whether controlling an autonomous vehicle, fighting hunger and epidemics, or supporting medical diagnosis.

And we are going to need to trust these solutions… and at the moment, at least for me, it is tough to trust something we cannot fully understand or explain.

Key points to remember

  • Explainable AI, interpretable AI, or transparent AI refers to artificial intelligence (AI) techniques that humans can trust and easily understand.
  • In assessing an AI’s decisions, it is essential to assess the factors that led to each decision.
  • xAI, or explainable AI, addresses the need to interpret a model of Machine Learning.
  • Therefore, the study of xAI is necessary so that models’ wrong decisions can be challenged and corrected, and so that the models can evolve in tandem with the society they affect.
  • There are two main challenges linked to xAI: correctly defining the concept, and assessing the trade-off between performance and explainability in specific tasks.
  • A great effort is underway to open the AI black box.
  • The research group at the AI Institute at the University of South Carolina is interested in developing an Artificial Intelligence that explains the reason for its results, using the Rubik’s Cube as the primary source of analysis.
  • The more AI grows, both in terms of application and performance, the more it needs to be understood.
  • Explaining AI is a new but promising science.
  • Efforts are focused on techniques such as attribute relevance (feature attribution) and “what-if” tools.
  • Far beyond satisfying human curiosity, we need to understand AI because it is already part of people’s lives and of organizations’ daily operations.
  • AI will be increasingly responsible for human lives, whether controlling an autonomous vehicle, fighting hunger and epidemics, or supporting medical diagnosis.

If you want to read more about it

One more thing…

If you want to go further on your learning journey, I’ve prepared for you an amazing list with more than 60 training courses about AI, Machine Learning, Deep Learning, and Data Science that you can do right now for free:

If you want to continue discovering new resources and learning about AI, in my ebook (link below) I share the best articles, websites, and free online training courses about Artificial Intelligence, Machine Learning, Deep Learning, Data Science, Business Intelligence, Analytics, and more to help you start learning and develop your career.

Also, I’ve just published other interesting ebooks on Amazon, and I’m sure that some of them may be interesting for you… let’s keep in touch, follow me and let’s do it together.

Originally posted on Medium.com - 28-02-2021

About the Creator

Jair Ribeiro

A passionate and enthusiastic Artificial Intelligence Evangelist who writes about people's experiences with technology and innovation.
