Is It Time to Demystify AI? - Google Thinks So

With Google's innovative use of AI in its products, the tech world is asking serious questions about the morality of AI

By Becka Maisuradze · Published 4 years ago · 3 min read

Conversations about artificial intelligence are slowly but surely becoming more graspable for the average person. This is partly due to the increased adoption of AI in everyday technologies, and partly due to the heated debate about technology's involvement in every aspect of our lives.

Google's use of artificial intelligence for marketing purposes is nothing new, but now companies from all sorts of industries are talking about adopting AI. The latest industry to embark on that journey is gambling: AI is being used to evaluate playing patterns in online casinos and respond accordingly, so that new online Net Entertainment casinos in Norway or regular casinos in Australia can promote responsible gambling. With adoption reaching this level, Google has launched an initiative to make AI "explainable".

The concept of AI misuse

The concept is pretty simple. Companies use AI to sift through enormous amounts of data and solve challenges that humans might struggle with. Most people understand that. But the conversation about how AI actually makes its decisions hasn't really taken place in the public eye.

This week, at an event in London, Google's cloud computing division pitched a project with an intriguing title: "Explainable AI".

The project aims to provide clear information about the performance and potential shortcomings of AI systems. To that end, Google will offer "insights" intended to make the reasoning of AI algorithms less mysterious and more trustworthy.

Google recently got into quite a scandal when the Wall Street Journal published an investigation challenging Google's algorithm and accusing the company of not following the guidelines it publicly claims to follow.

Google’s defense

Google as a company is no stranger to such accusations. There have been similar allegations about the way Google handles customer data, and there are ongoing discussions about Google's algorithm blacklisting certain names or blogs for their political views, or about YouTube, which Google owns, promoting left-leaning content over right-wing creators. All of these accusations may have prompted Google to talk openly about the ever-so-mysterious algorithm that seems to be causing it so much trouble.

Professor Andrew Moore, head of Google Cloud's AI division, spoke with the BBC about Google's new initiative. According to him, the era of black-box machine learning is behind us.

It is evident that even though Google's staff work on these learning systems very meticulously, they sometimes struggle to understand the whole process, especially with large systems built for smartphones, search ranking, or question answering. Google now plans to release these tools publicly so that others can try to understand them too.

As Google representatives put it during the event in London, the project aims at "sharing the essential facts of a machine learning model in a structured, accessible way." So it's safe to assume the project doesn't mean we'll get to learn every detail of how Google specifically works, but it serves as a starting point for understanding where an AI's answers come from and how to make sense of the system as a whole. It will be especially interesting for data scientists who want to diagnose what's going on inside their models, as the sketch below illustrates.
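To make that idea concrete, here is a minimal sketch of the kind of diagnosis such tooling enables. It uses permutation importance from scikit-learn, a generic explainability technique, not Google's actual Explainable AI product; the dataset and model are stand-ins chosen purely for illustration.

```python
# A minimal sketch of model explainability via permutation importance.
# This is a generic technique, NOT Google's Explainable AI tooling;
# the dataset and model below are illustrative stand-ins.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Train an opaque ("black-box") model on a standard dataset.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops:
# the bigger the drop, the more the model relies on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Report the five most influential features.
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[i]}: {result.importances_mean[i]:.3f}")
```

A data scientist running something like this can see which inputs actually drive a model's decisions, which is the kind of "essential fact" the initiative wants surfaced in a structured, accessible way.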

The morals of AI

Whether this initiative is a response to the backlash from the WSJ investigation or to any of the other recent allegations about the algorithm is unclear. Speaking with the BBC, Mr. Moore did touch on the "moral code" of artificial intelligence, but the guidelines seem very vague. For example, Moore referred to the set of principles laid out by chief executive Sundar Pichai under which Google operates.

They apparently include something along the lines of "it should never be used to do harm," along with making sure the system's decisions are unbiased, fair, and accountable. These seem very easy to work around if one actually decided to do "harm." For something as massive as Google working with something as powerful as AI, such guidelines seem pretty weak. Still, it would be unfair to dismiss Google's efforts to make AI more accessible as a mere cover-up, since the initiative seems long overdue. We'll have to wait and see whether the new project actually sheds more light on AI and how it works.
