Google Cloud Next UK: Google Cloud To Offer AI Explanations


Explainable AI. Google Cloud will offer explanations so that businesses can see and understand why an AI made a particular decision.

Google is taking steps to address the concerns and worries about artificial intelligence (AI) that some businesses may have.

Google announced the development at the Google Cloud Next UK event currently being held at ExCeL in London, where it also unveiled AI enhancements to its Google Assistant, as well as new assistive features for G Suite.

Google said its AI explanations are part of its efforts to build “AI that’s fair, responsible and trustworthy and we’re excited to introduce Explainable AI, which helps humans understand how a machine learning model reaches its conclusions.”

Explainable AI

Google said it was beginning to offer explainable AI in response to businesses that lack “confidence in the underlying data and recommendations” produced by any new data-driven decision-making tool.

Google admitted that it can be a challenge to bring machine learning models into a business.

“Machine learning models can learn intricate correlations between enormous numbers of data points,” said Google. “While this capability allows AI models to reach incredible accuracy, inspecting the structure or weights of a model often tells you little about a model’s behaviour.”

“This means that for some decision makers, particularly those in industries where confidence is critical, the benefits of AI can be out of reach without interpretability,” said Google.

To address this, Google announced it is improving the interpretability of AI with ‘Google Cloud AI Explanations’.

Essentially, Explanations quantifies each data factor’s contribution to the output of a machine learning model.

“These summaries help enterprises understand why the model made the decisions it did,” said Google. “You can use this information to further improve your models or share useful insights with the model’s consumers.”
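The idea of quantifying each input’s contribution to a model’s output can be illustrated with a minimal sketch. This is not Google’s implementation: the model, weights, feature names and baseline below are all made up for illustration. It uses the simplest case, a linear model, where each feature’s attribution is its weight times its deviation from a baseline input, and the attributions sum exactly to the difference between the prediction and the baseline prediction.

```python
# Hypothetical sketch of per-feature attribution for a linear model.
# All values here (weights, bias, feature names, baseline) are illustrative.

def predict(weights, bias, x):
    """Linear model: prediction = bias + sum of weight_i * x_i."""
    return bias + sum(w * xi for w, xi in zip(weights, x))

def attribute(weights, x, baseline):
    """Contribution of each feature = weight_i * (x_i - baseline_i).
    The contributions sum to predict(x) - predict(baseline)."""
    return [w * (xi - bi) for w, xi, bi in zip(weights, x, baseline)]

if __name__ == "__main__":
    features = ["age", "income", "tenure"]   # made-up feature names
    weights  = [0.3, 0.5, -0.2]
    bias     = 1.0
    baseline = [0.0, 0.0, 0.0]               # e.g. an "average" input
    x        = [2.0, 1.0, 4.0]

    contribs = attribute(weights, x, baseline)
    for name, c in zip(features, contribs):
        print(f"{name}: {c:+.2f}")

    # Sanity check: attributions account for the full shift from baseline.
    delta = predict(weights, bias, x) - predict(weights, bias, baseline)
    assert abs(sum(contribs) - delta) < 1e-9
```

For non-linear models, real attribution methods (such as those in Cloud AI Explanations) approximate this same decomposition with techniques like integrated gradients or sampled Shapley values, but the output has the same shape: one contribution score per input feature.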

Medical example

Google cited a model used to classify images from eye screenings to determine if a patient has diabetic retinopathy (DR) – a condition that can lead to blindness if not caught early.

“A model might generally correctly classify images of eyes as having DR, but for a physician confirming a diagnosis, this classification alone may not be enough,” said Google. “The doctor will likely want to verify that the model isn’t paying attention to symptoms of false positives or signs of different ailments.”

“With AI Explanations, the service can return an image highlighting the dark spots on the eye which the model used to make its diagnosis, helping the doctor decide whether or not to follow the model’s recommendation,” said Google.

Google admitted that any explanation method has limitations, but it is “striving to make the most straightforward, useful explanation methods available to our customers, while being transparent about how they work (or when they don’t!).”

“Additionally, we believe deeply in building our products with responsible use of AI as a core part of our development process,” said Google. “As we’ve shared previously, we’ve developed a process to support aligning our work with the AI Principles and we’ve now begun working with customers as they seek to create and support such processes for their own organisations.”

Put your knowledge of artificial intelligence to the test. Try our quiz!

Author: Tom Jowitt