Google’s DeepMind Creates AI That Can Navigate The London Underground

The AI system mixes deep learning neural nets with external memory to further mimic human learning

Google’s DeepMind artificial intelligence (AI) division has created a system that uses deep learning to navigate the London Underground.

While a plethora of mobile apps and online services can already help someone navigate the Tube, it is the way DeepMind’s system works that is significant.

Memorising the Tube

Many existing AI systems, including Google’s, use a technique called deep learning.

Using an artificial neural network, modelled on the biological networks of neurons and synapses found in human brains, deep learning can look for and identify ‘features’ in data, such as colours or keywords, by propagating the data through layers of artificial neurons.

This allows it to learn which features are relevant to solving specific problems, such as identifying a dog in a picture through pattern recognition.

Deep learning allows for smart machines to essentially teach themselves without much human input to tell them what to look for. It’s effectively leading the way in AI development.
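The layered propagation described above can be sketched in a few lines. This is an illustrative toy, not DeepMind’s code: the layer sizes, random weights, and ReLU activation are all assumptions chosen for brevity, and a real network would learn its weights from data rather than draw them at random.

```python
import random

def relu(x):
    # Non-linearity applied by each artificial neuron.
    return max(0.0, x)

def layer(inputs, weights, biases):
    # Each neuron sums its weighted inputs and applies the
    # non-linearity; its output acts as a learned 'feature'.
    return [relu(sum(w * i for w, i in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def forward(x, net):
    # Propagate the input through successive layers, so features
    # become more abstract at each stage.
    for weights, biases in net:
        x = layer(x, weights, biases)
    return x

random.seed(0)

def rand_layer(n_in, n_out):
    # Random (untrained) weights, purely for illustration.
    return ([[random.uniform(-1, 1) for _ in range(n_in)]
             for _ in range(n_out)],
            [0.0] * n_out)

# Three layers mapping a 4-value input down to a single score.
net = [rand_layer(4, 8), rand_layer(8, 4), rand_layer(4, 1)]
score = forward([0.2, -0.5, 1.0, 0.3], net)
print(len(score))  # a single output value
```

Training would adjust the weights so that the final output moves closer to the desired answer, which is exactly the optimisation process DeepMind describes below.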

DeepMind’s new system builds upon that technique by adding external memory to a deep learning neural network, creating AI models that DeepMind calls differentiable neural computers (DNCs).

By tacking on memory, the system can refer back to previously stored correct answers to Tube queries and to knowledge it has generated. So rather than simply spotting patterns in data unaided, the system can carry out logical reasoning based on the information it has stored in its memory, much like humans do.

This allows it to carry out more complex strategic tasks that require multi-step reasoning rather than a single step, for example planning a route across several Underground stations.
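The route-planning task itself can be stated as a shortest-path search over the station graph. The sketch below uses a classical breadth-first search on a hypothetical fragment of the Tube map (the station connections here are assumptions for illustration, not the real map); the point of the DNC is that it learns to solve this kind of task from examples rather than being handed an algorithm like this one.

```python
from collections import deque

# Hypothetical fragment of the Tube map as an adjacency list.
TUBE = {
    "Oxford Circus": ["Bond Street", "Tottenham Court Road", "Green Park"],
    "Bond Street": ["Oxford Circus", "Green Park"],
    "Tottenham Court Road": ["Oxford Circus", "Holborn"],
    "Green Park": ["Oxford Circus", "Bond Street", "Westminster"],
    "Holborn": ["Tottenham Court Road", "Bank"],
    "Westminster": ["Green Park"],
    "Bank": ["Holborn"],
}

def shortest_route(graph, start, goal):
    """Breadth-first search: the fewest-stops route between two stations."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no route exists

print(shortest_route(TUBE, "Bond Street", "Bank"))
```

Because the graph, not the algorithm, carries the city-specific knowledge, the same reasoning transfers to any other transport network, which mirrors the generalisation DeepMind reports.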

In effect, the system can use its memory to take what it has learnt through looking at the London Underground and apply that logical reasoning to other transport systems in other cities.

Learning like humans

“Differentiable neural computers learn how to use memory and how to produce answers completely from scratch. They learn to do so using the magic of optimisation: when a DNC produces an answer, we compare the answer to a desired correct answer,” the DeepMind team said.

“Over time, the controller learns to produce answers that are closer and closer to the correct answer. In the process, it figures out how to use its memory.

“At the heart of a DNC is a neural network called a controller, which is analogous to the processor in a computer. A controller is responsible for taking input in, reading from and writing to memory, and producing output that can be interpreted as an answer. The memory is a set of locations that can each store a vector of information.”
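The memory DeepMind describes, a set of locations each storing a vector, is accessed “softly”: the controller reads and writes with a weight per slot rather than a single hard address, which is what makes the whole system differentiable and trainable. A minimal sketch, with tiny hand-picked sizes and weights (in a real DNC the controller learns these addressing weights):

```python
# Memory: a set of slots, each holding a vector of information.
N_SLOTS, WIDTH = 4, 3
memory = [[0.0] * WIDTH for _ in range(N_SLOTS)]

def write(memory, weights, vector):
    # Soft write: each slot is updated in proportion to its weight,
    # so the operation is differentiable end to end.
    for i, w in enumerate(weights):
        memory[i] = [m + w * v for m, v in zip(memory[i], vector)]

def read(memory, weights):
    # Soft read: a weighted average over all slots.
    return [sum(w * memory[i][j] for i, w in enumerate(weights))
            for j in range(WIDTH)]

# Store two vectors, one per slot, using one-hot addressing weights.
write(memory, [1.0, 0.0, 0.0, 0.0], [0.5, 0.2, 0.9])
write(memory, [0.0, 1.0, 0.0, 0.0], [0.1, 0.8, 0.3])

# Reading slot 0 recovers the first stored vector.
print(read(memory, [1.0, 0.0, 0.0, 0.0]))  # → [0.5, 0.2, 0.9]
```

With one-hot weights this behaves like ordinary addressed storage; during training the weights are fractional, letting gradient descent discover how the controller should use its memory.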

In effect, the DNC’s memory acts a bit like a physical diary does for a human; as our brains can’t retain infinite amounts of information we often have external records to refer back to in order to answer a query, which is how a DNC uses its external memory.

DeepMind’s work represents another step towards creating AIs that can think like humans. However, we are a long way from having true AI systems, though experts suggest we plan ahead for that eventuality.

In the meantime, DeepMind’s AI work will likely find its way into Google apps and services, such as the AI-powered Google Assistant.
