Google’s DeepMind artificial intelligence (AI) division has created a system that uses deep learning to navigate the London Underground.
While there are a plethora of mobile apps and online services that can aid someone’s navigation of the Tube, the way DeepMind’s system works is the significant aspect.
Using an artificial neural network, inspired by the biological networks of neurons and synapses found in human brains, deep learning can look for and identify ‘features’ in data, such as colours or keywords, by passing data through layers of artificial neurons.
This allows it to learn which features are relevant to solving specific problems, such as identifying a dog in a picture through pattern recognition.
Deep learning allows smart machines to essentially teach themselves, without much human input telling them what to look for, and it is effectively leading the way in AI development.
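To make the layered idea concrete, the sketch below shows a tiny feed-forward network passing an input through successive layers of artificial neurons. It is a minimal, hypothetical illustration of the general technique, not DeepMind’s model; the layer sizes, weights and the “dog” example are made up.

```python
import numpy as np

# Minimal sketch of a feed-forward neural network: data is passed through
# layers of artificial "neurons" (weighted sums plus a non-linearity), and
# the learned weights determine which features of the input matter.
# Illustrative only; the sizes and values here are arbitrary.

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Two hidden layers followed by an output layer with two classes
# (e.g. "dog" vs "not dog" in the picture-recognition example above).
W1, b1 = rng.normal(size=(64, 32)), np.zeros(32)
W2, b2 = rng.normal(size=(32, 16)), np.zeros(16)
W3, b3 = rng.normal(size=(16, 2)),  np.zeros(2)

def forward(pixels):
    h1 = relu(pixels @ W1 + b1)   # first layer picks out low-level features
    h2 = relu(h1 @ W2 + b2)       # next layer combines them into higher-level ones
    return softmax(h2 @ W3 + b3)  # output: a probability for each class

image = rng.normal(size=64)       # stand-in for a flattened image
print(forward(image))             # class probabilities (untrained, so arbitrary)
```

During training, the weights are adjusted so that the features the layers respond to become the ones that actually help solve the task.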
DeepMind’s new system builds upon that technique by adding external memory to a deep learning neural network, creating AI models that DeepMind calls differentiable neural computers (DNCs).
By tacking on memory, the system can refer back to previously stored correct answers to Tube queries and to knowledge it has generated. So rather than simply spotting patterns in data unaided, the system can carry out logical reasoning based on the information held in its memory, much like humans do.
This allows it to carry out more complex, strategic tasks that require multi-step reasoning rather than a single step, for example planning a route across several Underground stations.
In effect, the system can use its memory to take what it has learnt through looking at the London Underground and apply that logical reasoning to other transport systems in other cities.
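To give a concrete sense of the kind of task involved, the sketch below finds a route across a handful of stations using a standard shortest-path search. The station graph is a tiny, simplified excerpt chosen for illustration, and this is not how the DNC itself computes routes; it only shows what “planning a route across several stations” means as a problem.

```python
from collections import deque

# A toy excerpt of the Underground as an adjacency list
# (a small, simplified subset chosen purely for illustration).
TUBE = {
    "Victoria":      ["Green Park"],
    "Green Park":    ["Victoria", "Oxford Circus", "Westminster"],
    "Oxford Circus": ["Green Park", "Bond Street"],
    "Westminster":   ["Green Park", "Waterloo"],
    "Bond Street":   ["Oxford Circus"],
    "Waterloo":      ["Westminster"],
}

def plan_route(start, goal):
    """Breadth-first search: returns a shortest route as a list of stations."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in TUBE.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no route found

print(plan_route("Victoria", "Waterloo"))
# ['Victoria', 'Green Park', 'Westminster', 'Waterloo']
```

The interesting part of DeepMind’s result is that the DNC learns to answer such queries from examples and its own memory, rather than being given a hand-written search procedure like this one.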
As DeepMind explained: “At the heart of a DNC is a neural network called a controller, which is analogous to the processor in a computer. A controller is responsible for taking input in, reading from and writing to memory, and producing output that can be interpreted as an answer. The memory is a set of locations that can each store a vector of information.
“Over time, the controller learns to produce answers that are closer and closer to the correct answer. In the process, it figures out how to use its memory.”
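As a rough illustration of “a set of locations that can each store a vector of information”, the following sketch performs a content-based read over a small memory matrix: the read is a soft blend of locations, weighted by how well each one matches a query key. This is a conceptual toy, not DeepMind’s DNC; a real controller is a trained neural network with differentiable read and write heads, and all the numbers and names here are invented.

```python
import numpy as np

# Conceptual sketch: memory as a set of locations that each store a vector,
# plus a content-based read. The key is fixed here and the values are made up,
# purely to show the lookup mechanism.

MEMORY = np.array([
    [1.0, 0.0, 0.0],   # location 0: e.g. one stored fact
    [0.0, 1.0, 0.0],   # location 1
    [0.0, 0.0, 1.0],   # location 2
])

def cosine_similarity(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

def content_read(memory, key, sharpness=10.0):
    """Soft lookup: weight every location by how well it matches the key."""
    scores = np.array([cosine_similarity(row, key) for row in memory])
    weights = np.exp(sharpness * scores)
    weights /= weights.sum()
    return weights @ memory   # weighted blend of the stored vectors

key = np.array([0.9, 0.1, 0.0])   # what the controller is "looking for"
print(content_read(MEMORY, key))  # mostly returns location 0's contents
```

Because the read is a weighted blend rather than a hard index, the whole operation stays differentiable, which is what lets the controller learn how to use its memory through training.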
In effect, the DNC’s memory acts a bit like a physical diary does for a human: our brains can’t retain infinite amounts of information, so we often keep external records to refer back to in order to answer a query, and a DNC uses its external memory in much the same way.
DeepMind’s work represents another step towards creating AIs that can think like humans. However, we are a long way from having true AI systems, though experts suggest we plan ahead for that eventuality.
In the meantime, DeepMind’s AI work will likely find its way into Google apps and services, such as the AI-powered Google Assistant.