DeepMind Lab can be used to put intelligent agents to the test and train them to become smarter
DeepMind, the artificial intelligence (AI) company owned by Google’s parent company Alphabet, is opening up its maze-style platform game, used to conduct smart-machine experiments, to researchers and the public.
The UK-based AI division is putting the entire source code for its training environment, known as DeepMind Lab and previously as Labyrinth, onto the online open source repository GitHub.
From there, developers, researchers and anyone generally curious about how one of Google’s more secretive divisions has been working will be able to use the code to train their own AI systems and agents.
DeepMind goes open source
“DeepMind Lab has been used internally at DeepMind for some time. We believe it has already had a significant impact on our thinking concerning numerous aspects of intelligence, both natural and artificial,” said DeepMind.
“However, our efforts so far have only barely scratched the surface of what is possible in DeepMind Lab. There are opportunities for significant contributions still to be made in a number of mostly still untouched research domains now available through DeepMind Lab, such as navigation, memory and exploration.”
DeepMind has made significant strides with its AI technology, having developed AlphaGo, an AI system that can beat grandmasters of the extremely complex Chinese board game Go.
Its technology has also used deep learning neural networks, effectively simplified artificial simulations of how human neurons and synapses process data, to create a machine learning system that can create and decode its own snooper-proof encryption.
Releasing DeepMind Lab could set into motion a load more AI agents developed using some of the foundations DeepMind has laid, thereby propagating the spread of smart software in both the enterprise and consumer world beyond the likes of the Google Assistant that comes loaded in the Pixel XL smartphone.
Of course, this spread of AI raises questions about how smart software influences society, and about the role government needs to play in regulating and reacting to the influence it could have.
Science and technology luminaries such as Stephen Hawking have also warned of the impact AI could have, with the theoretical physicist noting it could be either the best or the worst thing to happen to humanity.