Alphabet’s DeepMind Releases AI Training Game To Researchers, Developers And The Public

DeepMind, the artificial intelligence (AI) company owned by Google’s parent company Alphabet, is opening up its maze-style platform game, used to conduct smart machine experiments, to researchers and the public.

The UK-based AI division is putting the entire source code for its training environment, known as DeepMind Lab and previously as Labyrinth, onto the online open source repository GitHub.

From there, developers, researchers and anyone generally curious about how one of Google’s more secretive divisions has been working will be able to use the code to train their own AI systems and agents.
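For those who do dig into the repository, the environment exposes a small Python API. The sketch below is based on the examples DeepMind published alongside the release; the level name, observation choice and frame size are illustrative rather than prescriptive, and the module assumes DeepMind Lab has been built with Bazel per the repository’s instructions rather than installed as a standalone package.

```python
import numpy as np
import deepmind_lab  # available once the DeepMind Lab repo has been built per its docs

# Create an environment for one of the bundled levels and request
# interlaced RGB frames as the agent's observation stream.
env = deepmind_lab.Lab('seekavoid_arena_01', ['RGB_INTERLACED'],
                       config={'width': '320', 'height': '240'})
env.reset()

# Actions are a 7-element integer vector (look, strafe, move, fire, jump, crouch);
# an all-zeros vector simply stands still.
action = np.zeros([7], dtype=np.intc)

# Advance the environment a few frames, collecting the reward,
# then read back the current observation as NumPy arrays.
reward = env.step(action, num_steps=4)
obs = env.observations()
frame = obs['RGB_INTERLACED']  # shape (height, width, 3)
print('reward:', reward, 'frame shape:', frame.shape)
```

A training loop would replace the all-zeros action with one chosen by the agent being trained, using the reward signal returned by each step.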

DeepMind goes open source

“DeepMind Lab has been used internally at DeepMind for some time. We believe it has already had a significant impact on our thinking concerning numerous aspects of intelligence, both natural and artificial,” said DeepMind.

“However, our efforts so far have only barely scratched the surface of what is possible in DeepMind Lab. There are opportunities for significant contributions still to be made in a number of mostly still untouched research domains now available through DeepMind Lab, such as navigation, memory and exploration.”

DeepMind has taken significant strides with its AI technology, having developed AlphaGo, an AI system that can beat grandmasters of the extremely complex Chinese board game Go.

The company has also used deep learning neural networks, effectively simplified artificial simulations of how human neurons and synapses process data, to create a machine learning system that can create and decode its own snooper-proof encryption.

Releasing DeepMind Lab could set into motion the development of many more AI agents built on the foundations DeepMind has laid, spreading smart software through both the enterprise and consumer worlds beyond the likes of the Google Assistant that comes loaded on the Pixel XL smartphone.

Of course, this spread of AI raises questions about how smart software is influencing society and the role government needs to play in regulating and responding to the influence it could have.

Science and technology luminaries such as Stephen Hawking have also warned of the impact AI could have, with the theoretical physicist noting it could be either the best or the worst thing to happen to humanity.


Roland Moore-Colyer

