Microsoft and Cambridge University Design AI That Can Create Its Own Programs

Microsoft is working with Cambridge University researchers to develop an artificial intelligence (AI) system that takes developer directions and turns them into code in a matter of seconds.

Dubbed DeepCoder, the software uses a process called program synthesis to sift through a database of source code fragments from existing software and assemble them to overcome basic programming challenges set by developers.

Working much like a human programmer might, DeepCoder can learn which snippets of code are needed to solve certain requirements set by developers, but with the ability to search code libraries more widely and thoroughly than humans can.
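To give a rough sense of the program synthesis approach described above, the sketch below is a minimal illustration only: it assumes a tiny hypothetical library of list-processing fragments and a brute-force search, not Microsoft and Cambridge's actual DeepCoder system, its domain-specific language or its training method. It simply enumerates short compositions of fragments until one maps the developer's example inputs to the desired outputs.

```python
# Illustrative sketch of enumerative program synthesis. This is NOT the
# DeepCoder system or its DSL, just a toy version of the general idea.
from itertools import product

# A small "library" of reusable code fragments (list -> list functions).
FRAGMENTS = {
    "sort": sorted,
    "reverse": lambda xs: list(reversed(xs)),
    "drop_negatives": lambda xs: [x for x in xs if x >= 0],
    "double": lambda xs: [2 * x for x in xs],
}

def synthesise(examples, max_depth=3):
    """Search compositions of fragments that satisfy all input/output examples."""
    for depth in range(1, max_depth + 1):
        for names in product(FRAGMENTS, repeat=depth):
            def program(xs, names=names):
                for name in names:
                    xs = FRAGMENTS[name](xs)
                return xs
            if all(program(inp) == out for inp, out in examples):
                return names  # the sequence of fragments that solves the task
    return None

# The developer specifies the task by example: double the values, then sort them.
examples = [([3, -1, 2], [-2, 4, 6]), ([5, 0], [0, 10])]
print(synthesise(examples))  # prints ('sort', 'double')
```

According to the paper, DeepCoder's key addition to this kind of search is a neural network that predicts, from the input-output examples, which fragments are likely to be needed, so the search is guided rather than exhaustive.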

While the AI can currently solve only challenges that require a few lines of code, it could in time be capable of producing more complex programs, and could make it easier for developers to build apps without any knowledge of coding.


“We have found several problems in real online programming challenges that can be solved with a program in our language,” Microsoft and Cambridge University’s DeepCoder: Learning To Write Programs paper explained.

The researchers noted the potential for DeepCoder to be extended with further abilities, such as understanding instructions in natural language rather than relying on developer-supplied inputs and outputs.

“DeepCoder represents a promising direction forward, and we are optimistic about the future prospects of using machine learning to synthesise programs,” they said.

“A dream of artificial intelligence is to build systems that can write computer programs.”

Human coders need not start circling the wagons, as DeepCoder has been designed to take care of more mundane programming tasks, freeing up time for human programmers to concentrate on higher-level and more sophisticated coding requirements.

AIs based on deep learning algorithms have already been found to be able to come up with their own encryption keys to prevent data interception, decryption and surveillance by hackers.

Given access to cloud hardware that enables the training of AIs, it is no surprise the technology is being put to use on all manner of tasks, including trials by the Royal Navy to ascertain whether it can issue commands to a fleet and target weapons effectively in a conflict.


