Microsoft Uses AI Bug Hunter To Detect Security Risks

Microsoft has made generally available a cloud service that uses artificial intelligence (AI) to hunt for bugs in software, with plans in place to also offer a preview version of the tool to Linux users.

Microsoft Security Risk Detection, formerly known as Project Springfield, can be used by developers to search for security vulnerabilities in software they are preparing to use or release.

Detecting weakness

As it has been designed to detect bugs and weaknesses before software is released, it could save companies the trouble of later having to create and release a patch, deal with crashes or respond to an attack.

According to Microsoft researcher David Molnar, who is leading the group delivering the risk detection tool, businesses that conducted such work (called fuzz testing) have traditionally hired security experts to do it.

As companies create more software, it has become increasingly difficult to adequately test it. Molnar believes the risk-detection service can act as an additional helper, augmenting the work developers do by using AI to check for security problems.

He explained: “We use AI to automate the same reasoning process that you or I would use to find a bug, and we scale it out with the power of the cloud.”

The Microsoft Security Risk Detection service essentially uses AI to ask a series of “what if?” questions to try to determine what may represent a security concern or cause a crash. Microsoft says it targets the most critical areas, looking for vulnerabilities that “other tools that don’t take an intelligent approach might miss”.
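At its core, fuzz testing of the kind described above means feeding a program large numbers of generated inputs and watching for crashes. The sketch below illustrates the basic idea with plain random fuzzing; it is not Microsoft's service or its intelligent input-selection approach, and the function names are purely illustrative.

```python
import random
import string

def naive_parse_version(s):
    # Hypothetical function under test: expects a "major.minor" version string.
    major, minor = s.split(".")
    return int(major), int(minor)

def fuzz(target, runs=1000, seed=42):
    """Feed randomly generated strings to `target`, recording inputs that raise."""
    rng = random.Random(seed)
    crashes = []
    alphabet = string.digits + ". abc"
    for _ in range(runs):
        candidate = "".join(rng.choice(alphabet) for _ in range(rng.randint(0, 12)))
        try:
            target(candidate)
        except Exception as exc:
            crashes.append((candidate, type(exc).__name__))
    return crashes

crashes = fuzz(naive_parse_version)
print(f"{len(crashes)} crashing inputs found")
```

Where this brute-force approach tries inputs blindly, the service Microsoft describes reasons about which inputs are most likely to expose a vulnerability, which is what lets it target "the most critical areas" rather than relying on volume alone.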

Molnar said the tool is ideal for companies that build software themselves, modify off-the-shelf software or license open source offerings.

DocuSign, a specialist in electronic document signatures, was part of a small trial of the Windows version of the risk detection tool, which was released in preview in autumn 2016.

John Heasman, senior director of software security at DocuSign, said the tool helped them identify potential bugs they might not have otherwise found. He said: “It also was especially helpful because it almost never returned false positives, which are potential bugs that turn out not to be problematic.”

False positives are a key problem for the industry because it takes so much time to investigate each one and security experts risk missing real bugs because they have so many false ones to sort through. Heasman added: “It’s rare that these solutions have such a low rate of false positives. We used Microsoft Security Risk Detection as an extra step of assurance.”

Microsoft plans to offer the tool for sale in late summer through Microsoft Services.


Duncan MacRae

Duncan MacRae is former editor and now a contributor to TechWeekEurope. He previously edited Computer Business Review's print/digital magazines and CBR Online, as well as Arabian Computer News in the UAE.
