Microsoft has said it is now using machine learning and artificial intelligence techniques across the company, including in Windows Phone and Bing
Microsoft is using machine learning across its products, and “deep learning” techniques are finding their way into more and more Microsoft technologies, including Windows Phone, according to the head of the company’s machine learning department.
Speaking at the GigaOm Structure Data conference in New York City, John Platt, Microsoft distinguished scientist and manager of the machine learning department at Microsoft Research, said the industry is getting closer to delivering on one of the old dreams of Microsoft co-founder Bill Gates: computers that can see, hear and understand.
AI used across Microsoft
Platt said machine learning is big not only in Microsoft Research: “Machine learning is pretty much pervasive throughout all Microsoft products. So whenever you use a Microsoft product, you’re using a system that’s been generated from machine learning.”
Moreover, whenever you use the search engine Bing, you’re using many components that have been trained with machine learning, he said.
“Large amounts of that system are done by machine learning, because that’s how you can do scale,” Platt said. “The only way you can answer the billions of questions Bing answers is to have something that operates autonomously. In Xbox, the Kinect was also trained with machine learning. The fact that it can see you in the room even in poor lighting, and can track you as you wave your arms: that’s all done with a piece of software that was trained with machine learning.”
In addition, Microsoft is using machine learning in security. The company arms its malware analysts with machine learning-driven technology, both to give them “superpowers” that make them far more effective at searching through large amounts of data and to help autonomously identify malware authors, Platt said.
As for deep learning surfacing in Microsoft products, “If you use the speech recognition on the Windows Phone or if you do it in Windows 8, that’s totally trained with deep learning,” Platt said. “And it’s starting to make its way into the general search products, too.”
Approaching human performance levels
However, despite advances in artificial intelligence (AI) technology, Platt said he does not yet see it being viable for “safety-critical” applications. “But we are starting to see some of these AI systems, in certain restricted sets, actually approach human levels of performance.”
Platt said he looks at AI in three different silos: business intelligence, machine learning, and classic AI or deep learning.
“What I tend to work on is the machine learning, or data mining is what I call it for this particular subset, which is using data to actually create software,” Platt said. “That’s how we create a lot of software at Microsoft. So instead of following a spec, what you do is you gather a data set and you specify the goals of the software on that data set and at the end you get a piece of software that you can ship that was trained on the data set. That’s sort of classical machine learning.”
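Platt’s description of “using data to actually create software” can be illustrated with a deliberately tiny sketch. The data, task and function names below are hypothetical and stand in for what would, in practice, be a large labeled data set and a far richer model: instead of writing a classification rule by hand, we let the data pick it.

```python
# A minimal sketch of "data as spec": rather than hand-coding a rule,
# fit a one-dimensional threshold classifier from labeled examples.
# Data and task here are illustrative, not from any Microsoft system.

def train_threshold(examples):
    """Pick the threshold that best separates label 0 from label 1."""
    best_t, best_acc = None, -1.0
    for t in sorted(x for x, _ in examples):
        acc = sum((x >= t) == bool(y) for x, y in examples) / len(examples)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

# "Gather a data set and specify the goals of the software on that data set"
data = [(0.1, 0), (0.3, 0), (0.4, 0), (0.6, 1), (0.8, 1), (0.9, 1)]
threshold = train_threshold(data)

# The trained artifact is the shippable piece of software:
def classify(x):
    return int(x >= threshold)
```

The point of the sketch is that `classify` was never written against a spec; its behavior was induced from the data set, which is the essence of the classical machine learning Platt describes.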
The recent breakthroughs in deep learning techniques have led to solving classical, hard AI problems that involve emulating much of what people can do: things like vision, speech and reading. “That’s really broken open recently, and that’s very exciting,” Platt said.
Although the techniques go back to the early ’90s, the problems were not solved because computers were very slow then. “And there was not much data. And a lot of these techniques about neural networks were put on the shelf because they were just too slow or not effective enough,” Platt said. “But people have revisited them in the last two to three years. And it’s possible now because we have much more compute, particularly with parallel computing. And we have much more data that we’ve gathered, and also many more labels. And now that we have all these ingredients, we’re getting these spectacular breakthroughs.”
Artificial intelligence APIs
Danny Sabbah, CTO and general manager of Next Generation Platform at IBM, made much the same point to eWEEK. Indeed, Sabbah said IBM had the wherewithal to produce its Watson deep learning cognitive computing system more than a decade ago, but the computer technology was not readily available to make it feasible.
Platt said he believes the trend of making AI consumable through APIs—which IBM is doing by opening up Watson to developers—is an important one because “machine learning is tricky. Not all developers can use machine learning,” he said. “They have to learn a little bit before they can start using it effectively. There are many libraries you can use to write machine learning code. There are even a few deep learning libraries developers can use. But the deep learning itself is difficult. Some people call it black magic to try to get it to work.”
So, he added, many developers will not use deep learning directly but will instead consume Web services built on top of it. Companies outside IT will be able to build line-of-business apps that call into such services: for example, taking a speech recognition API based on deep learning, wrapping their own business code around it and adding the vocabulary of their business or industry.
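The wrapping Platt describes can be sketched in a few lines. Everything here is hypothetical: `call_speech_api` stands in for an HTTP call to some real deep-learning speech service and simply returns a canned transcript, while the post-processing step snaps near-miss words to a business-specific vocabulary.

```python
# Hypothetical sketch: wrap a generic speech-recognition Web service
# with a domain vocabulary. No real service or API is referenced.
import difflib

def call_speech_api(audio_bytes):
    # Placeholder for a call to a deep-learning speech service;
    # returns a canned transcript with a misrecognized term.
    return "please order more stents and cathaters"

# Business-specific vocabulary, e.g. for a medical-supplies firm.
DOMAIN_VOCAB = ["stents", "catheters", "sutures", "scalpels"]

def apply_domain_vocab(transcript, vocab, cutoff=0.8):
    """Snap near-miss words to the closest term in the vocabulary."""
    corrected = []
    for word in transcript.split():
        match = difflib.get_close_matches(word, vocab, n=1, cutoff=cutoff)
        corrected.append(match[0] if match else word)
    return " ".join(corrected)

transcript = apply_domain_vocab(call_speech_api(b""), DOMAIN_VOCAB)
```

The design point is the separation of concerns Platt highlights: the hard deep learning lives behind the service boundary, and the business only writes the thin, domain-aware layer on top.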
“So anything where you say, ‘I want my computers to see, hear and understand in a way that might be specialized to my business,’ then you can imagine calling into an API,” Platt said. However, “A lot of this is future,” he added.
Originally published on eWeek.