Siri Assistant puts Apple ahead of Google on voice recognition, but Clint Boulton asks if people will get serious about Siri
Apple seems to have shot past Google with the iPhone 4S in one respect: Siri Assistant, the artificially intelligent virtual assistant that lets users schedule meetings, make calls and book restaurant tables.
Since August 2010, Google has offered similar speech recognition software technology for Android phones, called Google Voice Actions. Like Siri, Voice Actions let users call businesses and contacts, send texts and e-mail, listen to music, and browse the Web by speaking into their phone.
Looking at the context
But Siri goes beyond simple task execution and completion. It lends context to certain actions. Ask it the weather and it will retrieve the info using an iPhone 4S user’s current location. Or cut to the chase and ask Siri whether you need an umbrella, and it will pull local weather info to determine whether it’s going to rain.
Siri also taps into Wolfram Alpha, allowing users to access info like how many calories are in a bagel; recognise relationships within a user’s phone contact listings; and schedule meetings with the phone’s calendar app. If a user asks Siri to set up a meeting for a time, Siri will tell the user if that time slot is taken.
The Unofficial Apple Weblog has a nice big list of things you can do with Siri. The applicability is broad, if not staggering. Google Voice Actions as it stands today pales in comparison.
Google is concerned enough about Siri’s potential that it has shifted a key speech recognition engineer, Dave Burke, from the UK to join the Android team at Google’s headquarters in Mountain View, California, according to the Guardian. Burke developed Google’s mobile voice search app, among other tools.
With Burke and Mike Cohen, Google’s director of speech technology, who founded Nuance Communications (Nuance combined voice recognition with T9 predictive text), Google has more than enough engineering firepower to take up the gauntlet Apple has thrown down with Siri.
That is, commingling speech recognition with context for more intelligent information transactions.
The presumption is that because Siri is purportedly ahead of the rest of the class, and because Apple is now nurturing it, it will usher artificial intelligence into computing’s mainstream.
Foundations are key, but there is no guarantee consumers will help with the rest of the building. This is not a Field of Dreams, “if you build it, they will come” scenario. Just look at mobile payments enabled by near field communications (NFC) technology, which have been slow to catch on. That kind of adoption curve is reflected in analysts’ cautious optimism.
Forrester Research analyst Frank Gillett said Siri promises the “beginning of a new user experience built around context that will eventually create a much more personal, intimate experience for using all of Apple’s mobile and Mac products.”
Gillett’s colleague, Forrester analyst Charles Golvin, was a bit more pessimistic in his comment:
Apple’s new Siri Assistant, unique to the new 4S, is a powerful harbinger of the future use of mobile devices — not just the power of voice but, more importantly, the ability to contextualise a statement or request. However, Forrester believes that consumers will be much slower to adopt this new interface than they were to adopt the revolutionary touchscreen of Apple’s first iPhone.
If everyone who purchased an iPhone 4S used Siri for most of the interactions it was intended for, we would have a cacophony of queries uttered in homes, streets and offices.
“Who are you talking to on the phone?” grandma asks. “Siri!” you shout back. “Who’s she?” grandma wonders. You get the idea. This is no surefire solution; it will take a lot of getting used to at a time when people still type on their phones far more than they speak into them for anything but voice calls.