Watson to Enable In-Game Voice Interaction in Star Trek VR Game

Players in the new “Star Trek: Bridge Crew” virtual reality game will benefit from in-game voice interaction powered by IBM Watson’s VR Speech Sandbox

IBM’s Watson first made headlines in 2011 when it competed on the “Jeopardy!” TV game show, and now it is boldly going to the final frontier with the new Sony PlayStation virtual reality game “Star Trek: Bridge Crew.”

IBM and game developer Ubisoft on May 11 announced that Watson VR Speech Sandbox will enable players to communicate in the new virtual reality game via voice.

Voice interaction is a core element of the fictional Star Trek universe: both Captain Kirk and Captain Picard could query the starship Enterprise’s artificial intelligence about operations and status.

In Star Trek shows and movies, the voice of the computer has long been that of Majel Barrett-Roddenberry, but that’s not the voice that IBM Watson will bring to “Star Trek: Bridge Crew.”

“Watson speaks in the voices of the Star Trek characters that the wizards at Ubisoft have brought to life in the game,” Joel Horwitz, vice president of the Digital Business Group at IBM, told eWEEK. “Watson isn’t a branded ‘Watson Avatar’ or ‘Computer’ character in the game, but rather it’s a technology underpinning the voice interactions between players and NPCs [non-player characters].”

IBM has been steadily bringing its Watson cognitive computing technology to the enterprise computing world, but with the new collaboration with Ubisoft, the company is now targeting a new type of enterprise. Bringing Watson to the gaming world grew out of a combination of hardware trends and a chance conference meeting.

“A little over a year ago, before the launch of the HTC Vive and Oculus Rift, we noticed that every VR headset had a microphone,” Horwitz said. “In many cases, you could walk around, touch and hear but not speak and be understood in-game.” 

IBM realized it had two services that, combined the right way, could very quickly create advanced voice interaction systems: Watson Speech-to-Text and Watson Conversation, the chatbot training GUI and service. A third piece, the Watson Unity SDK, makes it easy to call both services from within the Unity game development engine.
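For a sense of how those two services chain together, here is a minimal Python sketch of the pipeline, assuming the 2017-era Watson REST endpoints and username/password authentication; the credentials, workspace ID and audio file are placeholders, and a production game would call the services through the Watson Unity SDK rather than raw HTTP.

```python
# Minimal sketch of the two-service pipeline: transcribe a voice clip with
# Watson Speech-to-Text, then hand the transcript to Watson Conversation.
# Endpoints reflect the 2017-era Watson Developer Cloud; credentials,
# WORKSPACE_ID, and the audio file are placeholders.
import requests

STT_URL = "https://stream.watsonplatform.net/speech-to-text/api/v1/recognize"
CONV_URL = ("https://gateway.watsonplatform.net/conversation/api/v1/"
            "workspaces/{workspace_id}/message?version=2017-05-26")
STT_AUTH = ("stt-username", "stt-password")      # placeholder credentials
CONV_AUTH = ("conv-username", "conv-password")   # placeholder credentials
WORKSPACE_ID = "your-conversation-workspace-id"  # placeholder

def transcribe(wav_path):
    """Send a WAV clip to Speech-to-Text and return the top transcript."""
    with open(wav_path, "rb") as audio:
        resp = requests.post(STT_URL, auth=STT_AUTH, data=audio,
                             headers={"Content-Type": "audio/wav"})
    resp.raise_for_status()
    results = resp.json()["results"]
    return results[0]["alternatives"][0]["transcript"] if results else ""

def interpret(text):
    """Send a transcript to Watson Conversation and return the intents found."""
    resp = requests.post(CONV_URL.format(workspace_id=WORKSPACE_ID),
                         auth=CONV_AUTH, json={"input": {"text": text}})
    resp.raise_for_status()
    return resp.json()["intents"]

if __name__ == "__main__":
    command = transcribe("player_command.wav")  # e.g. "set course for Vulcan"
    print(command, "->", interpret(command))
```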

“So we hacked together an early alpha version of what eventually became the VR Speech Sandbox and showcased it in the HTC VR booth at a huge VR conference called VRLA,” Horwitz said. “Ubisoft was also there showing off ‘Star Trek: Bridge Crew’ in VR, and long story short, we started up a conversation that today means launching this as an in-game feature, allowing for play with a virtual Star Trek crewmate alongside human buddies.”

VR AI

Watson isn’t literally being installed on a game disc or download. The interactive speech functions make live API calls to the cloud-based Watson Speech-to-Text and Watson Conversation services. However, having an in-game feature call out to a cloud service introduces the possibility of latency, which no game developer wants.

Horwitz explained that to get latency low enough to support real-time conversation, “Star Trek: Bridge Crew” uses an IBM streaming feature that maintains a constant narrowband voice stream from the player to the service, transcribing what it hears in context as it arrives.
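Watson Speech-to-Text exposes a WebSocket interface for exactly this kind of continuous streaming. The sketch below shows the general shape of such a client, assuming the 2017-era endpoint and token-based authentication; the token, sample rate and audio source are placeholders, and real code would read and write the socket concurrently rather than sequentially as shown here.

```python
# Rough sketch of the streaming approach: one persistent WebSocket carries
# audio to Speech-to-Text, which returns interim transcripts as the player
# talks. Uses the websocket-client package; the auth token, sample rate,
# and audio source are placeholders for illustration.
import json
import websocket  # pip install websocket-client

STT_WS_URL = ("wss://stream.watsonplatform.net/speech-to-text/api"
              "/v1/recognize?watson-token=YOUR-AUTH-TOKEN")  # placeholder

def stream_audio(audio_chunks):
    """Stream raw 16 kHz PCM chunks and print transcripts as they arrive."""
    ws = websocket.create_connection(STT_WS_URL)
    # Tell the service what is coming and ask for interim (in-flight) results.
    ws.send(json.dumps({
        "action": "start",
        "content-type": "audio/l16;rate=16000",
        "interim_results": True,
    }))
    for chunk in audio_chunks:               # e.g. frames read from the VR mic
        ws.send_binary(chunk)
    ws.send(json.dumps({"action": "stop"}))  # flush the final transcript
    while True:
        message = json.loads(ws.recv())
        if "results" not in message:         # skip "state" status messages
            continue
        for result in message["results"]:
            print(result["alternatives"][0]["transcript"],
                  "(final)" if result.get("final") else "(interim)")
        if message["results"] and message["results"][-1].get("final"):
            break
    ws.close()
```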

“Then a server-side script in a database passes the transcribed text in JSON form to the Watson Conversation service to parse the meaning and intent of what someone says,” he said. “In this way, we are able to be very efficient and eliminate the need to make multiple API calls from the local application, and this greatly reduces latency.”
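In outline, that server-side hop could look something like the following hypothetical handler, which accepts one transcript from the game, makes the single call to Watson Conversation, and returns only the parsed intent; the route, credentials and intent names here are invented for illustration, not the game’s actual implementation.

```python
# Illustrative server-side hop: the game posts a transcript once, the server
# forwards it to Watson Conversation, and returns just the parsed intent.
# A hypothetical sketch using Flask; the workspace ID, credentials, and
# intent names are placeholders.
import requests
from flask import Flask, jsonify, request

app = Flask(__name__)
CONV_URL = ("https://gateway.watsonplatform.net/conversation/api/v1/"
            "workspaces/your-workspace-id/message?version=2017-05-26")
CONV_AUTH = ("conv-username", "conv-password")  # placeholder credentials

@app.route("/interpret", methods=["POST"])
def interpret():
    transcript = request.get_json()["transcript"]
    # One server-to-Watson call per utterance, rather than several
    # round trips from the game client itself.
    resp = requests.post(CONV_URL, auth=CONV_AUTH,
                         json={"input": {"text": transcript}})
    resp.raise_for_status()
    intents = resp.json()["intents"]  # ranked by confidence
    top = intents[0] if intents else {"intent": "unknown", "confidence": 0.0}
    # e.g. {"intent": "set_course", "confidence": 0.93} for "set course for Vulcan"
    return jsonify(top)

if __name__ == "__main__":
    app.run(port=5000)
```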

Video game developers are not the only ones who can benefit from Watson’s speech capabilities: the VR Speech Sandbox is free and open source.

“All you need is to add your credentials for the Watson Speech-to-Text and Watson Conversation services, both of which have a free tier and offer free API calls each month, so it should be free to prototype with the VR Speech Sandbox,” Horwitz said. “If a developer needs more API calls because they have the happy problem of having a very popular application, then they can switch to a pay-as-you-go model.”

Originally published on eWEEK