Google I/O 2017: TensorFlow Lite For Android, Accelerating Machine Learning In The Cloud, And Assistant On Apple iOS

Google aims to push its machine learning and Assistant tech deeper and further

Google is going big on AI, aiming to add more smart features into its software and unveiling TensorFlow Lite for Android.

At Google I/O 2017, the search giant’s chief executive, Sundar Pichai, touted the company’s AI doctrine: “We’re moving from mobile-first to AI-first.”

Google started showing off its AI tech with a smart feature that can filter out obstructions in the way of subjects in a photo, for example intelligently removing a fence obstructing the view of a person behind it. 

Google didn’t say where this feature would make its debut, but it is likely to find its way into the Google Photos web and mobile app. 

Assistant everywhere 

Speaking of mobile apps, Google revealed it is bringing the Assistant to iOS, giving iPhone users an alternative to Siri.

Google Assistant is also coming to smartphone cameras through a new feature dubbed Google Lens, likely making its debut on Android handsets, which Google boasted have now reached a total of two billion monthly active devices.

The smart tech will enable users to point their camera at an object, which the Assistant will then analyse, providing information and suggesting actions around it. For example, if pointed at a flower it could tell the user what it is; if pointed at a router, Lens can identify the Wi-Fi network, look up its name and password, and connect to it automatically.

Lens will be rolling out in Google Photos later this year. 

With cameras in mind, Google is also adding a suite of other smart features to its Photos app, with the aim of selecting the best photos in a user’s library, improving the way photos are shared, and creating photo books from AI-powered suggestions.

The Assistant’s role in Google Home is also being beefed up, with the ability to make calls through voice commands. The Assistant will intelligently work out who to call from conversational commands and, if required, identify a person by their voice so that the call appears to come from their personal number.

Chromecast integration also joins the Google Home mix, so that the results of commands to the Assistant can be displayed on a connected TV. 

Finally, users not happy with barking commands at the Assistant on compatible smartphones will soon be able to type their commands by tapping a small keyboard icon in the Assistant interface.

AI acceleration in the cloud 

Google also took the time to reveal its second-generation Tensor Processing Unit (TPU), a hardware and software system for running machine learning in the cloud. The AlphaGo AI was created on Google’s first-generation TPU.

The new TPU is already in action across the Google Compute Engine and within the Google Cloud, but the company is also providing it as a resource for other companies to tap into for powering their own machine learning and AI systems. 

Pichai noted that 64 TPUs can be configured into a “pod”, turning a server rack into a form of supercomputer with 11.5 petaflops of compute power.
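The pod figure is easy to sanity-check. Assuming each second-generation TPU delivers 180 teraflops (a figure from Google’s I/O announcement, not stated in this article), the arithmetic works out as follows:

```python
# Back-of-the-envelope check of the TPU pod figure.
# Assumption: 180 teraflops per second-generation TPU,
# per Google's I/O 2017 announcement (not stated in this article).
TFLOPS_PER_TPU = 180
TPUS_PER_POD = 64

pod_teraflops = TFLOPS_PER_TPU * TPUS_PER_POD   # 11,520 teraflops
pod_petaflops = pod_teraflops / 1000            # 11.52 petaflops

print(f"{pod_petaflops:.1f} petaflops per pod")  # → 11.5 petaflops per pod
```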

The interesting part of the TPUs is that they can carry out both the training and the inference (putting trained models to use) of AIs, which, put simply, should give researchers and companies the scope to test and deploy AI systems faster, provided the machine learning is carried out with Google’s TensorFlow.
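The training/inference distinction can be illustrated with a toy example. This is a hedged sketch of the two phases in pure Python (a one-parameter model fitted by gradient descent), with no relation to TPU hardware or the TensorFlow API:

```python
# Minimal illustration of the two phases the article mentions:
# training (fitting parameters from data) and inference (using them).
# Toy one-parameter model y = w * x; not TPU- or TensorFlow-specific.

def train(data, lr=0.01, epochs=200):
    """Training: adjust w to minimise squared error on (x, y) pairs."""
    w = 0.0
    for _ in range(epochs):
        for x, y in data:
            pred = w * x
            grad = 2 * (pred - y) * x   # d/dw of (w*x - y)^2
            w -= lr * grad
    return w

def infer(w, x):
    """Inference: apply the learned parameter to a new input."""
    return w * x

# Learn y = 3x from a few examples, then use the model on unseen input.
w = train([(1, 3), (2, 6), (3, 9)])
print(round(infer(w, 4), 2))  # ≈ 12.0
```

In practice training is by far the more compute-hungry phase, which is why hardware that handles both on the same chips is notable.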

Google is also committed to keeping its AI tech open source, so it is offering researchers free access to the new TPU power through a programme called the TensorFlow Research Cloud, provided they are willing to publish their research results and potentially share their machine learning code as open source.

TensorFlow Lite for Android 

And finally, Google also lifted the lid on TensorFlow Lite, a take on its machine learning framework optimised for mobile applications and designed to give developers the means to run deep learning algorithms on Android smartphones.

The concept of TensorFlow Lite is to use a new neural network application programming interface (API) to allow machine learning to be accelerated by the processors on a mobile device, rather than relying solely on a cloud connection to powerful server banks.

Essentially, TensorFlow Lite is a means to run optimised, lightweight machine learning algorithms natively on mobile devices, pushing AI tech out to the edge of the network to cut down on lag when carrying out tasks such as image recognition.
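One common way to make a model lightweight enough for a phone is to quantise its weights from 32-bit floats down to 8-bit integers. The sketch below shows the general idea in pure Python; it is an illustration of the technique, not TensorFlow Lite’s actual implementation:

```python
# Sketch of 8-bit weight quantisation, a common trick for shrinking
# models for on-device inference. Illustrative only; not TensorFlow
# Lite's actual scheme.

def quantize(weights):
    """Map float weights to int8 values plus a shared scale factor."""
    scale = max(abs(w) for w in weights) / 127
    return [round(w / scale) for w in weights], scale

def dequantize(q_weights, scale):
    """Recover approximate float weights for inference."""
    return [q * scale for q in q_weights]

weights = [0.82, -1.27, 0.05, 0.33]
q, scale = quantize(weights)    # ints in [-127, 127]: 1 byte each vs 4
approx = dequantize(q, scale)   # close to the originals

print(q)                              # → [82, -127, 5, 33]
print([round(w, 2) for w in approx])  # → [0.82, -1.27, 0.05, 0.33]
```

Storing one byte per weight instead of four shrinks a model roughly 4x, at the cost of a small, usually tolerable, loss of precision.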

With powerful mobile chips like Qualcomm’s Snapdragon 835, TensorFlow Lite looks to give developers more scope to tap into machine learning and smart features within their Android apps, and thus help the spread of AI-based technologies and services beyond the confines of cloud connectivity.
