
Google Launches Betas Of New Machine Learning APIs

Looking to sell customers better tools for extracting value from large sets of unstructured data, Google has released beta versions of two new machine learning APIs for its Google Cloud Platform.

The tools, Cloud Natural Language API and Cloud Speech API, are designed for digging into gargantuan text and audio files and pulling out information on specified topics such as people, locations, dates and events.

This means organisations can carry out large-scale analyses of text and audio to produce fine-grained information about customers or users.
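As a rough sketch of what that entity extraction looks like in practice, the example below sends a short piece of text to the Natural Language API's REST endpoint and prints the people, places and other entities it finds. The API key, sample text and endpoint version are assumptions based on Google's public documentation, not details from the article.

```python
# Sketch: entity extraction with the Cloud Natural Language REST API.
# API_KEY is a placeholder; endpoint and field names follow Google's public
# REST documentation and may differ slightly from the beta described here.
import requests

API_KEY = "YOUR_API_KEY"
URL = f"https://language.googleapis.com/v1/documents:analyzeEntities?key={API_KEY}"

body = {
    "document": {
        "type": "PLAIN_TEXT",
        "content": "Google launched the new APIs for Cloud Platform customers in July.",
    },
    "encodingType": "UTF8",
}

resp = requests.post(URL, json=body)
resp.raise_for_status()
for entity in resp.json().get("entities", []):
    # Each entity carries a type (PERSON, LOCATION, ORGANIZATION, EVENT, ...)
    # and a salience score indicating how central it is to the text.
    print(entity["name"], entity["type"], entity["salience"])
```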

“You can use it to understand sentiment about your product on social media or parse intent from customer conversations happening in a call centre or a messaging app,” explained Google on the Cloud Natural Language API product page.

“You can analyse text uploaded in your request or integrate with your document storage on Google Cloud Storage.”
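The sentiment use case Google describes maps onto a similarly small request. The sketch below scores a single customer comment; as above, the API key and endpoint version are assumptions, and the gcsContentUri comment reflects the documented way to point the API at a file in Cloud Storage instead of inline text.

```python
# Sketch: sentiment analysis with the Cloud Natural Language REST API.
# API_KEY is a placeholder; field names follow Google's public REST docs.
import requests

API_KEY = "YOUR_API_KEY"
URL = f"https://language.googleapis.com/v1/documents:analyzeSentiment?key={API_KEY}"

body = {
    "document": {
        "type": "PLAIN_TEXT",
        "content": "The delivery slot was easy to book and the driver was friendly.",
        # To analyse a document already held in Google Cloud Storage, replace
        # "content" with "gcsContentUri": "gs://your-bucket/your-file.txt".
    },
    "encodingType": "UTF8",
}

resp = requests.post(URL, json=body)
resp.raise_for_status()
sentiment = resp.json()["documentSentiment"]
# score runs from -1.0 (negative) to +1.0 (positive); magnitude reflects
# the overall strength of emotion in the document.
print(sentiment["score"], sentiment["magnitude"])
```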

British online supermarket and technology company Ocado said it is already using the Natural Language API, and that it is a viable alternative to its own machine learning language analyser.

“NL API has shown it can accelerate our offering in the natural language understanding area and is a viable alternative to a custom model we had built for our initial use case,” said Ocado’s head of data Dan Nelson.

Speech

Google Cloud Speech API lets developers convert audio to text by applying neural network models in an API. Google said that the API recognises over 80 languages and variants.

“You can transcribe the text of users dictating to an application’s microphone, enable command-and-control through voice, or transcribe audio files, among many other use cases,” said Google.

“Enterprises and developers now have access to speech-to-text conversion in over 80 languages, for both apps and IoT devices. Cloud Speech API uses the voice recognition technology that has been powering your favorite products such as Google Search and Google Now.”
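To give a sense of what a speech-to-text call involves, the sketch below transcribes a short local recording via the REST endpoint. The API key, the dictation.wav file and the audio format are placeholders, and the endpoint version and field names follow Google's current public documentation rather than the beta described in the article.

```python
# Sketch: synchronous speech-to-text with the Cloud Speech REST API.
# API_KEY and dictation.wav are placeholders; 16 kHz 16-bit linear PCM audio
# is assumed, and field names follow Google's public REST documentation.
import base64
import requests

API_KEY = "YOUR_API_KEY"
URL = f"https://speech.googleapis.com/v1/speech:recognize?key={API_KEY}"

with open("dictation.wav", "rb") as f:
    audio_content = base64.b64encode(f.read()).decode("utf-8")

body = {
    "config": {
        "encoding": "LINEAR16",
        "sampleRateHertz": 16000,
        "languageCode": "en-GB",  # one of the 80+ supported languages and variants
    },
    "audio": {"content": audio_content},
}

resp = requests.post(URL, json=body)
resp.raise_for_status()
for result in resp.json().get("results", []):
    # The first alternative is the most likely transcript.
    print(result["alternatives"][0]["transcript"])
```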

More than 5,000 companies signed up for Google’s Speech API alpha, including video chat app maker HyperConnect, which uses the Cloud Speech and Translate APIs to transcribe and translate conversations between people who speak different languages.

The Speech API also supports word hints, meaning context-specific custom words or phrases can be added to API calls to improve recognition accuracy. An example might be a smart TV listening for ‘rewind’ and ‘fast-forward’.
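In request terms, word hints are just an extra field in the recognition config. The snippet below shows what that might look like for the smart TV example, under the same assumptions as the previous sketch; in the current public API the field is called speechContexts.

```python
# Sketch: biasing recognition towards expected commands with word hints.
# The phrases are illustrative; field names follow the current public API
# and may have differed in the beta.
config = {
    "encoding": "LINEAR16",
    "sampleRateHertz": 16000,
    "languageCode": "en-GB",
    "speechContexts": [
        {"phrases": ["rewind", "fast-forward"]}  # hint likely remote-control commands
    ],
}
```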


Ben Sullivan

Ben covers web and technology giants such as Google, Amazon, and Microsoft and their impact on the cloud computing industry, whilst also writing about data centre players and their increasing importance in Europe. He also covers future technologies such as drones, aerospace, science, and the effect of technology on the environment.
