Google’s AI Neural Networks Learn To Create Their Own Encryption

Two neural nets were created to communicate securely while a third tried to crack their machine-made cipher messages

Google has created an artificial intelligence (AI) system that can devise its own cyber security encryption without relying on human-made algorithms.

A research paper published on arXiv, the repository hosted by Cornell University Library, details how Google used a trio of artificial neural networks to devise a system of AI-powered automated encryption and decryption.

AI meets cyber security

In the most basic terms, Google started with two neural networks, called Alice and Bob. Neural networks use a process called deep learning to filter data through a series of computational nodes, an artificial take on the data processing carried out by the human brain.

The pair of neural networks was set up so that Alice sent encrypted messages to Bob, who had the job of decrypting them, while a third neural net called Eve tried to eavesdrop on the chatter between the two.

“A system may consist of neural networks named Alice and Bob, and we aim to limit what a third neural network named Eve learns from eavesdropping on the communication between Alice and Bob,” Google researchers Martín Abadi and David Andersen explained in the paper.

“We do not prescribe specific cryptographic algorithms to these neural networks; instead, we train end-to-end, adversarially. We demonstrate that the neural networks can learn how to perform forms of encryption and decryption, and also how to apply these operations selectively in order to meet confidentiality goals.”
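
In code, the setup looks roughly like the sketch below. This is an illustration only, written in PyTorch with small fully connected networks; the paper's actual networks combine a fully connected layer with 1-D convolutions, and the layer sizes here are assumptions:

```python
import torch
import torch.nn as nn

N = 16  # plaintext, key and ciphertext are all 16-bit vectors, encoded as ±1

def net(in_dim):
    # Simple fully connected stand-in; the paper's networks mix a fully
    # connected layer with 1-D convolutions, so this shape is an assumption.
    return nn.Sequential(
        nn.Linear(in_dim, 2 * N), nn.ReLU(),
        nn.Linear(2 * N, N), nn.Tanh(),  # tanh keeps each output bit in (-1, 1)
    )

alice = net(2 * N)  # sees plaintext + key, emits ciphertext
bob   = net(2 * N)  # sees ciphertext + key, emits recovered plaintext
eve   = net(N)      # sees only the ciphertext

def random_bits(batch):
    return torch.randint(0, 2, (batch, N)).float() * 2 - 1  # random ±1 bits

p, k  = random_bits(512), random_bits(512)   # plaintexts and shared keys
c     = alice(torch.cat([p, k], dim=1))      # Alice encrypts with the key
p_bob = bob(torch.cat([c, k], dim=1))        # Bob decrypts with the same key
p_eve = eve(c)                               # Eve guesses without the key
```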

Both Alice and Bob started off with a pre-assigned encryption key but had to create their own algorithms for using it. Alice took plain text, turned it into cipher text and sent it to Bob as a 16-bit binary message. Bob then used the shared encryption key to try to work out the cipher and convert the message back into its original plain text.
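
Training then alternates between the eavesdropper and the communicating pair. Continuing the sketch above, the losses below approximate the paper's L1-based objectives; the exact weighting term is taken from the paper but the loop details are assumptions:

```python
opt_ab  = torch.optim.Adam(list(alice.parameters()) + list(bob.parameters()))
opt_eve = torch.optim.Adam(eve.parameters())  # Eve is updated only here

def bits_wrong(a, b):
    # Mean number of mismatched bits; with ±1 encoding, a fully flipped
    # bit contributes an L1 distance of 2, hence the division by 2.
    return (a - b).abs().sum(dim=1).mean() / 2

for step in range(15_000):
    p, k = random_bits(512), random_bits(512)

    # Eve trains to reconstruct the plaintext from the ciphertext alone.
    # detach() stops Eve's update from leaking gradients back into Alice.
    c = alice(torch.cat([p, k], dim=1)).detach()
    eve_loss = bits_wrong(p, eve(c))
    opt_eve.zero_grad(); eve_loss.backward(); opt_eve.step()

    # Alice and Bob train so Bob recovers the plaintext while Eve's error
    # is pushed toward chance level (N/2 = 8 wrong bits out of 16).
    c = alice(torch.cat([p, k], dim=1))
    bob_err = bits_wrong(p, bob(torch.cat([c, k], dim=1)))
    eve_err = bits_wrong(p, eve(c))
    ab_loss = bob_err + ((N / 2 - eve_err) ** 2) / (N / 2) ** 2
    opt_ab.zero_grad(); ab_loss.backward(); opt_ab.step()
```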

Alice and Bob

In the early stages Alice and Bob were poor at sending secure messages to each other, but as they learnt through practice they developed a stronger encryption and decryption strategy.

After 15,000 communications, Bob could successfully decrypt the cipher text messages sent by Alice back into understandable plain text, while Eve could only work out eight of the messages’ 16 bits. Given each bit was either a 0 or a 1, Eve’s 50 percent success rate was essentially like guessing heads or tails in a coin toss.
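
That coin-toss baseline is easy to verify: a purely random guesser also recovers about eight of the 16 bits on average. A quick illustrative check:

```python
import torch

N, trials = 16, 100_000
guesses = torch.randint(0, 2, (trials, N))  # random 0/1 guesses per bit
truth   = torch.randint(0, 2, (trials, N))  # random 0/1 ground-truth bits
bits_right = (guesses == truth).float().sum(dim=1).mean()
print(bits_right)  # ≈ 8.0 bits out of 16, i.e. a 50 percent success rate
```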

However, this process of encryption is rather basic compared with the sophisticated ciphers used in modern human cryptography, so Google’s research is more of a proof of concept than an indication that the future of cyber security and encrypted communications will rely on AIs.

The researchers were also unable to work out the encryption method Alice and Bob eventually came up with, meaning that using such a system in the real world would raise the challenge of guaranteeing the strength and type of encryption being used.

Furthermore, neural networks demand a lot of computational power and often a lengthy process of self-training before they can deliver desirable results, which also makes the application of such a system to real-world cyber security impractical for now.

But the research serves as evidence of how far Google has come with its AI research, having already created a system that can learn to mimic human speech rather than produce the stilted sounds of Apple’s Siri and Microsoft’s Cortana.

And Google is already putting some of its AI technology to work, with the Google Assistant in the search giant’s Pixel smartphones serving as a practical example of AI-powered software in everyday use.