News

Google Develops Neural Networks that Can Communicate Secretly with Encrypted Messages

November 16, 2016 by Chantelle Dubois

Scientists at Google have successfully trained two neural networks, Alice and Bob, to communicate secretly using an encryption scheme they developed on their own, in order to keep a third neural network, Eve, from listening in.

In a paper titled "Learning to Protect Communications with Adversarial Neural Cryptography," published this past October by the Google Brain research group, researchers Martin Abadi and David Andersen describe how they implemented a generative adversarial network (GAN) to successfully convert plaintext messages between Alice and Bob into encrypted messages that Eve could not decipher.

 

Generative Adversarial Network (GAN)

A GAN is a machine learning framework, first introduced by Ian Goodfellow in 2014, in which two models are trained against one another. The goal is for one model to maximize the chance that an adversary model will make a mistake when interpreting data.

In a standard GAN, the adversary must guess whether a given data sample was produced by the other model or drawn from the training data. The researchers in this study, however, did not make it Eve's goal to determine whether the data was part of a training set. Instead, Eve's goal was only to reconstruct the message being sent with as much accuracy as possible, while Alice and Bob's goal was to keep their messages secret by maximizing the chance that Eve makes a mistake.
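
The paper casts these competing goals as loss functions. The sketch below is a rough, illustrative rendering of that idea in plain Python/NumPy, assuming N-bit messages encoded as -1/1 values; the function names and exact constants here are illustrative assumptions, not the authors' code.

```python
import numpy as np

N = 16  # illustrative message length in bits

def bits_wrong(true_bits, guess):
    """Approximate number of incorrectly reconstructed bits per message.
    With bits encoded as -1/1, |true - guess| / 2 contributes ~1 per wrong bit."""
    return np.mean(np.sum(np.abs(true_bits - guess), axis=1) / 2.0)

def eve_loss(plaintext, eve_guess):
    # Eve's only objective: reconstruct the plaintext as accurately as possible.
    return bits_wrong(plaintext, eve_guess)

def alice_bob_loss(plaintext, bob_guess, eve_guess):
    # Bob must recover the message, while Eve should do no better than
    # random guessing (about N/2 bits wrong).
    bob_error = bits_wrong(plaintext, bob_guess)
    eve_error = bits_wrong(plaintext, eve_guess)
    eve_penalty = ((N / 2 - eve_error) ** 2) / (N / 2) ** 2
    return bob_error + eve_penalty
```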

Neural networks Alice and Bob use symmetric-key cryptography to communicate with one another: they share a secret key, and Alice uses it, together with the plaintext message, to produce a ciphertext that Bob decrypts with the same key. Eve, on the other hand, does not have the key and only sees the ciphertext, from which she attempts to recreate the message being sent.
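
In code, the three parties can be thought of as three small networks with different inputs. The Keras sketch below is a simplified stand-in for the paper's "mix and transform" architecture (a fully connected layer followed by convolutions); the layer sizes and names here are illustrative assumptions.

```python
import tensorflow as tf

N = 16  # illustrative length of plaintext and key, in bits

def make_party(input_bits, name):
    # A small fully connected network standing in for the paper's
    # mix-and-transform architecture; outputs are in [-1, 1] like the inputs.
    return tf.keras.Sequential(
        [
            tf.keras.Input(shape=(input_bits,)),
            tf.keras.layers.Dense(2 * input_bits, activation="relu"),
            tf.keras.layers.Dense(N, activation="tanh"),
        ],
        name=name,
    )

alice = make_party(2 * N, "alice")  # input: plaintext + shared key
bob   = make_party(2 * N, "bob")    # input: ciphertext + shared key
eve   = make_party(N, "eve")        # input: ciphertext only

def communicate(plaintext, key):
    ciphertext = alice(tf.concat([plaintext, key], axis=1))
    bob_guess  = bob(tf.concat([ciphertext, key], axis=1))
    eve_guess  = eve(ciphertext)
    return ciphertext, bob_guess, eve_guess
```

Note that encryption and decryption are not specified anywhere in this setup; any such behavior has to emerge from how Alice's and Bob's weights evolve during training.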

 

Image courtesy of Google Brain

It's worth noting that the training objective for Alice was secrecy, not integrity, and no particular way of achieving secrecy was enforced: the networks were permitted to generate whichever method proved sufficient. Eve acted as a passive listener, unable to modify or otherwise interfere with the communication. No cryptographic algorithms were taught to the neural networks; Alice and Bob had to learn how to encrypt and decrypt entirely on their own.

 

Encryption Experiments

In the experiments, Eve was initially able to accurately reconstruct the messages sent between Alice and Bob. With each pass, Alice and Bob became better at encrypting their messages until, eventually, Eve could not decipher them at all. If Alice and Bob were not able to communicate secretly within 150,000 training steps, the experiment was deemed a failure.

In each experiment, Eve was then retrained five times, for up to 250,000 steps each time, and the best result she produced was recorded.

If Bob and Alice were able to communicate secretly within 150,000 steps, and if Eve was still not able to decipher their messages within 250,000 steps, the experiment would be considered successful.
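
Put together, the protocol reads like the schematic training loop below. This is not the authors' released code; the step counts come from the article, while the helper callables (train_alice_bob_step, train_eve_step, and so on) are hypothetical placeholders for the actual optimization and evaluation logic.

```python
MAX_AB_STEPS = 150_000       # Alice/Bob must communicate secretly within this budget
MAX_EVE_STEPS = 250_000      # each retrained Eve gets up to this many steps
EVE_RETRAIN_RUNS = 5         # Eve is retrained this many times per experiment

def train_until_secret(train_alice_bob_step, train_eve_step, is_secret):
    """Alternate adversarial updates until Bob decrypts reliably while Eve is
    near random guessing (is_secret), or the step budget runs out."""
    for step in range(MAX_AB_STEPS):
        train_alice_bob_step()
        train_eve_step()
        if is_secret():
            return True, step
    return False, MAX_AB_STEPS  # experiment deemed a failure

def best_retrained_eve(reset_eve, train_eve_step, eve_bits_wrong):
    """With Alice and Bob frozen, retrain a new Eve several times and record
    her best (lowest) reconstruction error; the experiment succeeds only if
    even this best Eve cannot decipher the messages."""
    best = float("inf")
    for _ in range(EVE_RETRAIN_RUNS):
        reset_eve()
        for _ in range(MAX_EVE_STEPS):
            train_eve_step()
        best = min(best, eve_bits_wrong())
    return best
```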

 

Image courtesy of Google Brain

In some cases, Alice and Bob were initially unable to communicate with each other at all. Over successive iterations, they eventually developed an encryption method that allowed them to communicate successfully.

The experiments were conducted using the machine learning framework TensorFlow on a single workstation GPU. The publication states that the computation platform has no impact on the outcome of the experiments and that the source code is planned for release.

Applications of this kind of learned encryption include the automated, selective encryption of data, and potentially steganography (hiding messages within other data). The authors of the paper state that it is unlikely the neural networks could be used for cryptanalysis; we do not need to worry about Eve deciphering our messages anytime soon.

Current methods of encryption are far more sophisticated, and it may be some time before methods developed by neural networks can be used securely. Since it is not known how the neural networks arrive at their solutions, verifying their security is not currently possible. As machine learning expands, however, that may eventually change.

Alice, Bob, and Eve – A sordid love triangle?

Why Alice, Bob, and Eve?

In the fields of cryptography, game theory, and physics, these names are commonly used as placeholders when describing communication protocols, standing in for phrasing like "system A sends a message to system B." Eve, of course, stands for "eavesdropper." Each name implies the system's role in the protocol being described.

Interestingly, there are many other names commonly used in these fields to denote a party's role. In Applied Cryptography, Bruce Schneier describes various cryptographic archetypes. Some examples are:

  • Chuck, a third participant with malicious intent
  • Grace, a government representative, usually trying to force Alice or Bob to build backdoors into their protocols
  • Mallory, an active, malicious attacker (unlike Eve, who only listens passively)
  • Wendy, a whistleblower who has access to privileged information

 

Image courtesy of XKCD