AI fake-face generators can be rewound to reveal the real faces they trained on

Still, that approach presupposes that you can get hold of the training data, Kautz says. He and his colleagues at Nvidia have come up with a different way of exposing private data, including images of faces and other objects, medical data, and more, that does not require access to the training data at all.

Instead, they developed an algorithm that can recreate the data a trained model was exposed to by reversing the steps the model goes through when processing that data. Take a trained image recognition network: to identify what is in an image, the network passes it through a series of layers of artificial neurons, with each layer extracting different levels of information, from edges to shapes to more recognizable features.
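
To make that layered processing concrete, here is a minimal sketch of peeking at what intermediate layers compute, using PyTorch and a torchvision ResNet-18. The specific model, layer names, and hook setup are assumptions for illustration; the article does not say which networks Kautz's team used.

```python
# Sketch: capture what early and deep layers of an image classifier "see".
# ResNet-18 and the chosen layers are assumptions, not the paper's setup.
import torch
from torchvision import models

model = models.resnet18(weights=None).eval()  # untrained weights, for brevity

activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Hook an early layer (edge-like features) and a deeper one (more abstract features).
model.layer1.register_forward_hook(save_activation("layer1"))
model.layer3.register_forward_hook(save_activation("layer3"))

image = torch.rand(1, 3, 224, 224)  # stand-in for a real input image
with torch.no_grad():
    logits = model(image)

print(activations["layer1"].shape)  # e.g. torch.Size([1, 64, 56, 56])
print(activations["layer3"].shape)  # e.g. torch.Size([1, 256, 14, 14])
```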

Kautz’s team found that they could interrupt a model in the middle of these steps and reverse its direction, recreating the input image from the model’s internal data. They tested the technique on a variety of common image recognition models and GANs. In one test, they showed that they could accurately recover images from ImageNet, one of the best-known image recognition datasets.
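
The article does not describe the team's exact algorithm, but the general idea of recovering an input from a model's internal data can be illustrated with classic feature inversion: start from noise and optimize an image until its intermediate activations match the intercepted ones. The model, split point, and optimization settings below are assumptions, not Kautz's method.

```python
# Sketch: invert intermediate activations back into an image by gradient descent.
# This illustrates the general idea only, not the paper's actual algorithm.
import torch
from torchvision import models

model = models.resnet18(weights=None).eval()

def features_up_to_layer2(x):
    # Forward pass through only the early part of ResNet-18.
    x = model.conv1(x)
    x = model.bn1(x)
    x = model.relu(x)
    x = model.maxpool(x)
    x = model.layer1(x)
    return model.layer2(x)

original = torch.rand(1, 3, 224, 224)               # the "private" input image
target = features_up_to_layer2(original).detach()   # activations intercepted mid-model

# Start from noise and optimize an image whose activations match the target.
guess = torch.rand(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([guess], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(features_up_to_layer2(guess), target)
    loss.backward()
    optimizer.step()
    guess.data.clamp_(0, 1)  # keep pixel values in a valid range

# `guess` now approximates the original image, recovered from internal data alone.
```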

Images from ImageNet (top) along with renderings of the images created by rewinding a model trained on ImageNet (bottom)

Like Webster’s work, the recreated images closely resemble the real ones. “We were surprised by the final quality,” Kautz says.

The researchers argue that this kind of attack is not merely hypothetical. Smartphones and other small devices are starting to use more AI. Because of battery and memory constraints, models are sometimes only half-run on the device itself, with the partially processed data sent to the cloud for the final computing crunch, an approach known as split computing. Most researchers assume that split computing won’t reveal any private data from a person’s phone because only the model is shared, Kautz says. But his attack shows that this isn’t the case.
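
A rough sketch of split computing, under the assumption of a ResNet-18 split after its second block: the device runs the first half locally and ships the intermediate tensor to the cloud, which finishes the job. The model and split point are illustrative choices, not drawn from the article.

```python
# Sketch: split computing with an assumed ResNet-18 and split point.
import torch
from torchvision import models

model = models.resnet18(weights=None).eval()

def run_on_device(image):
    # First half of the network, executed on the phone.
    x = model.conv1(image)
    x = model.bn1(x)
    x = model.relu(x)
    x = model.maxpool(x)
    x = model.layer1(x)
    return model.layer2(x)  # this intermediate tensor is what crosses the network

def run_in_cloud(intermediate):
    # Second half of the network, executed server-side.
    x = model.layer3(intermediate)
    x = model.layer4(x)
    x = model.avgpool(x)
    return model.fc(torch.flatten(x, 1))

image = torch.rand(1, 3, 224, 224)
with torch.no_grad():
    logits = run_in_cloud(run_on_device(image))
# The attack described above suggests that intermediate tensor alone can leak the image.
```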

Kautz and his colleagues are now working on ways to prevent models from leaking private data. “We wanted to understand the risks so we can minimize vulnerabilities,” he says.

Although they use very different techniques, he thinks that his work and Webster’s complement each other well. Webster’s team showed that private data could be found in the output of a model; Kautz’s team showed that private data could be revealed by going in reverse and recreating the input. “Exploring both directions is important to gain a better understanding of how to prevent attacks,” Kautz says.
