Researchers at the University of Toronto, led by Professor Parham Aarabi and graduate student Avishek Bose, have developed an algorithm to disrupt facial recognition technology.
Aarabi and Bose use ‘adversarial training,’ which pits two algorithms against each other. They created two neural networks — one that identifies faces and a second that tries to disrupt the first. The two learn from each other, and the net effect is that both improve. The algorithms create a filter designed to subtly alter images at the pixel level to throw off digital facial recognition technologies, so that what looks like a familiar face to the human eye cannot be deciphered by an algorithm.
“The disruptive AI can ‘attack’ what the neural net for the face detection is looking for,” says Bose. “If the detection AI is looking for the corner of the eyes, for example, it adjusts the corner of the eyes so they’re less noticeable. It creates very subtle disturbances in the photo, but to the detector they’re significant enough to fool the system.”
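The idea of nudging pixels against what a detector is looking for can be illustrated with a toy sketch. The code below is not the authors' system: it stands in a simple logistic-regression "detector" trained on synthetic data for the neural network, and applies a small gradient-sign perturbation (an FGSM-style step, an assumption for illustration) to lower the detector's face score while changing each pixel only slightly.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: "faces" and "non-faces" as 64-dimensional vectors
# drawn from two overlapping Gaussians.
d = 64
faces = rng.normal(loc=0.5, scale=1.0, size=(200, d))
others = rng.normal(loc=-0.5, scale=1.0, size=(200, d))
X = np.vstack([faces, others])
y = np.concatenate([np.ones(200), np.zeros(200)])

# Train a logistic-regression "detector" by gradient descent.
w = np.zeros(d)
b = 0.0
for _ in range(300):
    p = sigmoid(X @ w + b)
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

# Adversarial step: move each face's pixels against the detector's
# gradient. For this linear detector the score gradient points along w,
# so stepping along -sign(w) lowers the face score while bounding the
# change in any single pixel by eps.
eps = 0.3
faces_adv = faces - eps * np.sign(w)

clean_scores = sigmoid(faces @ w + b)
adv_scores = sigmoid(faces_adv @ w + b)
print(f"mean detector score, clean faces:     {clean_scores.mean():.3f}")
print(f"mean detector score, perturbed faces: {adv_scores.mean():.3f}")
print(f"max per-pixel change: {np.abs(faces_adv - faces).max():.3f}")
```

In the actual system the perturbation is produced by a trained neural network rather than a single gradient step, and the detector itself keeps adapting, but the mechanism is the same: small, bounded pixel changes chosen specifically to suppress the features the detector relies on.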
In a study of more than 600 faces spanning a range of ethnicities, lighting conditions, and environments, the system reduced the proportion of detectable faces from 100 percent to 0.5 percent. The system is not expected to protect people from real-time facial recognition that captures their image in public, but it could give some control over how much data can be gleaned from photographs posted online.
“Ten years ago these algorithms would have to be human-defined, but now neural nets learn by themselves — you don’t need to supply them anything except training data,” Aarabi said. “In the end, they can do some really amazing things. It’s a fascinating time in the field, there’s enormous potential.”