Tricking Surveillance Systems to Render a Person Invisible

Researchers at Katholieke Universiteit Leuven in Belgium have developed “new adversarial techniques” that can make objects “invisible” to A.I. object detection systems.

“These same techniques can also trick systems into thinking they see another object or can change the location of objects,” the researchers said. Their findings were released earlier this month in a paper titled “Fooling automated surveillance cameras: adversarial patches to attack person detection.”

Adversarial images that can trick machine learning systems have been the subject of increased interest in the past few years. There are many examples of objects designed to fool computer vision systems, like stickers that can render stop signs and license plates unrecognizable. The researchers claim theirs is the first work to mask an entire class of objects, such as people, from object detection systems.
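At a high level, adversarial patches of this kind are produced by gradient-based optimization: the patch’s pixels are repeatedly adjusted so that a detector’s confidence that a person is present drops toward zero. The sketch below illustrates that loop in PyTorch with a toy stand-in detector; the network, patch size, and placement are illustrative assumptions, not the researchers’ actual setup, which attacks a real person detector and includes additional refinements described in the paper.

```python
# Minimal sketch of adversarial-patch optimization.
# ToyDetector is a stand-in for a real person detector; only the
# optimization loop reflects the general technique.
import torch
import torch.nn as nn

class ToyDetector(nn.Module):
    """Stand-in network producing a single 'person objectness' score per image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1),
        )

    def forward(self, x):
        return torch.sigmoid(self.net(x))

detector = ToyDetector().eval()
image = torch.rand(1, 3, 128, 128)                    # scene assumed to contain a person
patch = torch.rand(1, 3, 32, 32, requires_grad=True)  # the adversarial patch being optimized
optimizer = torch.optim.Adam([patch], lr=0.01)

for step in range(200):
    patched = image.clone()
    patched[:, :, 48:80, 48:80] = patch              # paste the patch over the person
    loss = detector(patched).mean()                  # minimize the detector's confidence
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        patch.clamp_(0, 1)                           # keep pixel values in a valid range

print(f"final objectness score: {detector(patched).item():.3f}")
```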

Not everyone welcomes the research. It comes as no surprise that government agencies with substantial investments in mass surveillance technologies are not amused. Homeland and national security officials have said that these “adversarial techniques,” in the wrong hands, could pose a serious risk to the vast system of government surveillance deployed against ordinary people.

Photo: “The lights” by Ana Patícia Almeida is licensed under CC BY 2.0