Researchers trick facial-recognition systems
Neural networks powered by recent advances in artificial intelligence and machine learning have become increasingly adept at generating photo-realistic images of human faces entirely from scratch.
The systems typically use a dataset comprising millions of images of real people to “learn,” over time, how to generate original images of their own.
At the Black Hat USA 2020 virtual event last week, researchers from McAfee showed how they were able to use such technologies to trick a facial-recognition system into misclassifying one individual as an entirely different person. As an example, the researchers showed how an individual on a no-fly list could trick an airport facial-recognition system used for passport verification into identifying him as another person.
“The basic goal here was to determine if we could create a fake image, using machine learning models, which looked like one person to the human eye, but simultaneously classified as another person to a facial recognition system,” says Steve Povolny, head of advanced threat research at McAfee.
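The article does not publish McAfee's method, but the class of attack it describes, an input crafted so a model assigns it the wrong identity while a human still sees the original person, can be illustrated with a deliberately tiny sketch. Everything below is an assumption for illustration: a toy two-identity linear classifier stands in for a real face-recognition model, and the perturbation loop is a simple targeted FGSM-style nudge, not the machine-learning image-generation approach the researchers actually used.

```python
import numpy as np

# Toy stand-in for a face-recognition model: illustrative assumption only.
# Row i of W holds the weights for identity i over a 2-dim "embedding".
W = np.array([[1.0, 0.0],
              [0.0, 1.0]])
b = np.zeros(2)

def predict(x):
    """Return the identity the toy model assigns to embedding x."""
    return int(np.argmax(W @ x + b))

# An embedding the model labels as identity 0 (the "true" person).
x_orig = np.array([1.0, 0.2])
assert predict(x_orig) == 0

# Targeted attack sketch: repeatedly nudge the input along the gradient that
# raises the target identity's score relative to the true one, in small steps
# so the overall change stays small (the analogue of "looks like one person
# to the human eye").
target = 1
grad = W[target] - W[0]           # gradient of (logit_target - logit_true)
x_adv = x_orig.copy()
for _ in range(100):
    if predict(x_adv) == target:  # stop as soon as the model is fooled
        break
    x_adv += 0.01 * np.sign(grad)

print(predict(x_adv))                      # now classified as identity 1
print(np.max(np.abs(x_adv - x_orig)))      # perturbation stays bounded
```

The point of the sketch is the objective, not the scale: the adversarial input fools the classifier while remaining close to the original, which is the same trade-off the McAfee researchers pursued with far more sophisticated image-generation models against a real system.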
To read the complete article, visit Dark Reading.