How to navigate deepfake mitigation
Roughly 10 years ago, the world of cybercrime had a celestial alignment. Cybercriminals had already been around for decades, routinely using phishing and malware. But two other technologies created the cybercrime boom.
One was the adoption of anonymity networks, or darknets, such as Tor. The other was the introduction of cryptocurrency, in the form of Bitcoin. These two innovations, darknets and cryptocurrency, allowed cybercriminals to communicate and trade securely, creating a cascading effect in which new cybercrime services emerged, which in turn lowered the bar for launching phishing and malware attacks. The opportunity to earn cash without the risk of detection lured newcomers into cybercrime. Today, cybercrime poses the biggest online threat to businesses.
Misinformation and disinformation campaigns are heading in the same direction. Psyops may be a modern term, but influence campaigns have been around for centuries. Never before, however, has it been so easy to reach a massive number of targets, amplify a message, and, if needed, even distort reality.
How? Social media, bots, and deepfakes.
The process of creating online personas and bots, and of injecting the message you want your targets to see into fringe forums and niche discussion groups, has been automated and perfected. Once the information is seeded, it is only a matter of time until it grows and branches out, reaching mainstream social networks and media and gaining organic amplification.
To make things worse, as Whitney Phillips discusses in “The Oxygen of Amplification,” merely reporting on false claims and fake news, even with the intention of proving them baseless, amplifies the original message and helps distribute it to the masses. And now we have technology that makes it relatively easy to create deepfakes without writing any code. A low bar to use the tech, methods to distribute, and a means of monetization: the cybercrime-cycle pattern reemerges.
While some view the use of deepfake technology as a future threat, the FBI warned businesses in March that they should expect to be hit with different forms of synthetic content.
Unfortunately, these types of attacks have already happened, most notably the deepfake audio heist that netted the threat actors $35 million. Voice synthesis, sampling a person’s voice and using it to commit such a crime, is a stark warning for authentication systems that rely on voice recognition, and perhaps an early warning for face-recognition solutions as well.
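What does a defense look like in practice? One common recommendation is to stop treating a voice match alone as proof of identity. The sketch below is a minimal Python illustration of that idea, not any real vendor API: the embed_voice and spoof_probability models are hypothetical placeholders, and access is granted only when a high speaker-similarity score coincides with a low synthetic-speech probability.

```python
import hashlib

import numpy as np


def embed_voice(audio: np.ndarray) -> np.ndarray:
    """Placeholder speaker-embedding model.

    A real system would use a trained network; here we derive a
    deterministic pseudo-embedding from the audio bytes so the
    sketch runs end to end.
    """
    seed = int.from_bytes(hashlib.sha256(audio.tobytes()).digest()[:4], "big")
    return np.random.default_rng(seed).standard_normal(192)


def spoof_probability(audio: np.ndarray) -> float:
    """Placeholder anti-spoofing model: probability the audio is synthetic."""
    return 0.05  # a trained detector would score the actual signal


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def authenticate(sample: np.ndarray, enrolled_embedding: np.ndarray,
                 sim_threshold: float = 0.8,
                 spoof_threshold: float = 0.1) -> bool:
    """Require BOTH a close speaker match AND a low synthetic-speech score.

    The similarity test alone is exactly what a cloned voice can defeat;
    the anti-spoofing gate is the added layer.
    """
    similarity = cosine_similarity(embed_voice(sample), enrolled_embedding)
    likely_synthetic = spoof_probability(sample) > spoof_threshold
    return similarity >= sim_threshold and not likely_synthetic


# Usage: enroll once, then verify incoming audio against the stored embedding.
enrollment_audio = np.zeros(16000, dtype=np.float32)  # stand-in for recorded speech
enrolled = embed_voice(enrollment_audio)
print(authenticate(enrollment_audio, enrolled))  # True: same voice, low spoof score
```

Even this layered check is only one control. For high-value actions such as the wire transfers in the heist above, out-of-band confirmation through a separate channel remains the sturdier safeguard.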
To read the complete article, visit Dark Reading.