Deepfake Democracy: AI technology complicates election security
Recent events, including an artificial intelligence (AI)-generated deepfake robocall that impersonated President Biden and urged New Hampshire voters to abstain from the primary, serve as a stark reminder that malicious actors increasingly view modern generative AI (GenAI) platforms as a potent weapon for targeting US elections.
Platforms like ChatGPT, Google’s Gemini (formerly Bard), or any number of purpose-built Dark Web large language models (LLMs) could play a role in disrupting the democratic process, with attacks encompassing mass influence campaigns, automated trolling, and the proliferation of deepfake content.
In fact, FBI Director Christopher Wray recently voiced concerns about ongoing information warfare using deepfakes that could sow disinformation during the upcoming presidential campaign, as state-backed actors attempt to sway geopolitical balances.
GenAI could also automate the creation of “coordinated inauthentic behavior” networks that attempt to develop audiences for their disinformation campaigns through fake news outlets, convincing social media profiles, and other avenues — with the goal of sowing discord and undermining public trust in the electoral process.
Election Influence: Substantial Risks & Nightmare Scenarios
From the perspective of Padraic O’Reilly, chief innovation officer for CyberSaint, the risk is “substantial” because the technology is evolving so quickly.
“It promises to be interesting and perhaps a bit alarming, too, as we see new variants of disinformation leveraging deepfake technology,” he says.
Specifically, O’Reilly says, the “nightmare scenario” is that microtargeting with AI-generated content will proliferate on social media platforms. That’s a familiar tactic from the Cambridge Analytica scandal, in which the company amassed psychological profile data on 230 million US voters in order to serve individuals highly tailored messaging via Facebook, in an attempt to influence their beliefs — and their votes. But GenAI could automate that process at scale, while producing highly convincing content with few, if any, of the “bot” characteristics that might put people off.
“Stolen targeting data [personality snapshots of who a user is and their interests] merged with AI-generated content is a real risk,” he explains. “The Russian disinformation campaigns of 2013–2017 are suggestive of what else could and will occur, and we know of deepfakes generated by US citizens [like the one] featuring Biden and Elizabeth Warren.”
The mix of social media and readily available deepfake tech could be a doomsday weapon for polarizing US citizens in an already deeply divided country, he adds.
“Democracy is predicated upon certain shared traditions and information, and the danger here is increased balkanization among citizens, leading to what the Stanford researcher Renée DiResta called ‘bespoke realities,’” O’Reilly says. In other words, people come to believe in “alternative facts.”
The platforms that threat actors use to sow division will likely be of little help in policing it: he notes, for instance, that the social media platform X, formerly known as Twitter, has gutted its quality assurance (QA) checks on content.
To read the complete article, visit Dark Reading.