
Deepfakes emerge as a top security threat ahead of the 2024 US election

The United States is heading into a crucial election year, with a high-stakes presidential election that could determine the republic’s fate for decades. In addition, all 435 seats in the United States House of Representatives, 34 Senate seats, and 13 governorships are up for grabs, along with thousands of local government elections.

While official sources say the 2020 elections in the US occurred without significant voting malfeasance, despite “unfounded claims and opportunities for misinformation,” the prospects for what might happen next year are cloudy and uncertain. In November, content delivery and cybersecurity company Cloudflare issued a report identifying what it saw as the cyberattack trends for various groups in the elections space that could threaten trusted, secure, and reliable elections in the US.

While Cloudflare reported that from November 2022 to August 2023, it mitigated 34.7 million threats to US elections groups it surveyed, mostly DDoS attacks and managed-rules mitigations, cybersecurity experts say that the dissemination of disinformation, aided by advances in artificial intelligence, poses the biggest threat to elections as the US moves into a momentous election year.

Eroding trust will be the primary goal of election disruptors

“One of the key pillars of democracy is trust, so ensuring that the Internet is secure, reliable, and accessible for the public and those working in the election space is critical to any free and fair election,” Grant Bourzikas, CSO at Cloudflare, tells CSO.

“When it comes to top threats, we will continue to see governments and nation-state-backed actors trying to undermine and control the flow of information and dissemination of false or misleading information that casts doubt in public opinion or perception through internet shutdowns, restricted social media sites during elections, and imposed blocking of websites that report on results.”

Unlike the breaches and dissemination of hacked information by Russian threat actors that roiled the US presidential election in 2016, nation-states and affiliated groups are more likely to turn to more diffuse methods next year, relying not only on the tried-and-true phishing tactics deployed in previous elections but also on more widespread use of AI-aided tools such as deepfakes. “Threats such as deepfakes pose a grave risk,” Bourzikas said.

“While they have been around for years, today’s versions are more realistic than ever, where even trained eyes and ears may fail to identify them. Both harnessing the power of artificial intelligence and defending against it hinges on the ability to connect the conceptual to the tangible. If the security industry fails to demystify AI and its potential malicious use cases, 2024 will be a field day for threat actors targeting the election space.”

Slovakia’s general election in September might serve as an object lesson in how deepfake technology can mar elections. In the run-up to that country’s highly contested parliamentary elections, the far-right Republika party circulated deepfake videos with altered voices of Progressive Slovakia leader Michal Simecka announcing plans to raise the price of beer and, more seriously, discussing how his party planned to rig the election. Although it’s uncertain how much sway these deepfakes held over the ultimate outcome, which saw the pro-Russian, Republika-aligned Smer party finish first, the election demonstrated the power of deepfakes.

Politically oriented deepfakes have already appeared on the US political scene. Earlier this year, an altered TV interview with Democratic US Senator Elizabeth Warren was circulated on social media outlets. In September, Google announced it would require that political ads using artificial intelligence be accompanied by a prominent disclosure if imagery or sounds have been synthetically altered, prompting lawmakers to pressure Meta and X, formerly Twitter, to follow suit.

Deepfakes are ‘pretty scary stuff’

Fresh from attending AWS’s 2023 re:Invent conference, Tony Pietrocola, president of AgileBlue, says the conference was heavily weighted toward artificial intelligence, including its role in election interference.
“When you think about what AI can do, you saw a lot more about not just misinformation, but also more fraud, deception, and deepfakes,” he tells CSO.

“It’s pretty scary stuff because it looks like the person, whether it’s a congressman, a senator, a presidential candidate, whoever it might be, and they’re saying something,” he says. “Here’s the crazy part: somebody sees it, and it gets a bazillion hits. That’s what people see and remember; they don’t ever go back to see that, oh, this was a fake.”

Pietrocola thinks that the combination of massive amounts of data stolen in hacks and breaches and improved AI technology can make deepfakes a “perfect storm” of misinformation as we head into next year’s elections. “So, it is the perfect storm, but it’s not just the AI that makes it look, sound, and act real. It’s the social engineering data that [threat actors have] either stolen, or we’ve voluntarily given, that they’re using to create a digital profile that is, to me, the double whammy. Okay, they know everything about us, and now it looks and acts like us.”

Adding to the unsettling scenario is that because of AI technology’s open and increasingly widespread availability, deepfakes might not be limited to traditional nation-state adversaries such as Russia, China, and Iran. “If we thought it was bad in 2020 and 2016, which, for the most part, involved extremely sophisticated threat actors… people from all over the world can now use these tools,” Jared Smith, Distinguished Engineer, R&D Strategy, SecurityScorecard, tells CSO. “In a sense, we’re moving from one industrial age to another where many more people now have tools to do things that they couldn’t do before.”

Solutions to the problems are hazy

There are no easy solutions to battling the threats that might emerge as the 2024 campaigns heat up. At the top of the list: political organizations should already have the fundamental cybersecurity hygiene practices in place to defend themselves from old-school cyber malfeasance.

“As technology plays an ever-increasing role in the electoral process, many of those in the election space are unaware of the online risks they may face, the resources available to keep their online presence secure, and strategies to mitigate cyber threats,” Bourzikas says. “First and foremost, all election and political organizations must adopt an ‘assume breach’ mindset. This means that ongoing small and large-scale attacks on related groups should be an expectation, not a surprise.”

Part of this essential practice includes hiring the necessary staff. “Number one, before we talk about social engineering, anybody running a campaign or doing a campaign should have an IT director or a cybersecurity person responsible for that campaign,” SecurityScorecard’s Smith advises.

When it comes to spotting deepfakes or other AI-generated misinformation, the challenge is more complex. Automated detection tools for AI-generated text have not proven reliable, and existing tools for spotting voice cloning used to create AI voices have fared poorly.

AgileBlue’s Pietrocola is not optimistic that the automated tools for determining fakes can outpace malicious actors’ techniques because it’s so easy to replicate the voices and images of people in the public eye. “The discovery tools become harder because [the fakes] look like them and sound just like them,” he says.

It’s impossible at this stage to have a crystal ball, but it’s virtually certain that a wide array of new threats will emerge before election day in the US. “The momentum of threats to political organizations will steadily increase, but as with any large-scale world event, we will see more impactful and strategic attacks take place close to key moments in time,” Bourzikas says.
