Meet the UC Berkeley Professor Analyzing Election Deepfakes: Combating Misinformation in Today's Digital Age

Few technologies in recent history have posed a greater threat to society than deepfakes.

This manipulative, AI-generated content is already being weaponized in politics and is expected to feature heavily in the upcoming U.S. presidential election, as well as Senate and House races. As regulators struggle to rein in the technology, strikingly realistic deepfakes are being used to smear candidates, sway public opinion, and manipulate voter turnout. Meanwhile, some candidates have tried to leverage generative AI to enhance their own campaigns, often with disastrous results.

Professor Hany Farid of the University of California, Berkeley’s School of Information has taken action, launching a project to monitor deepfakes throughout the 2024 presidential campaign.

“My hope is that by casting a light on this content, we raise awareness among the media and the public — and signal to those creating this content that we are watching, and we will find you,” Farid stated.

In one notable example, Farid's site showcases images of President Joe Biden in military fatigues at a command center. The site highlights inconsistencies, such as misplaced computer mice and a warped ceiling design, which reveal the images as manipulated.

Farid's research also addresses the infamous deepfake robocalls impersonating Biden before the New Hampshire primary. These calls urged citizens not to vote, stating, “Voting this Tuesday only enables the Republicans in their quest to elect former President Donald Trump again.” The origin of the calls remains unclear, but the audio quality is low and the voice sounds unnatural.

In another post, a deepfake of Ron DeSantis falsely claims, “I never should have challenged President Trump, the greatest president of my lifetime.” The site further critiques a montage of Trump with former Chief Medical Advisor Anthony Fauci, noting glaring inconsistencies like a nonsensical White House logo and distorted elements on the American flag.

Farid observes that even slight shifts in voter sentiment can swing an election, particularly in key battleground states.

The reach of deepfakes is expanding, with increasingly sophisticated examples depicting Trump in arrest scenarios, Ukrainian President Zelensky urging soldiers to surrender, and Vice President Kamala Harris appearing inebriated at a public event. These manipulative tools have previously influenced elections in countries like Turkey and Bangladesh, while some politicians, such as Rep. Dean Phillips of Minnesota and Miami Mayor Francis Suarez, have used deepfakes to engage voters.

“I’ve observed a rise in both the sophistication and misuse of deepfakes,” Farid notes. “This year feels like a tipping point, with billions poised to vote globally and technology advancing rapidly.”

The danger extends beyond voter manipulation: the mere existence of deepfakes lets public figures dismiss authentic recordings of unlawful or inappropriate behavior as fake. This phenomenon, known as the “Liar’s Dividend,” has already been exploited by figures like Trump and Elon Musk.

“When anything can be faked, nothing has to be real,” Farid emphasizes.

Research indicates that humans can identify deepfake videos only slightly better than chance (just over 50% of the time) and fake audio about 73% of the time. As the technology becomes ever more realistic, doctored content shared on social media can spread misinformation rapidly.

“A year ago, deepfakes were mostly image-based and fairly obvious,” Farid recalls. “Today, we see sophisticated audio and video deepfakes that can easily mislead viewers.”

While it’s challenging to identify consistent warning signs, Farid advises against relying on social media for news. “If you must use social media, slow down, think critically before sharing, and recognize your biases. Sharing false information only exacerbates the problem.”

For those seeking practical tips, Northwestern University’s Detect Fakes project offers a test to assess spotting skills, and the MIT Media Lab provides guidance, including:

- Focus on faces, as high-quality manipulations almost always involve facial alterations.

- Watch for skin inconsistencies, like overly smooth or wrinkly cheeks and foreheads, which may appear disjointed from hair and eye textures.

- Observe shadows and lighting effects that don’t align with physics.

- Check for exaggerated glare on glasses that doesn’t change with movement.

- Analyze facial hair for unnatural additions or removals.

- Monitor blinking patterns and lip movement, as many deepfakes rely on lip-syncing (a simple blink-rate screening sketch follows this list).
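
To make the blink-pattern tip concrete, here is a minimal Python sketch using the Eye Aspect Ratio (EAR) heuristic (Soukupová and Čech, 2016), a common blink-detection technique. It is not Farid's method, and the landmark layout, threshold, and frame counts below are illustrative assumptions, not tuned values; it presumes you already have six (x, y) eye landmarks per frame from a face-landmark detector.

```python
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """Compute EAR for one eye from 6 (x, y) landmarks.

    Assumed layout: eye[0], eye[3] are the corners; eye[1], eye[2]
    the upper lid; eye[5], eye[4] the lower lid. EAR falls toward 0
    as the eye closes.
    """
    vertical = (np.linalg.norm(eye[1] - eye[5]) +
                np.linalg.norm(eye[2] - eye[4]))
    horizontal = 2.0 * np.linalg.norm(eye[0] - eye[3])
    return vertical / horizontal

def count_blinks(ear_series, closed_thresh=0.21, min_frames=2):
    """Count blinks as runs of >= min_frames consecutive low-EAR frames."""
    blinks, run = 0, 0
    for ear in ear_series:
        if ear < closed_thresh:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:  # flush a blink that ends the clip
        blinks += 1
    return blinks

if __name__ == "__main__":
    # A synthetic open eye: wide horizontally, moderately open vertically.
    open_eye = np.array([[0, 0], [1, 1], [2, 1], [3, 0], [2, -1], [1, -1]], float)
    print("EAR, open eye:", round(eye_aspect_ratio(open_eye), 2))  # ~0.67

    # Simulated 1-minute clip at 30 fps with only two blinks -- far below
    # the typical human rate of roughly 15-20 blinks per minute.
    rng = np.random.default_rng(0)
    ear_per_frame = 0.30 + 0.01 * rng.standard_normal(1800)
    ear_per_frame[300:305] = 0.10
    ear_per_frame[900:904] = 0.12
    print("Blinks detected:", count_blinks(ear_per_frame))  # 2
```

An unusually low or suspiciously regular blink rate is not proof of a deepfake, only a cue to examine the clip more closely.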

If you suspect a deepfake related to the U.S. elections, consider reaching out to Farid for further investigation.
