It seems that although the internet is increasingly drowning in fake images, we can at least take some comfort in humanity’s ability to smell BS when it matters. A slew of recent research suggests that AI-generated misinformation had no material impact on this year’s elections around the globe, largely because the technology is not very good yet.
There has been concern for years that increasingly realistic synthetic content could manipulate audiences in detrimental ways. The rise of generative AI revived those fears, since the technology makes it much easier for anyone to produce fake visual and audio media that appear real. Back in January, a political consultant used AI to spoof President Biden’s voice in a robocall telling New Hampshire voters to stay home during the state’s Democratic primary.
Tools like ElevenLabs make it possible to submit a brief soundbite of someone speaking and then duplicate their voice to say whatever the user wants. Though many commercial AI tools include guardrails to prevent this kind of misuse, open-source models without such restrictions are available.
Despite these advances, a new story in the Financial Times looked back at the year and found that very little synthetic political content went viral anywhere in the world.
It cited a report from the Alan Turing Institute, which found that just 27 pieces of AI-generated content went viral during the summer’s European elections. The report concluded that there was no evidence the elections were affected by AI disinformation because “most exposure was concentrated among a minority of users with political beliefs already aligned to the ideological narratives embedded within such content.” In other words, the few who saw the content (before it was presumably flagged) were already primed to believe it, and it reinforced their existing beliefs about a candidate even when they knew the content itself was AI-generated. The report cited, as one example, AI-generated imagery of Kamala Harris addressing a rally in front of Soviet flags.
In the U.S., the News Literacy Project identified more than 1,000 examples of misinformation about the presidential election, but only 6% were made using AI. On X, the terms “deepfake” and “AI-generated” appeared in Community Notes mostly around the release of new image-generation models, not around elections.
Interestingly, users on social media seemed more likely to misidentify real images as AI-generated than the reverse, but in general they exhibited a healthy dose of skepticism. And fake media can still be debunked through official communications channels or by other means, like a Google reverse image search.
If the findings are accurate, they make a lot of sense. AI imagery is all over the place these days, but images generated using artificial intelligence still have an off-putting quality, exhibiting tell-tale signs of being fake. An arm might be unusually long, or a face might not reflect properly in a mirrored surface; there are many small cues that give away that an image is synthetic. Photoshop can be used to create much more convincing forgeries, but doing so requires skill.
AI proponents should not necessarily cheer this news. It means that generated imagery still has a long way to go. Anyone who has checked out OpenAI’s Sora model knows the video it produces is just not very good; it looks almost like something rendered by a video game graphics engine (there is speculation it was trained on video game footage), one that clearly does not understand basic physics.
That all being said, there are still reasons for concern. The Alan Turing Institute’s report did, after all, conclude that realistic deepfakes containing misinformation can reinforce beliefs even when the audience knows the media is not real; that confusion over whether a piece of media is real erodes trust in online sources; and that AI imagery has already been used to target female politicians with pornographic deepfakes, which can cause psychological harm and damage professional reputations while reinforcing sexist beliefs.
The technology will surely continue to improve, so it is something to keep an eye on.