Large language models. Artificial intelligence. Five years ago, those words would have sounded like the prequel to The Matrix franchise, with its rain of green code falling down the screen. Today, when people hear “politics,” the image is no longer a debate stage but a feed, one where truth doesn’t talk, it scrolls in a ten-second clip. Or sometimes, it hops, like the viral video of bunnies bouncing on a trampoline that left half the internet in disbelief that it was not real.
In 2024, New York City’s mayor, Eric Adams, created multilingual robocalls using an AI-generated version of his own voice, speaking languages he doesn’t actually speak, without telling voters. He called it innovation. Civil rights advocates waved their red flags and called it a warning. When governments teach citizens to trust synthetic speech, they argued, the line between leadership and illusion begins to blur.
That line has already faded elsewhere. In Indonesia, a digital avatar of a deceased president appeared to endorse a living candidate. In Argentina, political rivals used AI-generated images to paint themselves as heroes and their opponents as weak. Political caricature has existed for centuries, but generative AI changes the scale. Anyone can now replicate a leader’s face or voice with tools that cost less than a dinner. What once took a newsroom now takes a single upload and a few minutes.
The spread of manipulated media exposes just how far regulation has fallen behind. Safeguards such as watermarking or labeling haven’t kept pace with the speed and accessibility of these technologies. And as digital platforms accelerate the spread of content, each deepfake circulates faster than truth can catch up. The result is not simply more misinformation but less certainty, a loss that damages parties across the political spectrum and democracy itself.
Voice cloning is one of the most dangerous frontiers. NPR recently reported that several popular AI platforms could accurately mimic political leaders, including President Donald Trump and U.K. Prime Minister Keir Starmer, despite supposed guardrails. The Center for Countering Digital Hate warned that “bad actors can tell their lies at an unprecedented scale and persuasiveness for virtually nothing.” In 2024, a fake Joe Biden robocall before the New Hampshire primary urged voters not to cast ballots, prompting criminal charges. In Slovakia, a fabricated audio clip of a candidate bragging about rigging votes spread rapidly online. You can question a photo, but a familiar voice that sounds human, warm, and direct may feel real enough to believe.
This erosion of certainty is democracy’s quiet crisis. Political systems rely on a shared foundation of truth: the assumption that when a political figure speaks, it is truly that person speaking. As Carnegie Europe stated, “AI models enable malicious actors to manipulate information and disrupt electoral processes,” while regulators seem to move at a glacial pace. That speed, combined with the global reach of social media, creates a constant fog in which voters are left doubting everything or believing the wrong things completely.
Yet the issue is not artificial intelligence itself, but its unrestrained and undisclosed use. Transparency, not panic, must be the goal. Governments and candidates should disclose when AI is used in official communication. Platforms like OpenAI, Google, and Adobe have started embedding hidden watermarks or digital signatures in the content they generate, but those marks can often be stripped in minutes, blunting the effort. Regulators across countries must define penalties for deceptive use during elections. Education, too, is part of the solution. Finland’s national AI literacy programs teach citizens how to identify manipulated content, a pertinent reminder that democracy depends not just on free speech, but on informed listening.
AI is not going away. Its capacity to translate, personalize, and even humanize communication for voters in a language or format they understand can be constructive if guided by ethics and transparency. But without accountability, technology becomes a multiplier of manipulation. Each deepfake, each cloned voice, pulls the red thread of credibility a little looser.