As the 2024 presidential election approaches, artificial intelligence (AI) is becoming more involved in the work of government agencies, candidates, and officials. Its use in elections is now so widespread that it could affect the results. Meanwhile, the public’s trust in politicians is shaky, and the lack of comprehensive AI regulations keeps much of AI’s role in political processes obscured.

Kevin Pérez-Allen, chief communications officer for the nonpartisan healthcare advocacy organization United States of Care, told CNBC that AI will help campaigns understand voting patterns, craft messages to constituents, and analyze social media behavior.

Pérez-Allen, who has years of experience in political campaigns, has seen how technology changes campaigning. For example, ChatGPT is already being used to write first drafts of speeches, campaign ads, and fundraising emails and texts. With AI making some of these tasks easier, campaigns may not need as many people working on them.

However, he emphasizes that while AI can replicate many aspects of campaigning, such as collecting information, analyzing data, and writing, it can’t replace going door-to-door or talking directly to voters.

The Positive Potential of AI in Elections

AI could genuinely change how elections work, but much depends on the rules set for it.

Pérez-Allen points out that voters are often wrongly lumped into single categories in elections. For example, groups such as Latinos, Black voters, or suburban women are treated as if everyone in them holds the same opinions. AI could help move campaigns away from this oversimplified view by making outreach more detailed and personalized to different voters.

AI can also make it easier to get information about politics. Pérez-Allen imagines a future where AI chatbots can explain a candidate’s policies in a detailed and data-supported way, creating a feeling of direct communication with the campaign.

This could be especially helpful for diverse groups like Latinos, who might receive campaign messages in their specific languages and dialects. AI could also improve how messages are translated at voting booths, ensuring they preserve the original meaning rather than being translated word for word.

However, all these advancements hinge on effective regulation. Despite the attention drawn by President Biden’s executive order, there’s still no clear path for how these AI integrations will be regulated and implemented.

Sinclair Schuller, co-founder and managing partner of AI firm Nuvalence, who has worked with governments on AI integration, is concerned about the dangers of AI in elections. He believes we may see a flood of fabricated content both attacking and supporting political candidates, which could sow confusion.

Schuller is talking about deepfakes – fake videos, images, and audio created by AI that seem real. In politics, the first story often gets the most attention. Schuller warns that some political groups might use deepfakes to falsely show their opponents in compromising situations. He added:

We will hopefully reach a spot where if you don’t have an authenticated, direct line to the source, then we can’t trust it.

In fact, deepfakes are already being used in politics. A clear example comes from Chicago’s mayoral primary in February 2023, when a deepfake audio clip made it sound as though candidate Paul Vallas endorsed police brutality. Vallas lost the race, but it’s hard to say how much the clip influenced that outcome.

Setting the Rules: AI Governance in Elections

As AI becomes more involved in politics, especially with the 2024 presidential election on the horizon, there’s a push for better rules to manage its use. The European Union’s (EU) new AI Act and President Biden’s Executive Order in the U.S. are big steps in this direction. Here’s a look at what they mean:

EU’s AI Act: Setting the Standard for AI Use

The EU’s AI Act is a major move to make AI safe and fair. It sets clear rules for how AI should work, with a focus on protecting people’s rights. This includes:

  • New Rules for AI: Clear guidelines for AI systems, especially those that are high-risk, which could influence how AI is used in elections.
  • Better Governance: A new system to keep an eye on AI, with powers at the EU level to make sure rules are followed.
  • Stronger Rights Protections: More safeguards for people’s rights, and special conditions for using AI in sensitive areas like law enforcement.

Biden’s Executive Order: Enhancing AI Safety in the U.S.

President Biden’s order emphasizes AI’s safety, security, and respect for privacy and rights. This could greatly affect how AI is used in political campaigns, focusing on:

  • AI Safety and Security: Making sure AI systems are safe before they are used widely.
  • Privacy and Rights: Protecting people’s personal data and ensuring AI doesn’t discriminate.

The AI Act and the Executive Order provide a blueprint for how AI could be regulated in elections. By tying AI’s growth to ethical standards and people’s rights, these rules aim to make sure AI is used responsibly in politics, making the democratic process stronger and more trustworthy.