ARTIFICIAL IS THE NEW REALITY: HOW AI IS CHANGING ELECTIONS IN 2024


Since its introduction in the mid-1950s, Artificial Intelligence (AI) has done nothing but gain momentum, rapidly becoming a tool that permeates our lives and must be reckoned with. 2023, however, witnessed the popularisation of generative AI such as ChatGPT, capable of producing content that closely resembles that of humans. Meanwhile, this year has seen elections held in more than 60 countries, home to nearly half the world's adult population. As the world braces for transitions and democracy is put to the test, it is important to examine the extensive role AI has played in this year's elections.

How AI has been used in elections this year

So what is all the hype around generative AI about? Generative AI refers to machine-learning models trained to create new data such as text, images, video and audio, and its power to produce realistic and creative content is transforming industries and society. Quite often, it is also used to generate "deepfakes" – convincing images, videos or audio of real or fictional people that are hard to distinguish from genuine content.


Generative AI has played an extensive role in the 2024 U.S. Presidential Election. Earlier this year, voters in the state of New Hampshire received robocalls featuring a deepfake of President Joe Biden's voice urging them not to vote. The case was later investigated and led to criminal charges.

Donald Trump, the country's former president and candidate in the 2024 election, has been quick to jump on the AI bandwagon. Before the Democratic National Convention, he posted on his social media platform Truth Social a fake image of what looks like U.S. Vice-President Kamala Harris speaking at a communist rally. In another post, he shared AI-generated images of Taylor Swift's fans, widely known as Swifties, wearing "Swifties for Trump" shirts. The post also included an image of Taylor Swift as Uncle Sam, with a caption that read "Taylor Wants You to Vote for Donald Trump".

In Indonesia, AI was used to generate a cartoonish avatar of presidential candidate Prabowo Subianto to target young voters. Despite accusations of human rights abuses in East Timor, Subianto was rebranded as "gemoy", Indonesian slang meaning "cute and cuddly". In Pakistan, the jailed opposition leader Imran Khan used AI to generate a video addressing his supporters, and later a victory speech as his party emerged victorious.

The list goes on. As Venezuela reeled from unrest following a turbulent election and Nicolás Maduro's controversial victory, a deepfake of a supposed high-ranking military officer discussing Maduro's defeat drew more than one million views. India – the world's biggest democracy – saw AI woven through its election this year, from AI impersonations of party leaders calling voters to AI-generated footage of a deceased politician.

Tell me some good news

Despite all the "doomsday" stories about AI's destructive role in elections, it can be a force for good if used properly and ethically. The popularisation and accessibility of AI mean it can act as an access point for information, helping people make better-informed decisions. This is even more significant where access to information is restricted and/or freedom of speech is suppressed.

AI's ability to store and analyse large amounts of data means it can improve the efficiency and accuracy of election campaigns, helping to identify trends and voter insights. AI can also serve as a fact-checking tool to combat misinformation, or be used to moderate content and flag potentially harmful accounts on social media. This depends, however, on the willingness of platform companies to invest in this potential.

The use of AI must always come with an awareness of its risks and biases, and hence with a responsibility to regulate it – including self-regulation.

The challenges

Unfortunately, the power and accessibility of AI, coupled with loopholes in laws and regulations, mean that AI is being used in many ways that harm elections.

As we have seen, politicians and users have embraced generative AI as a cost-effective way to spread their messages, however false. AI can now easily impersonate humans, generating footage and writing that mimic real people. AI can also be used for political micro-targeting: extracting and analysing users' data to tailor messages to their preferences. This produces filter bubbles and echo chambers, where users are exposed only to content aligned with their views rather than to diverse perspectives. The notorious Cambridge Analytica scandal, for example, harvested data from up to 87 million Facebook users to build a "psychological warfare tool" in support of Donald Trump's presidential campaign.


As AI grows increasingly sophisticated, it can be hard to distinguish truth from falsehood. Chatbots, for example – software designed to simulate human conversation – have increasingly been used in election campaigns as agents that distribute political messages and, just as often, interfere with voters' choices by spreading misinformation.

Conclusion


So where do we go from here? And is there nothing we can do? 

One thing is certain: AI will only become more powerful, and its use more widespread. And while there are grounds for worry, we can also see how best to minimise AI's negative impacts and turn it into a useful tool. Most countries are yet to adopt solid AI guidelines, and that needs to change. Better laws and regulations need to be in place, including for political candidates and governments. Platform companies need to be more transparent and accountable in how they deploy AI, and to invest in moderating and training AI tools so they can be used productively. As for users, who have a huge role to play, we need to be aware of our own exposure to AI and to use it responsibly.

If there is anything we have learned from how AI has been used in elections this year, it is that the future looks very scary indeed. Without a collaborative effort, we put the future of democracy at risk.

Eleanor Truong

Eleanor Truong is in her final year studying Media and Journalism at Monash University. She has a huge interest in international affairs, linguistics, and history, and can be found going down the rabbit hole on more topics than time permits. She also likes reading and running, and has recently gotten into chess, though she claims she is "meagre at best". After graduation, Eleanor wants to work in diplomacy or government and hopes to one day be able to travel around the world, especially South America!
