In 2024, roughly half of the world’s population is living in countries holding major elections (Ewe, 2023). This mass exercise of democracy, combined with AI’s growing prominence in the media, has led many to question how well protected our elections actually are. With the majority of U.S. adults “extremely or very concerned” about AI’s potential impact (Gracia, 2024), we are left to wonder: is AI encroaching on our democratic right to vote?
Fairness in elections has always been a focus of mainstream media worldwide; with the growth of AI, however, it has been called even further into question by political figures openly questioning their opposition’s tactics and claiming the use of AI to manipulate voters. We saw this earlier this year in India, where AI-generated videos of actors were created to criticise candidates and sway public opinion (Das, 2024). Mirroring this, the US presidential election has become the forefront of political tensions,
bringing attention to the potential impact AI can have. Trump used it to falsely tell voters that he had Taylor Swift’s endorsement (which backfired when she released a statement endorsing Harris in response), and again when he tried to discredit Harris by claiming she had “AI’d it”, in reference to a photo of a crowd of her supporters at an airport, to make herself seem popular (Looker, 2024; Trump, 2024). This is not only an internal problem but also a potential avenue for foreign powers to step in and manipulate voters into choosing a candidate who would benefit them. A spokesperson for US National Intelligence reported a significant number of Russian AI-generated stories used to “denigrate the vice president” and sway voters towards Trump through a mix of conspiratorial narratives and falsified articles (Landay and Brunnstrom, 2024). Dozens were published on sites mimicking American news outlets, promoting Trump or portraying him as a martyr. Many contained ‘sources’ intended to bolster their ‘legitimacy’, which were in fact just AI voice-overs on YouTube videos (Myers, Robinson, Sardarizadeh and Wendling, 2024). While Trump’s use of AI was easily disproved, causing limited
harm, his accusation of Harris may have caused a ripple effect that could discredit her campaign. As AI images advance, they become increasingly harder to spot; people begin to scrutinise every image they see, which could lead them to doubt the authenticity of real images posted by Harris if they associate her with AI. Perhaps the real danger of AI is not the misinformation it spreads but the trust deficit it leaves in its wake (Griffin, 2024).
The security of elections worldwide has come under scrutiny because of the potential for voter suppression intended to destabilise a government and call its legitimacy into question. In 2010, Facebook experimented with the effect of virtual peer pressure on voter turnout and found that one simple button directly brought 60,000 people to the polls (Corbyn, 2012). If something so basic could have such a major impact on so many people, imagine what might happen if targeted AI ads were shown to undecided voters. This aligns with the hypodermic needle model of media influence, which explains how
people absorb information from the media without questioning it. Only this month, Chinese hackers managed to break into the phones of top Republican and Democratic officials; the act was considered politically neutral, with the intent of destabilising the election in general. With AI-assisted phishing scams and security scans, attacks like these will only become easier to perpetrate (Tucker, 2024). Similar instances were recorded earlier this year in the Pakistani general election, where deepfake videos of candidates appeared telling people to boycott the vote, undermining its credibility and legitimacy and leading to widespread concern (Iqbal and Mushtaq, 2024).
There have been many attempts to mitigate AI risks over the past decade, from theoretical ideas such as an AI pause to public education and the implementation of fact-checkers. However, most of these have only exacerbated fears about existential risks. Mitigation cannot be achieved without the cooperation of governments and tech giants, but it is needed to protect our democratic, human right to vote. If we continue to let AI run rampant, who knows what effects it could have. I believe there should be a complete ban on AI in political campaigns and media, to prevent candidates from trying to manipulate voters and to maintain the existing level of trust that people have in the electoral system. I do recognise, though, that this would be extremely hard to police: if people are struggling now to tell the difference between what is real and what is AI, how will they be able to tell in five or ten years as the technology continues to evolve? There is a risk of candidates accusing their opponents of having ‘AI’d it’ in an attempt to distract them and destabilise their campaign strategy. It could also infringe on the rights of free speech and expression if people used such a ban to limit what their opponents can say by accusing them of using AI-written speeches or views. Where would we draw the line? How could you prove that somebody had used AI without discrediting them in the process? As well as this, I think it is vitally important
for social media companies to set up harsher restrictions on the use of AI-generated ads, to prevent demographics from being unfairly targeted by personalised material. I think a complete ban on all AI is unrealistic because there are so many ways it can benefit us as a society, but to make sure that our democracies are not at risk of eroding completely, people in power need to step up and do their part to protect this sacred part of our society.
Bibliography:
Corbyn, Z. (2012) “Facebook experiment boosts US voter turnout”, Nature (online), https://doi.org/10.1038/nature.2012.11401
Das, S. (2024) “Video Of Ranveer Singh Criticising PM Modi Is A Deepfake AI Voice Clone”, Boom Live (online), https://www.boomlive.in/fact-check/viral-video-bollywood-actor-ranveer-singh-congress-campaign-lok-sabha-elections-claim-social-media-24940
Ewe, K. (2023) “The Ultimate Election Year: All the Elections Around the World in 2024”, Time (online), https://time.com/6550920/world-elections-2024/
Gracia, S. (2024) “Americans in both parties are concerned over the impact of AI on the 2024 presidential campaign”, Pew Research Center (online), https://www.pewresearch.org/short-reads/2024/09/19/concern-over-the-impact-of-ai-on-2024-presidential-campaign/
Griffin, A. (2024) “Donald Trump is invoking AI in the most dangerous possible way”, The Independent (online), https://www.independent.co.uk/tech/donald-trump-ai-kamala-harris-picture-photo-b2595228.html
Iqbal, A. and Mushtaq, S. (2024) “AI’s impact on South Asian elections: Technological Innovation, Voter Rights and Regulatory Frameworks”, RSIL (online), https://rsilpak.org/2024/ais-impact-on-south-asian-elections-technological-innovation-voter-rights-and-regulatory-frameworks/
Landay, J. and Brunnstrom, D. (2024) “Russia produced most AI content to sway presidential vote, US intelligence official says”, Reuters (online), https://www.reuters.com/world/us/russia-produced-most-ai-content-sway-us-presidential-vote-says-us-intelligence-2024-09-23/
Looker, R. (2024) “Trump falsely implies Taylor Swift endorses him”, BBC (online), https://www.bbc.co.uk/news/articles/c5y87l6rx5wo
Myers, P., Robinson, O., Sardarizadeh, S. and Wendling, M. (2024) “A Bugatti car, a first lady and the fake stories aimed at Americans”, BBC (online), https://www.bbc.co.uk/news/articles/c72ver6172do
Trump, D. (@realDonaldTrump) (2024) “Has anyone noticed that Kamala CHEATED at the airport?”, Truth Social (online), https://truthsocial.com/@realDonaldTrump/posts/112944255426268462
Tucker, E. (2024), “AP sources: Chinese hackers targeted phones of Trump, Vance, people associated with Harris campaign”, AP News (online), https://apnews.com/article/china-fbi-trump-vance-hack-cellphones-d085787db764d46922a944b50e239e4a#