
South Korea contends with AI and electoral integrity

Han Dong-hoon, interim leader of the ruling People Power Party (PPP), speaks to supporters during a campaign event for the upcoming parliamentary elections in Anyang, South Korea, 29 March 2024 (Photo: Reuters/Kim Jae-Hwan/SOPA Images).

In Brief

As South Korea gears up for its upcoming elections, the country is grappling with the potential impact of AI on its electoral process. While AI can be used to enhance democratic engagement, there are also concerns about its misuse, such as the spread of misinformation and deepfakes. South Korea has taken steps to address these concerns, such as banning deepfakes in political campaigning. But critics argue that such measures may not be enough, and that more needs to be done to ensure the ethical use of AI in politics. The ongoing debate in South Korea reflects the larger global conversation about the role of AI in democracy.


In the lead-up to South Korea’s general elections on 10 April 2024, the country is at the forefront of a global conversation about the interplay between artificial intelligence (AI) and democracy. This represents a critical juncture, where the potential for technology to transform democratic engagement intersects with the imperative to protect electoral integrity.

Generative AI tools offer political parties and candidates the ability to automate the creation of election pledges, speeches and even campaign media such as songs and videos. These technologies employ advanced algorithms to produce human-like text, audio and visuals from extensive data. The ability to save time and costs for candidates while effectively tailoring campaign promises to voters based on gender, age and location offers significant advantages, enabling personalised and efficient communication with voters.

But the potential for abuse is clear and AI may disadvantage certain candidates. Controversy flared in South Korea prior to local elections in June 2022, when a supporter of a People Power Party candidate created a video featuring an AI-rendered avatar of President Yoon Suk Yeol endorsing their candidate. The incident sparked debate over whether it violated the Public Official Election Act as well as whether it breached the President’s duty to remain neutral during elections.

To counter misinformation and uphold democratic fairness, the National Assembly has prohibited AI-generated deepfake content in political campaigning within 90 days of an election. Violations of this law could result in up to seven years in prison or fines of up to US$38,000 (50 million won).

Election monitors have also established guidelines to mitigate the risks associated with AI-generated content. The guidelines mandate transparency in the use of AI for political communication, requiring that any AI-assisted content be clearly disclosed so that voters are not duped by AI-generated falsehoods.

Considering how rapidly AI-generated content can spread and how challenging it is to remove, South Korean tech companies, along with government and political bodies, are adopting vigilant management strategies.

DeepBrain AI, the startup behind the Yoon avatar, released software that can help detect deepfakes online. Naver, South Korea’s leading search engine, has ramped up monitoring efforts to defend against new abusive patterns, including AI-generated comments and deepfakes. The platform also introduced features allowing users to report election misinformation directly, with a dedicated reporting centre established to facilitate communication with the National Election Commission.

Scepticism about the effectiveness of these measures persists, with critics highlighting the difficulty of enforcing regulations against content produced overseas. Domestic internet users' reliance on virtual private networks complicates matters further and could prolong investigations until after the elections have concluded, underscoring the ongoing struggle against digital misinformation in the political arena.

Critics of current regulations argue that focusing solely on the technological means of content creation, such as banning deepfake videos, might not fully address the challenges posed by AI in politics. They contend that restricting specific technologies, rather than addressing the underlying issues, is a limited approach. There is also concern that such an approach could lead to excessive censorship, potentially impacting freedom of expression online.

This highlights the need for regulations that target the veracity and ethical use of content, rather than the tools used to create it.

The debate extends to the potential of AI to not only assist but eventually replace human roles in political activities. AI experts posit that with future advancements, AI could be trained to understand and even embody political ideologies. This prospect raises concerns about accuracy, bias and the loss of human oversight in political decision-making, highlighting the need for ongoing research and ethical considerations in AI development.

South Korea’s ban on deepfake technology in election campaigning reflects a concerted effort to pre-emptively address the challenges posed by AI. The Central Election Commission of South Korea has clarified that activities encouraging voter participation using deepfake videos or similar technologies are not restricted by law, provided they do not endorse, condemn or reference a specific candidate or political party.

This legislative approach, aimed at curbing the misuse of AI in spreading false or biased information, indicates a recognition of the importance of a balanced approach that leverages AI opportunities to enhance democratic participation and efficiency, while also protecting against its potential to compromise electoral fairness and integrity.

South Korea is not alone in this new era of digital democracy. Canada, India, the United States and others face similar challenges in upcoming elections this year. They are all striving to strike a balance between harnessing the benefits of AI in electoral processes and mitigating its risks. This global effort signifies a collective commitment to protecting democratic integrity while leveraging technological progress.

The success of these measures hinges on a comprehensive understanding of AI technologies by both regulators and the public. The steps South Korea takes and the obstacles it encounters leading up to the April general elections will serve as an instructive case study for the international community, contributing to the discourse on AI’s role in evolving democratic practices and institutions in the digital age.

Tae Yeon Eom is a Research Scholar at the Asia Pacific Foundation of Canada and a PhD Candidate and sessional lecturer at the University of British Columbia.


Article printed from East Asia Forum (https://www.eastasiaforum.org)

Copyright ©2024 East Asia Forum. All rights reserved.