OpenAI is hoping to alleviate concerns about its technology’s influence on elections, as more than a third of the world’s population gears up to vote this year. Among the countries where elections are scheduled are the United States, Pakistan, India, South Africa, and the European Parliament.
“We want to make sure that our AI systems are built, deployed, and used safely. Like any new technology, these tools come with benefits and challenges,” OpenAI wrote Monday in a blog post. “They are also unprecedented, and we will keep evolving our approach as we learn more about how our tools are used.”
There’s been growing apprehension about the potential misuse of generative AI (genAI) tools to disrupt democratic processes, especially since OpenAI — backed by Microsoft — introduced ChatGPT in late 2022.
The OpenAI tool is known for its human-like text generation capabilities. And another of the company’s tools, DALL-E, can generate highly realistic fabricated images, often referred to as “deepfakes.”
OpenAI gears up for elections
For its part, OpenAI said ChatGPT will redirect users to CanIVote.org for specific election-related queries. The company is also focusing on enhancing the transparency of AI-generated images created with its DALL-E technology, with plans to incorporate a “cr” icon on such images to signal that they are AI-generated.
The company also plans to enhance its ChatGPT platform by integrating it with real-time global news reporting, including proper attribution and links. The news initiative is an expansion of an agreement made last year with the German media conglomerate Axel Springer. Under that deal, ChatGPT users gain access to summarized versions of select global news content from Axel Springer’s various media channels.
In addition to those measures, the company is also developing techniques to identify content created by DALL-E, even after the images undergo modifications.
Growing concerns about mixing AI and politics
There’s no universal rule for how genAI should be used in politics. Last year, Meta declared it would prohibit political campaigns from using genAI tools in their advertising and mandate that politicians reveal any such use in their ads. Similarly, YouTube said all content creators must disclose whether their videos contain “realistic” but altered media, including those created with AI.
Meanwhile, the US Federal Election Commission (FEC) is deliberating on whether existing laws against “fraudulently misrepresenting other candidates or political parties” apply to AI-generated content. (A formal decision on the issue is pending.)
False and deceptive information has always been a factor in elections, said Lisa Schirch, the Richard G. Starmann Chair in Peace Studies at the University of Notre Dame. But genAI allows many more people to create ever more realistic false propaganda.
Dozens of countries have already set up cyberwarfare centers employing thousands of people to create false accounts, generate fraudulent posts, and spread false and deceptive information over social media, Schirch said. For example, two days before Slovakia’s election, a fake audio recording surfaced that purported to capture a politician discussing how to rig the vote.
Like ‘gasoline…on the burning fire of political polarization’
“The problem isn’t just false information; it is that malignant actors can create emotional portrayals of candidates designed to generate anger and outrage,” Schirch added. “AI bots can scan through vast amounts of material online to make predictions about what type of political ads might be persuasive. In this sense, AI is gasoline thrown on the already burning fire of political polarization. AI makes it easy to create material designed to maximize persuasion and manipulation of public opinion.”
Many of the attention-grabbing headlines about genAI involve deepfakes and fabricated images, said Peter Loge, director of the Project on Ethics in Political Communication at George Washington University. But the more significant threat comes from large language models (LLMs), which can instantly generate an endless stream of similar messages, flooding the world with fakes.
“LLMs and generative AI can swamp social media, comments sections, letters to the editor, emails to campaigns, and so on, with nonsense,” he added. “This has at least three effects — the first is an exponential rise in political nonsense, which could lead to even greater cynicism and allow candidates to disavow actual bad behavior by saying the claims were generated by a bot.
“We have entered a new era of, ‘Who are you going to believe, me, your lying eyes, or your computer’s lying LLM?’” Loge said.
Stronger protections needed ASAP
Current protections are not strong enough to prevent genAI from playing a role in this year’s elections, according to Gal Ringel, the CEO of the cybersecurity firm Mine. He said that even if a nation’s infrastructure could deter or eliminate attacks, the prevalence of genAI-created misinformation online could influence how people perceive the race and possibly affect the final results.
“Trust in society is at such a low point in America right now that the adoption of AI by bad actors could have a disproportionately strong effect, and there is really no quick fix for that beyond building a better and safer internet,” Ringel added.
Social media companies need to develop policies that reduce harm from AI-generated content while taking care to preserve legitimate discourse, said Kathleen M. Carley, a CyLab professor at Carnegie Mellon University. They could publicly verify election officials’ accounts using unique icons, for instance. Companies should also restrict or prohibit ads that deny the results of upcoming or ongoing elections. And they should label AI-generated election ads as such, increasing transparency.
“AI technologies are constantly evolving, and new safeguards are needed,” Carley added. “Also, AI could be used to help by identification of those spreading hate, identification of hate-speech, and by creating content that aids with voter education and critical thinking.”