AI tools are already helping bad actors automate the spread of election disinformation

By Axios | Created at 2024-10-22 08:08:01 | Updated at 2024-10-22 10:25:23

AI-powered election disinformation is lurking in the shadows of social media, and many might not spot it before they cast their votes this year.

Why it matters: Margins for several major U.S. elections — including the presidential race — are razor thin.


  • Even the smallest social media campaign could influence a voter's choice and possibly sway an election's outcome.

Driving the news: Two researchers at Clemson University published a study last week detailing how a disinformation campaign has used large-language models to respond to political posts on X, formerly Twitter, seemingly autonomously.

  • The disinformation network, first reported by NBC News, has been running at least 686 X accounts and posted more than 130,000 times since January, according to the study.
  • Researchers believe the operators are likely acting on behalf of a domestic political group, and they've targeted a range of elections, including five Senate races, a House race and the presidential campaign.

The intrigue: The network exclusively replies to posts from legitimate users. It also replies to posts about other topics, including professional and collegiate sports, to help the accounts appear more legitimate.

  • The operatives switched to using the Dolphin large language model, which has fewer content moderation constraints, in June after initially launching their efforts with ChatGPT.

Yes, but: Engagement is relatively low on these posts, researchers found.

  • AI-powered posts alone aren't likely to sway many voters in 2024, Darren Linvill, one of the report's authors and co-director of Clemson's Watt Family Innovation Center Media Forensics Hub, told Axios.
  • Bots still represent a small percentage of the accounts posting on social media about politics, Linvill noted. "If you're talking about politics, whoever you're talking to, at the end of the day, is probably a real person."

Between the lines: Like any other kind of disinformation, the full impact of AI-fueled campaigns on the 2024 U.S. vote won't be clear for months, possibly years.

  • Determining which posts come from legitimate users versus bot accounts requires extensive research.
  • Even then, researchers typically uncover these networks through chance encounters with bots online or tips from outside sources.

The big picture: Election officials have been fending off a litany of mis- and disinformation threats leading up to this year's vote.

  • Several nation-states have already tapped AI tools — including DALL-E and ChatGPT — to create images and write fake news stories that spread lies about political candidates.

What's next: 2024 is just the first major election where AI-enabled content tools have been readily available. Experts fear that coming years could be worse.

  • "While I don't want people to be too scared of the power of AI disinformation just yet, it is real, and we do need to be concerned about it," Linvill said.

The bottom line: Caution is the best way to combat the threat of disinformation.

  • State and local election officials provide accurate information about voting on their websites.
  • Several organizations are either fact-checking election lies or have tools to help people fact check information themselves.