Chinese Propaganda and AI-Generated Influence Operations

Chinese propaganda and social engineering operations have become increasingly sophisticated, using AI tools such as ChatGPT to create posts and comments and to drive engagement both domestically and internationally. Recently, OpenAI reported that it has disrupted four covert influence operations originating from China that employed its technology to generate social media posts and replies across various platforms, including TikTok, Facebook, Reddit, and X.

Topics of Engagement

The generated comments covered a wide range of topics, from U.S. politics to a Taiwanese video game in which players combat the Chinese Communist Party. The use of ChatGPT in these operations allowed for the creation of social media posts that both supported and criticized various contentious issues, thereby stirring misleading political discourse.

Insights from OpenAI

Ben Nimmo, principal investigator at OpenAI, stated, “What we’re seeing from China is a growing range of covert operations using a growing range of tactics.” OpenAI also indicated that it has disrupted several operations believed to have originated in Russia, Iran, and North Korea. Nimmo elaborated on the Chinese operations, noting that they “targeted many different countries and topics […] some of them combined elements of influence operations, social engineering, and surveillance.”

Historical Context

This is not the first instance of such operations. In 2023, researchers from cybersecurity firm Mandiant highlighted that AI-generated content has been utilized in politically motivated online influence campaigns on numerous occasions since 2019. The increasing sophistication of these campaigns raises concerns about the implications for global political discourse.

Recent Developments

In 2024, OpenAI published a blog post outlining its efforts to disrupt five state-affiliated operations across China, Iran, and North Korea that were leveraging OpenAI models for malicious purposes. These applications included debugging code, generating scripts, and creating content for use in phishing campaigns.

Manipulation of Political Narratives

That same year, OpenAI reported that it had disrupted an Iranian operation that used its models to create long-form political articles about U.S. elections, which were subsequently posted on fake news sites masquerading as both conservative and progressive outlets. The operation also involved generating comments for posting on X and Instagram through fake accounts, promoting opposing viewpoints.

Engagement Metrics

Nimmo remarked, “We didn’t generally see these operations getting more engagement because of their use of AI. For these operations, better tools don’t necessarily mean better outcomes.” This observation suggests that while the technology may enhance the volume of content generated, it does not guarantee increased effectiveness in influencing public opinion.

Future Implications

The implications of these findings are significant. As generative AI continues to evolve and becomes cheaper and more accessible, the ability to produce content en masse will likely make influence campaigns like these easier and more cost-effective to execute, even if their overall efficacy remains unchanged.

This article originally appeared on Engadget.
