Chinese Official’s ChatGPT Use Exposes Global Harassment Campaign Targeting Dissidents

A newly released report from OpenAI has revealed a sweeping international harassment campaign allegedly linked to Chinese state interests. The operation, which targeted critics of Beijing living abroad, came to light in an unexpected way: a Chinese law enforcement official reportedly used ChatGPT as a personal logbook to document the campaign’s activities.

According to investigators, the official treated the AI platform like a private diary — recording operational notes, drafting strategies, and outlining intimidation tactics aimed at overseas dissidents. What might have been routine digital note-taking ultimately exposed a coordinated effort involving fake identities, forged legal documents, and disinformation campaigns.

The findings illustrate how artificial intelligence tools are being integrated into modern influence operations — and how small digital habits can unravel large-scale state-linked campaigns.


How ChatGPT Became an Accidental Evidence Archive

The report from OpenAI describes how a user account linked to a Chinese law enforcement official was used to record sensitive operational details. Rather than generating propaganda directly through AI, the individual used ChatGPT as an organizational workspace.

Investigators say entries included:

  • Notes about targeting specific dissidents abroad
  • Descriptions of impersonation attempts
  • Plans to spread legal intimidation messages
  • Documentation of fabricated materials

The account functioned less like a chatbot session and more like a digital operations notebook.

Once OpenAI researchers detected suspicious activity patterns, they analyzed the logs and identified connections between the planning notes and real-world online harassment efforts. The account was subsequently banned.


Impersonating U.S. Immigration Officials

One of the most concerning tactics described in the report involved impersonation of U.S. government authorities.

Operators allegedly posed as U.S. immigration officers and contacted a Chinese dissident living in the United States. The message warned that the individual’s public criticism of Beijing had violated American law — an apparent attempt to create fear of deportation or legal consequences.

Such impersonation tactics are designed to:

  • Intimidate activists
  • Undermine trust in host-country institutions
  • Pressure individuals into silence
  • Disrupt political speech

Although the communication was fraudulent, the psychological impact of such threats on targets can be severe.


Forged Legal Documents and Account Takedown Attempts

The report also describes efforts to create fabricated legal paperwork.

According to investigators, operators allegedly generated forged documents that appeared to originate from a U.S. county court. These documents were then used in attempts to request removal of dissidents’ social media accounts.

The strategy combined legal intimidation with digital censorship.

OpenAI researchers later linked elements of the diary entries to real online actions, suggesting that the operation progressed beyond the planning stage into implementation.

This blending of fabricated legal threats and platform manipulation represents a growing tactic in cross-border repression campaigns.


A Network of Fake Accounts and Coordinated Messaging

Investigators say the broader operation involved:

  • Hundreds of participants
  • Thousands of fake online personas
  • Coordinated social media posts
  • Website content dissemination

While ChatGPT was primarily used for planning and documentation, propaganda content itself was reportedly created through various other tools and distributed across multiple platforms.

This structure reflects a sophisticated digital influence strategy designed to overwhelm critics with disinformation and legal threats.

Ben Nimmo, a principal investigator at OpenAI, described the campaign as a “new wave of cross-border repression” connected to the Chinese Communist Party.

He emphasized that the activity was not random harassment but a targeted and organized campaign aimed at silencing dissent.


The Fabricated Obituary Scheme

Among the most disturbing findings was a plan to spread false reports of a dissident’s death.

The ChatGPT log allegedly documented a strategy to:

  • Draft a fake obituary
  • Create images of a gravestone
  • Circulate death rumors online

In 2023, similar false death rumors appeared online and were later covered by Voice of America in its Chinese-language reporting.

While it is not confirmed that the online rumors directly resulted from this specific operation, the parallels highlight how digital disinformation can escalate into real-world psychological harm.

Spreading false death narratives can isolate dissidents, confuse supporters, and create reputational damage that persists even after debunking.


Attempted Political Influence in Japan

The operation extended beyond targeting Chinese dissidents.

Investigators found that the same user account requested assistance designing a campaign against Japanese political figure Sanae Takaichi.

The proposed campaign aimed to stir public anger over U.S. tariffs on Japanese goods — an attempt to exploit trade tensions for political influence.

According to OpenAI, ChatGPT declined to assist with that request.

However, researchers later observed hashtags critical of Takaichi circulating on a graphic design forum, including posts referencing trade disputes.

This suggests that even when AI systems refuse to generate content, operators may pursue campaigns using other channels.


AI as a Tool in Geopolitical Competition

The report emerges amid escalating competition between Washington and Beijing over artificial intelligence.

Governments increasingly view AI as:

  • An economic growth driver
  • A military capability enhancer
  • A strategic information control tool
  • A cybersecurity risk vector

The debate extends beyond private platforms.

The United States Department of Defense is reportedly engaged in a dispute with AI firm Anthropic over safety restrictions built into the company’s models.

Defense Secretary Pete Hegseth has pressed Anthropic CEO Dario Amodei to reconsider certain safeguards or risk losing a major Pentagon contract.

These tensions underscore a central question: how should AI systems balance openness, safety restrictions, and national security considerations?


Small Habits, Major Exposure

Security analysts say the OpenAI findings reveal an important insight about modern state operations.

Michael Horowitz, a former Pentagon official now at the University of Pennsylvania, told CNN that the case demonstrates how governments integrate AI into routine information operations — not only advanced research programs.

What makes this case remarkable is that exposure did not occur through a cyberattack or whistleblower leak.

Instead, it was a simple operational habit — using an AI chatbot as a personal notebook — that left a traceable record.

Digital footprints, even in seemingly private spaces, can accumulate into evidence trails.


The Evolution of Cross-Border Repression

Cross-border repression refers to efforts by governments to silence or intimidate critics living outside their borders.

Tactics may include:

  • Surveillance
  • Harassment
  • Threats to family members
  • Legal intimidation
  • Disinformation campaigns
  • Online impersonation

The integration of AI tools introduces new efficiencies into these efforts.

AI can assist in:

  • Organizing targets
  • Drafting messaging
  • Coordinating fake identities
  • Monitoring online reactions
  • Scaling propaganda efforts

At the same time, AI systems can also expose patterns when misuse is detected.


The Role of Tech Companies in Countering Influence Operations

The case highlights the growing responsibility of technology companies in identifying and disrupting coordinated inauthentic behavior.

OpenAI responded by banning the account involved and publishing a detailed transparency report.

Such reporting serves multiple purposes:

  • Informing policymakers
  • Warning potential targets
  • Improving detection systems
  • Signaling consequences for misuse

As AI platforms grow more powerful, the need for proactive monitoring and transparent disclosure increases.


Balancing AI Innovation and Security

Artificial intelligence tools offer immense benefits in research, communication, and productivity.

However, they also create new opportunities for manipulation.

Governments, researchers, and private companies must navigate a delicate balance between:

  • Protecting user privacy
  • Enforcing platform safeguards
  • Preventing misuse
  • Preserving free expression

The ChatGPT diary case illustrates that AI systems can function as both tools of influence and instruments of accountability.


Global Implications

The exposure of this harassment campaign may have broader geopolitical consequences.

It underscores:

  • Rising tensions between China and the United States
  • The weaponization of information channels
  • The strategic importance of AI governance
  • The vulnerability of diaspora communities

For dissidents abroad, the case serves as a reminder that digital intimidation efforts can extend across borders.

For democratic governments, it reinforces the need for:

  • Strong cybersecurity policies
  • Protective measures for activists
  • International cooperation on digital repression


Final Thoughts

The revelation that a Chinese official used ChatGPT as a logbook for documenting a global harassment campaign represents a striking intersection of AI technology and state-led influence operations.

What began as a routine digital habit ultimately exposed a network of fake identities, forged documents, and intimidation strategies targeting critics overseas.

The case demonstrates two powerful truths:

First, artificial intelligence is becoming embedded in modern geopolitical tactics.

Second, even advanced influence campaigns remain vulnerable to human error.

As AI competition intensifies and governments expand digital capabilities, transparency, oversight, and responsible platform governance will play a critical role in protecting open societies from covert manipulation.

In this instance, a chatbot’s digital memory became a window into a broader campaign — proving that in the age of AI, even small actions can have global consequences.