OpenAI Bans Accounts Misusing ChatGPT for Surveillance and Influence Campaigns

OpenAI on Friday revealed that it banned a set of accounts that used ChatGPT to develop a suspected artificial intelligence (AI)-powered surveillance tool.

The social media listening tool is believed to originate from China and is powered by one of Meta's Llama models. The accounts in question used OpenAI's models to generate detailed descriptions and analyze documents for an apparatus capable of collecting real-time data and reports about anti-China protests in the West and sharing the insights with Chinese authorities.

The campaign has been codenamed Peer Review owing to the "network's behavior in promoting and reviewing surveillance tooling," researchers Ben Nimmo, Albert Zhang, Matthew Richard, and Nathaniel Hartley noted, adding that the tool is designed to ingest and analyze posts and comments from platforms such as X, Facebook, YouTube, Instagram, Telegram, and Reddit.

In one instance flagged by the company, the actors used ChatGPT to debug and modify source code that's believed to run the monitoring software, referred to as "Qianyue Overseas Public Opinion AI Assistant."

Besides using OpenAI's models as a research tool to surface publicly available information about think tanks in the United States, as well as government officials and politicians in countries such as Australia, Cambodia, and the United States, the cluster was also found to leverage ChatGPT access to read, translate, and analyze screenshots of English-language documents.

Some of the images were announcements of Uyghur rights protests in various Western cities, and were likely copied from social media. It's currently not known if these images were authentic.
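For context, reading and translating a screenshot of this kind takes only a few lines against OpenAI's public API. The minimal sketch below is a hypothetical illustration, not the actors' actual tooling; the model choice, file name, and prompt are assumptions made for the example:

import base64
from openai import OpenAI

client = OpenAI()  # API key is read from the OPENAI_API_KEY environment variable

# Encode a local screenshot (illustrative file name) as a base64 data URL
with open("screenshot.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any vision-capable model; illustrative choice
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Read the text in this screenshot, translate it to English, and summarize it."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)

The point is less the snippet itself than how low the barrier is: the same commodity API calls that power legitimate products can be chained into monitoring workflows.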

OpenAI also said it disrupted several other clusters that were found abusing ChatGPT for a range of other malicious activities.

The development comes as AI tools are increasingly being used by bad actors to facilitate cyber-enabled disinformation campaigns and other malicious operations.

Last month, Google Threat Intelligence Group (GTIG) revealed that over 57 distinct threat actors with ties to China, Iran, North Korea, and Russia used its Gemini AI chatbot to improve multiple phases of the attack cycle, conduct research into topical events, and perform content creation, translation, and localization.

"The unique insights that AI companies can glean from threat actors are particularly valuable if they are shared with upstream providers, such as hosting and software developers, downstream distribution platforms, such as social media companies, and open-source researchers," OpenAI said.

"Equally, the insights that upstream and downstream providers and researchers have into threat actors open up new avenues of detection and enforcement for AI companies."

________________________________________________________________________________________________________________________________
Original Article Published at The Hacker News
________________________________________________________________________________________________________________________________