A researcher managed to uncover over 100,000 sensitive ChatGPT conversations that could be found via Google, due to a ‘short-lived experiment’ by OpenAI.
Henk Van Ess was one of the first to figure out that anyone could search for these chats using certain key words.
He found that people had been discussing topics like non-disclosure agreements, confidential contracts, relationship issues, insider trading plans, and how to cheat on papers.
This unexpected exposure occurred because of the share feature, which, when used, generated a predictably formatted link containing words from the chat.
This allowed people to find the conversations by searching ‘site:chatgpt.com/share’ and appending key words to the query.
Van Ess mentioned discovering a chat that detailed cyberattacks aimed at specific targets within Hamas, the group controlling Gaza that has been at war with Israel since October 2023.
Another involved a domestic violence victim talking about possible escape plans while revealing their financial shortcomings.
The share feature was supposed to simplify the process of sharing chats with others, though most users likely didn’t realize how publicly accessible their discussions would become.

OpenAI has acknowledged that the way ChatGPT was previously set up allowed more than 100,000 conversations to be freely searched on Google
In a statement to 404 Media, OpenAI did not dispute that more than 100,000 chats had been searchable on Google.
‘We just removed a feature from [ChatGPT] that allowed users to make their conversations discoverable by search engines, such as Google. This was a short-lived experiment to help people discover useful conversations,’ stated Dane Stuckey, the chief information security officer at OpenAI.
‘This feature required users to opt-in, first by picking a chat to share, then by clicking a checkbox for it to be shared with search engines,’ Stuckey added.
Now, when a user shares their conversation, ChatGPT creates a randomized link that uses no key words.
‘Ultimately we think this feature introduced too many opportunities for folks to accidentally share things they didn’t intend to, so we’re removing the option,’ Stuckey said.
‘We’re also working to remove indexed content from the relevant search engines. This change is rolling out to all users through tomorrow morning. Security and privacy are paramount for us, and we’ll keep working to maximally reflect that in our products and features,’ he added.

Researcher Henk Van Ess plus many others have already archived many of the conversations that were exposed
However, much of the damage has already been done, since many of the conversations were already archived by Van Ess and others.
For example, a chat that’s still viewable involves a plan to create a new bitcoin called Obelisk.
Ironically, Van Ess used another AI model, Claude, to come up with key words that would dredge up the juiciest chats.
To find people discussing criminal conspiracies, Claude suggested searching ‘without getting caught’, ‘avoid detection’, ‘without permission’ or ‘get away with.’
But the words that exposed the most intimate confessions were ‘my salary’, ‘my SSN’, ‘diagnosed with’, or ‘my therapist.’