Posted on June 24, 2023 at 8:57 AM
Data Belonging To Over 1,000,000 ChatGPT Users Leaked Online
Data belonging to more than one million ChatGPT users has been leaked online, with the majority belonging to users in India and Pakistan. A report published by Group-IB noted that the stolen details were traded on the dark web between June 2022 and May 2023.
Data belonging to over 1M ChatGPT users leaked online
Group-IB, a cybersecurity company headquartered in Singapore, has detected more than one million stealer-infected devices with saved ChatGPT credentials. The stolen data is now being sold on the dark web.
Just last month, the company detected over 26,800 compromised ChatGPT credentials, the highest monthly figure since Group-IB began tracking the data. Group-IB's Threat Intelligence unit also reported that the most heavily targeted countries include Brazil, India, and Pakistan.
In Brazil, 6,531 ChatGPT users were affected, while in Pakistan, 9,217 users were affected. In India, the most heavily hit country, 12,632 users were affected. The report also said that the majority of the stolen credentials were traded in the Asia-Pacific region.
The researchers also attributed the bulk of the stolen ChatGPT credentials to the Raccoon infostealer. Raccoon is sold as malware-as-a-service and has been used in a wide range of hacking campaigns.
The malware harvests saved credentials and other information from an infected computer. Victims typically infect their own devices by downloading malicious software that is disguised as an ordinary file or application.
The lure that hackers use is usually an imitation of one of the most popular applications on the market. Raccoon has become notorious for this approach and has been behind a wide range of attacks.
Raccoon is also easy to obtain and is generally regarded as dependable, which has earned its operators a healthy subscriber base. That broad availability and reach make it an attractive option for threat actors.
Dmitry Shestakov, Head of Threat Intelligence at Group-IB, commented on the development, noting that many enterprises have integrated the ChatGPT chatbot into their operational flow.
Shestakov further said that because the chatbot retains all of a user's conversations, a compromised account can expose a wide range of sensitive information to threat actors who obtain the account credentials.
“Employees enter classified correspondence or use the bot to optimize proprietary code. Given that ChatGPT’s standard configuration retains all conversations, this could inadvertently offer a trove of sensitive intelligence to threat actors if they obtain account credentials,” Shestakov said.
Threat actors who obtain a victim's ChatGPT credentials can also pivot to the victim's other accounts. However, this only works if the user reuses the same password across multiple platforms.
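Password reuse is what turns one leaked credential into many. As a defensive aside, a user can check whether a password has already appeared in known breaches without ever sending the password itself, using the k-anonymity range endpoint of the Have I Been Pwned Pwned Passwords API. This is a minimal sketch, not from the Group-IB report; the API endpoint is real, but the helper names are my own:

```python
import hashlib
import urllib.request

def sha1_prefix_suffix(password: str) -> tuple[str, str]:
    # SHA-1 hash of the password, split for the k-anonymity scheme:
    # only the first 5 hex characters ever leave the machine.
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def breach_count(password: str) -> int:
    # Query the Pwned Passwords range endpoint with the 5-char prefix;
    # the response lists SUFFIX:COUNT pairs sharing that prefix.
    prefix, suffix = sha1_prefix_suffix(password)
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0  # not found in known breach corpora

if __name__ == "__main__":
    # A notoriously reused password; expect a very large count.
    print(breach_count("password"))
```

Because the server only ever sees a 5-character hash prefix shared by hundreds of passwords, the check reveals nothing usable about the password being tested.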
Additionally, a hacker might pay for ChatGPT Plus, the chatbot's premium plan, on behalf of a target in order to lure them further into the hacking campaign.
ChatGPT exploit poses a major threat
Some experts have raised concerns about the security vulnerabilities that come with holding a ChatGPT account. Such accounts can be compromised by threat actors, and a successful exploit can cause the victim significant losses.
Many tech companies, such as Google, have issued alerts warning of the dangers that come with a ChatGPT account. Data gathered from such accounts can be used to train the AI language model, which compromises the privacy of the account holders.
The ChatGPT AI chatbot also saves a user's chat history by default. OpenAI, the chatbot's creator, introduced a setting several months ago that allows users to disable this behavior.
Many companies have also been forced to warn their employees about the chatbot. The trend suggests that similar attacks may have happened before, may be more widespread than reported, and continue to pose a danger to many users.