Hackers are bypassing ChatGPT restrictions to create malware code

Posted on February 8, 2023 at 5:58 AM

ChatGPT is the current sensation on the internet. The chatbot created by OpenAI has become one of the fastest-growing applications in history. However, its rising popularity has also attracted threat actors, who are now looking for ways to bypass the restrictions built into the artificial intelligence (AI) tool. Cybercriminals are using it to make AI-generated improvements to malware code and to the emails used to launch phishing campaigns.

Hackers bypass restrictions on ChatGPT

Check Point, one of the leading cybersecurity companies, published a blog post earlier this week saying that researchers had detected cybercriminals using the chatbot to improve the code of info-stealer malware.

The findings of this research were corroborated by Ars Technica, which reported that threat actors were using the application programming interface (API) for an OpenAI GPT-3 model known as text-davinci-003 in place of ChatGPT itself. Hackers were using the API to bypass the chatbot’s security restrictions.

“The current version of OpenAI’s API is used by external applications (for example, the integration of OpenAI’s GPT-3 model to Telegram channels) and has very few, if any anti-abuse measures in place,” the researchers at Check Point said.

The researchers further explained how threat actors use this technique to create malicious content, such as phishing emails and malware code. They noted that the hackers favor the API because it lacks the limitations and barriers that ChatGPT imposes through its user interface, which makes it easier to use the tool for malicious campaigns.

Interest in the tool within the hacking world has also increased because cybercriminals are now looking for ways to make money from it, for example by selling modified, restriction-free versions of the chatbot to other cybercriminals.

One user on an underground hacking forum is selling a service based on a modified version of the chatbot that combines the API with the Telegram messaging platform. This way, interested parties can submit any query they want to the chatbot without being blocked by the restrictions that OpenAI has put in place.

The user appears to be making money from the service: the first 20 queries are offered to other users for free, while additional usage is charged at $5.50 per 100 queries.

Besides this user, another cybercriminal has created an OpenAI API-based script that bypasses the restrictions imposed to prevent abuse of the chatbot. These activities show that while the growth of ChatGPT is a major milestone for the AI industry, hackers are also taking advantage of the chatbot’s capabilities to advance their own operations.

“We see cybercriminals continue to explore how to utilize ChatGPT for their needs of malware development and phishing email creation. As the controls ChatGPT implement improve, cybercriminals find new abusive ways to use OpenAI models – this time using their API,” the report said.

The report by Ars Technica noted that researchers had reached out to OpenAI to ask whether the company was aware of the abuse of its API by cybercriminals. However, the company has yet to respond to these findings, and it has not said how it plans to keep the chatbot safe from abuse amid its growing popularity.

OpenAI receives huge investment from Microsoft

OpenAI’s potential is captivating internet users and tech giants that do not want to be left behind in what is seen as the next big thing in the evolution of technology. In January, Microsoft announced that it would extend its partnership with OpenAI, investing a reported $10 billion in the company.

Microsoft is also at the front line of adopting ChatGPT in its products. The tech giant announced that the chatbot would be integrated into the Bing search engine and the Edge browser, noting that the integration would improve the user experience on both platforms.

Google is planning to compete with Microsoft in the AI field and has announced Bard, a chatbot meant to rival ChatGPT. However, the launch did not go smoothly, as the chatbot made an error while responding to one of the queries, raising concerns that Google might have rushed the release of its chatbot.

