ChatGPT Aiding Data Theft Through Developing Malicious Code

Posted on January 14, 2023 at 2:57 PM


It is in the nature of new technologies to produce unforeseen consequences. Such is the case with ChatGPT (a chatbot built on OpenAI's Generative Pre-trained Transformer models). The program has gained popularity for its realistic, sometimes disturbingly accurate chat interactions, answering questions much like a human being.

However, the latest development in ChatGPT's story comes from malicious hackers. Bad actors can quite easily use ChatGPT to help develop malicious code, which can then be used to compromise your devices.

New Tools; New Problems

As it stands, ChatGPT is free for all to use. This is part of a feedback-gathering exercise, with a paid subscription to the software planned for some point in the future. The software was developed by OpenAI, which is heavily backed by Microsoft. Now, however, a fresh tool has been unleashed for all parties, both benevolent and otherwise.

Check Point Research (CPR), a cybersecurity company, recently reported an increase in attempts by Russian cybercriminals to bypass the restrictions imposed by OpenAI. By doing so, these malicious actors hope to put ChatGPT to use for malicious purposes.

Helping Those They Never Wanted To

CPR brought to light discussions on underground hacking forums in which hackers debate how to bypass the payment card, IP address, and phone number controls. These are all measures that gate access to ChatGPT from within Russia.

CPR highlighted that a large number of hackers are taking a newfound interest in ChatGPT because of its ability to scale up their malicious activity.

Sergey Shykevich, Check Point Research's Threat Intelligence Group Manager, highlighted the increase in Russia-based hackers discussing ChatGPT. He stated that they were trying to circumvent OpenAI's geofencing in order to use the software for malicious ends. Shykevich explained that, in CPR's view, the most probable explanation is that these actors are seeking to incorporate ChatGPT into their day-to-day criminal operations.

The Wheels of Progress Keep Turning

It's a bit of a double-edged sword. On one hand, ChatGPT allows an aspiring software developer to work more cost-efficiently, helping them scale up their programming skills and generally serving as a powerful assistant. Sadly, the same can be said for malicious actors, who can use ChatGPT to develop malicious code.

One such example comes from a popular underground hacking forum. On the 29th of December, 2022, a new thread appeared with the title "ChatGPT – Benefits of Malware". In it, the poster revealed that he was experimenting with ChatGPT, using it to recreate malicious techniques and code described in various write-ups and publications about the common malware of today.

Another key example occurred earlier, on the 21st of December 2022. A malicious actor posted a Python script on underground forums, noting that it was the first script he had ever written. A fellow cybercriminal noticed something interesting: the script strongly resembled code that OpenAI produces.

Responding, the original poster revealed that OpenAI had given him a "helping hand" with the malicious script. The implications highlight a new breed of so-called "script kiddies" or "skids": individuals with little to no technical skill who are capable of hacking thanks to premade scripts they find.

Disinformation Also a Threat

Another troubling problem is that ChatGPT is capable of spreading fake news and misinformation just as well as any human.
The good news is that OpenAI is aware of these problems and working on them. The firm has entered into collaborations with the Center for Security and Emerging Technology at Georgetown University and with the Stanford Internet Observatory. Both of these partnerships come with the express goal of investigating how large language models could be misused for disinformation.

Every time a new technology is developed, there are those who would see it used for malicious purposes. Sadly, the constant improvement of generative language models is no exception to the rule. While technologies like this could greatly improve a number of fields, they could also empower the criminal elements of the world, particularly through recurring online influence operations and other efforts to sway the opinions of a targeted group. With the rampant pace of artificial intelligence development, it's something worth being cautious about, no doubt.
