Posted on April 9, 2023 at 8:27 PM
OpenAI launched a bug bounty program that offers up to $20k for security flaws in ChatGPT
ChatGPT, a new chatbot created by OpenAI, has impressed the world with its advanced capabilities. Recent reports have suggested that it can pass law school exams with scores in the top 10%. However, OpenAI wishes to eliminate any potential security vulnerabilities, so it launched a new bug bounty program on April 11th.
OpenAI launches a bug bounty program
Under the program, OpenAI is calling on white hat hackers to probe its products and try to find flaws. In return, the company will pay up to $20,000, depending on the severity of the issue.
Apart from its commitment to securing artificial intelligence, the company also wishes to reassure security experts that it is doing everything in its power to act responsibly with its products. It has been under scrutiny from security experts ever since it launched its ChatGPT prototype back in November 2022.
Mike Thompson of Zen Internet commented on the move, stating that it is important that OpenAI runs a bug bounty program as a matter of priority. “As the technology is from November 2022, the insane giddiness that has ensued has completely overshadowed the potential risk,” he added.
In its announcement, the company admitted that, despite its investment in research and engineering meant to keep AI systems secure, vulnerabilities and flaws might still emerge. No code is perfect, and OpenAI is aware of that. It believes that collaboration and transparency are crucial to addressing this reality, which is why it is inviting the global community of ethical hackers, security researchers, and tech enthusiasts in general to help identify any flaws the system might contain.
Before this, OpenAI published an announcement on March 23rd stating that it had fixed a flaw found in ChatGPT. The flaw allowed users to see the titles of other users’ chats, and it was active during a nine-hour window on March 20th. This led to some privacy concerns about ChatGPT and the open-source library it relies on.
Lionfish Tech’s cybersecurity advisor, Zaira Pirzada, said that this will not be the last flaw to be found, nor the last that will ever exist. This is why bug bounty programs exist, and they have proven to be one of the most effective steps firms can take to improve the security posture of their products. She also noted that OpenAI’s CEO, Sam Altman, likely understands that the public is a necessary part of testing, not just of consuming.
A bug bounty program is a big step in the right direction
To manage the submission and reward process, OpenAI has teamed up with Bugcrowd. Bugcrowd’s founder and acting CTO, Casey Ellis, said that actively soliciting feedback from the hacking community on the product’s security is huge for OpenAI, adding that it will also continue to validate hackers as the internet’s immune system.
Furthermore, the company’s willingness to be transparent will go a long way toward building trust with users in this relatively new and rapidly developing market. These thoughts were echoed by Nikki Webb, Custodian360’s global channel manager, who said that the collaborative approach of bug bounties allows for continuous improvement, protects user data, and bolsters overall security in the digital landscape.
As mentioned, OpenAI offers up to $20,000 for the most exceptional discoveries, while minor vulnerabilities will receive lower rewards. At the time of writing, hackers have already found more than 10 flaws that were deemed legitimate and rewarded. However, as part of the program, the hackers are not allowed to release the details of the flaws, at least not until OpenAI’s team has fixed them.
Ethical hackers were instructed to explore OpenAI’s APIs, ChatGPT, API keys, third-party corporate targets related to OpenAI, the OpenAI.com website, and the OpenAI research organization. As Jake Moore from ESET noted, a single bug bounty program will certainly not address all possible attack vectors, but it will act as another tool in the toolkit for threat prevention.