OpenAI opens its bug bounty program


ChatGPT is a hugely popular demonstration of the wonders of generative AI, but the app still has a lot to learn and plenty of hurdles to overcome, especially when it comes to privacy and security vulnerabilities. To help mitigate these issues, OpenAI has launched its own bug bounty program.

In partnership with Bugcrowd, OpenAI is asking ethical hackers to find vulnerabilities in its software and report them to the company. It is also inviting these security researchers to test ChatGPT plug-ins for authentication, authorization, and security problems.

OpenAI’s bug bounty program also asks ethical hackers to determine whether sensitive OpenAI information could be exposed to third parties, including Notion, Asana, Salesforce and many others. Researchers should take care to report only issues within the program’s scope, as anything outside it will not be eligible for a reward.

Be careful not to go out of scope

Security issues that fall outside the scope include jailbreaking, circumventing the chatbot’s safety features, tricking the chatbot into pretending to run code (OpenAI says that if ChatGPT appears to be running code, it’s hallucinating), and tricking the chatbot into saying harmful things to you.

If an ethical hacker finds a problem within scope, they can earn up to $20,000. OpenAI doesn’t specify exactly which vulnerabilities are worth the full $20,000, but the top payout applies to multiple categories.

If a researcher discovers a particularly notable vulnerability in the API targets, in ChatGPT logins and subscriptions, or in the OpenAI research organization’s website and services, they can earn the $20,000 reward.

Low-priority vulnerabilities can fetch between $200 and $600, medium-priority ones between $600 and $1,250, and high-priority ones between $1,000 and $3,500, depending on the category the vulnerability falls into.

Ethical hackers who have made discoveries can submit them to OpenAI’s Bugcrowd page.

Source: ZDNet.com