OpenAI launches a Red Teaming Network for AI security, and you can apply


OpenAI’s ChatGPT has accumulated over 100 million users worldwide, highlighting both the positive use cases of AI and the need for greater regulation. OpenAI is therefore assembling a team of experts to help it develop safer, more robust models.

On Tuesday, OpenAI announced the launch of its OpenAI Red Teaming Network, made up of experts who can help inform its risk assessment and mitigation strategies in order to deploy more secure models.

This network will turn OpenAI’s risk assessments into a more formal process involving different stages of the model and product development cycle, an approach the company contrasts with “one-off commitments and selection processes before major model deployments.”

No prior experience with AI systems required

OpenAI is looking for experts from a wide range of backgrounds to build the team, including education, economics, law, languages, political science and psychology, to name a few.

OpenAI clarifies, however, that prior experience with AI systems or language models is not necessary.

Members will be compensated for their time and will be subject to non-disclosure agreements (NDAs). Since they will not be involved in every new model or project, being part of the Red Team (editor’s note: a team that tests for vulnerabilities, as opposed to the Blue Team, the defenders in the world of cybersecurity) may represent a commitment of as little as… five hours per year. You can apply to join the network on the OpenAI website.

“A unique opportunity to shape the development of AI technologies and policies”

In addition to OpenAI’s red teaming campaigns, experts can engage with each other on “red teaming practices and outcomes,” according to the blog post. “This network provides a unique opportunity to shape the development of safer AI technologies and policies, as well as the impact that AI can have on the way we live, work and interact,” OpenAI says.

Red teaming is an essential process for testing the effectiveness of new technologies and ensuring their security. Other tech giants, such as Google and Microsoft, have similar processes dedicated to their AI models.


Source: “ZDNet.com”


