Meta launches Purple Llama for reliable and responsible AI


Image: Meta.

Last Thursday, December 7, Meta announced Purple Llama, a global project to support the development of responsible and reliable artificial intelligence.

The project aims to provide cybersecurity and input/output safeguard tools to developers building on Llama, the open source LLM the company launched last February.

Why “Purple Llama”?

Meta explains that mitigating the risks of generative AI calls for a “purple” approach: a blend of “red”, representing offense, and “blue”, representing defense.

Purple Llama will initially offer cybersecurity tools and safeguards for validating model inputs and outputs. These tools will be expanded in the future.

The project's components will be licensed and made available for both research and commercial use. Meta hopes the project will foster collaboration among developers and help standardize security tools for generative AI.

Assess risks and prevent the creation of malicious tools

In terms of cybersecurity, an industry-wide assessment of the security of large language models (LLMs) will be carried out and the results published. This assessment will include metrics to quantify the cyber risk of LLMs, tools to measure how often LLMs suggest insecure AI-generated code, and measures to make it harder for LLMs to generate malicious code.

As part of the input/output safeguards, Meta is also releasing Llama Guard, a pretrained model intended to help developers avoid generating potentially risky outputs. The model was trained on a mix of publicly available datasets and can detect potentially risky or offensive content.
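To illustrate the idea behind such input/output safeguards, here is a minimal toy sketch in Python: text is classified against a policy before it reaches the model (input check) and before it is returned to the user (output check). The categories and keyword rules are invented for illustration only; Llama Guard itself is an LLM-based classifier, not a keyword filter.

```python
# Toy input/output safeguard sketch. The policy categories and phrases
# below are hypothetical examples, not Llama Guard's actual taxonomy.
UNSAFE_PATTERNS = {
    "cybercrime": ["write malware", "steal credentials"],
    "violence": ["build a weapon"],
}

def classify(text: str) -> dict:
    """Return a safety verdict for a prompt or a model response."""
    lowered = text.lower()
    hits = [category for category, phrases in UNSAFE_PATTERNS.items()
            if any(phrase in lowered for phrase in phrases)]
    return {"safe": not hits, "categories": hits}

def guarded_generate(prompt: str, model) -> str:
    """Wrap a model call with checks on both the input and the output."""
    if not classify(prompt)["safe"]:
        return "[input blocked by safeguard]"
    response = model(prompt)
    if not classify(response)["safe"]:
        return "[output withheld by safeguard]"
    return response
```

In a real deployment the `classify` step would be a call to a safeguard model such as Llama Guard rather than a keyword match, but the wrapping pattern, screening both sides of the model call, is the same.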

Meta worked with more than 100 companies to release Llama 2, the latest version of its LLM, last July. The company intends to keep collaborating with a number of them while maintaining an open source approach.

ZDNet.com


