Hugging Face accounts vulnerable due to exposed API tokens


Generative artificial intelligence may be generating enthusiasm, but its growth is not without risks. Lasso Security, an Israeli company specializing in security for large language models (LLMs), has just demonstrated this.

The company claims to have discovered nearly 1,700 exposed Hugging Face API tokens — the credentials used to authenticate against the online service. These tokens opened the door to software supply chain attacks against large companies such as Meta, Microsoft, Google, and VMware.

Vulnerable accounts

Lasso Security researchers explain that they were able to access the accounts of more than 700 organizations, including 655 with write permissions. In 77 cases, the researchers were even able to take full control of the repository. The company also claims to have obtained the rights needed to modify 14 datasets totaling tens of thousands of downloads per month, as well as the rights to exfiltrate more than 10,000 private models.

Hugging Face, a startup founded by French entrepreneurs and based in New York, is a sort of GitHub for artificial intelligence. This open source service, a true generative AI toolbox, hosts more than 500,000 AI models and 250,000 datasets.

Risks for generative models

By taking control of accounts behind millions of downloads, Lasso Security notes, it was possible to “manipulate existing models, potentially transforming them into malicious entities.” This represents a serious threat to the targeted organizations, as the injection of corrupted models could “affect millions of users who rely on these models for their applications.”

Hugging Face, contacted by ZDNET.fr, indicated that it had revoked all the tokens concerned. The company also implicitly underlined the responsibility of users, recalling that it advises against publishing tokens on code hosting platforms. “We also work with external platforms like GitHub to prevent valid tokens from being published in public repositories,” it adds.

On this point, Lasso Security suggests that Hugging Face, as GitHub already does, continuously scan for publicly exposed API tokens and revoke them or warn the affected users. API token leaks are an old problem, already well documented in IT security.
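The scanning Lasso Security describes typically relies on matching a provider-specific token prefix in public code. Hugging Face user access tokens commonly begin with `hf_`; the exact length and character set used below are illustrative assumptions, not the official token grammar. A minimal sketch of such a scanner:

```python
import re

# Assumption: tokens look like "hf_" followed by a long alphanumeric string.
# Real scanners pair this with entropy checks to cut false positives.
HF_TOKEN_PATTERN = re.compile(r"\bhf_[A-Za-z0-9]{30,}\b")

def find_exposed_tokens(text: str) -> list[str]:
    """Return candidate Hugging Face tokens found in a blob of text."""
    return HF_TOKEN_PATTERN.findall(text)

# Example: a token accidentally committed in a config file.
leaked = 'HUGGINGFACE_TOKEN = "hf_' + "A" * 34 + '"'
print(find_exposed_tokens(leaked))
```

A production scanner would run a pattern like this over every commit pushed to public repositories and automatically notify the token issuer, which is how GitHub's secret scanning partner program works.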
