Even After Being Patched, Security Flaws in ChatGPT Plugins Could Have Put Hundreds of Thousands of Users and Organizations at Risk


Mélina LOUPIA

March 14, 2024 at 4:47 p.m.


At least three security vulnerabilities discovered in extension functions used by ChatGPT open the door to unauthorized access © sf_freelance / Shutterstock.com


Security flaws in the ChatGPT ecosystem allowed attackers to access accounts on third-party websites and sensitive user data.

Salt Security researchers have discovered three types of vulnerabilities in ChatGPT plugins that can lead to data exposure and account takeover.

ChatGPT plugins, such as the 6 that Clubic recommends you use, are additional tools or extensions that can be integrated with ChatGPT to extend its functionality or improve specific aspects of the user experience. These plugins may include new natural language processing features, search capabilities, integrations with other services or platforms, text analysis tools, and more. Simply put, plugins allow users to personalize and tailor the ChatGPT experience to their specific needs.

Plugins can allow users to interact with third-party services such as GitHub, Google Drive and Salesforce.
By using these plugins, users authorize ChatGPT to transmit sensitive data to third-party services. In some cases, this means granting access to their private accounts on the platforms they interact with, thereby exposing this data to hackers.

3 vulnerabilities detected

The first vulnerability discovered by the researchers lies in ChatGPT itself and is related to OAuth authentication; it can be exploited to install malicious plugins on users' accounts.

“The attacker can write their own plugin, which tells ChatGPT to pass almost all chat data to this plugin, and then, by exploiting a vulnerability in ChatGPT, they can install this malicious plugin on a victim's account,” reads the report published by Salt Security.

As the owner of the plugin, the attacker then has control over all of the victim's data, such as private chats in which the user may have disclosed login credentials or other personal information.
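The report attributes this flaw to an installation flow that accepts an OAuth authorization code without checking that the victim actually requested it. A minimal sketch of the standard defense (not OpenAI's actual code; the handler names and session store here are hypothetical) binds each code to the session that initiated the flow via the OAuth `state` parameter:

```python
import secrets

# Simulated per-session storage (in production: a server-side session store).
sessions = {}

def start_oauth_flow(session_id):
    """Generate an unguessable state token and remember it for this session."""
    state = secrets.token_urlsafe(32)
    sessions[session_id] = state
    # The state is attached to the authorization request and must come back
    # unchanged in the callback.
    return state

def handle_callback(session_id, returned_state, auth_code):
    """Reject any authorization code whose state does not match this session.

    Without this check, an attacker can send the victim a crafted link
    carrying the *attacker's* code, silently installing the attacker's
    plugin on the victim's account -- the class of flaw described above.
    """
    expected = sessions.pop(session_id, None)
    if expected is None or not secrets.compare_digest(expected, returned_state):
        raise PermissionError("OAuth state mismatch: possible forged callback")
    return auth_code  # only now is it safe to exchange the code for tokens
```

A forged callback, where the `state` was never issued to that session, is rejected before the attacker's code can ever be exchanged.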

The second vulnerability is a zero-click account takeover affecting multiple plugins. An attacker can exploit it to take control of an organization's account on third-party websites such as GitHub.
The flaw affects AskTheCode, a plugin developed with PluginLab.AI that allows users to access their GitHub repositories.

“In our example, we will use AskTheCode – a plugin developed with PluginLab.AI that allows you to ask questions about your GitHub repositories – meaning that users who use this plugin have given it access to their GitHub repositories,” the report illustrates.

The third vulnerability is an OAuth redirection manipulation that affects several plugins. The researchers demonstrated the attack against the Charts by Kesem AI plugin. As with the first vulnerability, it can be exploited by tricking a user into clicking a malicious link specifically crafted for the attack.
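OAuth redirect manipulation generally works by substituting an attacker-controlled `redirect_uri`, so the authorization code is delivered to the attacker instead of the legitimate plugin. The usual mitigation, sketched below with a hypothetical allowlist, is exact-match validation of the redirect target:

```python
from urllib.parse import urlsplit

# Hypothetical allowlist of exact, pre-registered redirect URIs.
ALLOWED_REDIRECTS = {
    "https://app.example-plugin.com/oauth/callback",
}

def is_safe_redirect(redirect_uri):
    """Accept only HTTPS URIs that exactly match a pre-registered value.

    Prefix or substring checks are not enough: an attacker can register
    e.g. https://app.example-plugin.com.evil.net/ to defeat them.
    """
    parts = urlsplit(redirect_uri)
    if parts.scheme != "https":
        return False
    return redirect_uri in ALLOWED_REDIRECTS
```

Exact matching (rather than pattern matching) is the design choice recommended for OAuth servers precisely because look-alike domains and open-redirect tricks defeat looser comparisons.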

Users in danger © Tada Images / Shutterstock.com


Security breaches that put hundreds of thousands of AI users at risk

Security issues associated with generative artificial intelligence pose a significant threat to a wide range of users and organizations, according to Yaniv Balmas, vice president of research at Salt Security. He stresses the need for security leaders to deeply understand the risks of integrating plugins and GenAI into their infrastructure, and recommends in-depth security reviews of the code. He also highlights the need for plugin and GPT developers to better understand the GenAI ecosystem in order to reduce potential risks.

Other cybersecurity experts have weighed in on the problem, which constitutes a real threat in its current state. Sarah Jones, cyber threat research analyst at Critical Start, confirms that Salt Labs' findings reveal a broader security risk associated with GenAI plugins, highlighting the need for strict security standards and regular audits. Darren Guccione, CEO and co-founder of Keeper Security, warns of the dangers of third-party application vulnerabilities, emphasizing the importance of not sacrificing security for the benefits of AI.

The proliferation of AI-driven applications is also creating software supply chain security challenges, highlighting the urgent need to adapt security controls and data governance policies. Darren Guccione warns of the increased risk of unauthorized access to sensitive data, highlighting the potentially disastrous consequences of an account takeover attack on platforms such as GitHub.

However, the report states that the issues have since been resolved and that there was no evidence that the vulnerabilities had been exploited. Users should still update their apps as soon as possible.


Source : Dark Reading, Salt Security
