Many employees reveal confidential information about their company and their work to AI chatbots. These conversations are stored and may resurface in future exchanges.
An AI is not bound by professional secrecy. Far too many people have placed their trust in ChatGPT, forgetting that the tool also stores data. A study by cybersecurity company Cyberhaven, published in February and spotted by DarkReading on March 7, 2023, reveals that tens of thousands of employees have submitted company data to OpenAI's chatbot. Across the 1.6 million workstations monitored by the cybersecurity firm, 2.6% of users went so far as to reveal confidential information to the AI.
In the three months since ChatGPT's release, this risk has become significant enough that several multinationals, such as JPMorgan and Verizon, have banned the tool from their offices.
Internal discussions at Amazon, disclosed to the American outlet Insider, reveal that the legal department has taken up the matter. A lawyer for the group told employees that she had seen texts generated by ChatGPT that closely resembled internal company data. Amazon employees must now avoid feeding content to the language model.
Microsoft, an investor in and partner of OpenAI (to the point of integrating the tool into its Bing search engine), allows its employees to converse with the chatbot, on the condition that they likewise do not share sensitive company information.
Private conversations recycled by ChatGPT
Concretely, what is the danger? One executive, for example, copied and pasted his group's 2023 strategy document into the chatbot, asking it to produce a PowerPoint presentation. Yet by design, an artificial intelligence learns from the data it receives in order to improve itself. A discussion that is supposed to remain professionally confidential could very well be recorded, and fragments of it could resurface during an exchange with another user.
Worse, an attack on the servers would make it possible to recover discussions directly. Note also that these conversations are not encrypted. In an article published in June 2021, a dozen researchers from several companies and universities (including Apple, Google, Harvard University and Stanford University) showed that an attack on GPT-2 — an earlier version of the language model — successfully retrieved memorized text sequences from its training data. Better, then, to discuss your professional matters with your colleagues or your partner instead.