The Zero Trust approach: an essential step toward using generative AI securely


It is no longer possible to ignore the impact of generative AI. Some see it as a miracle cure for the world of work, heralding a new era in which low-value writing tasks become a thing of the past.

For others, it marks the start of a new technological wave set to revolutionize every industry, from logistics to the development of life-saving drugs.

But the enthusiasm generated by these technologies also raises serious concerns, particularly around data confidentiality and security.

Earlier this year, Samsung banned the use of generative AI after confidential information was accidentally leaked into the public domain when employees used ChatGPT to assist with their work.

The Korean electronics giant is not the only one to take this route. A number of companies and even countries have banned generative AI. And it’s easy to understand why.

Security issues posed by generative AI

The use of tools such as ChatGPT and other LLMs (Large Language Models) generally opens the door to uncontrolled shadow IT, i.e. devices, software and services outside the ownership or control of the IT department. Whether it is an employee experimenting with AI or a company-led initiative, once proprietary data is exposed, there is no going back.

In a recent KPMG study of 300 business leaders, respondents anticipated that generative AI would have a colossal impact on their organizations. However, a majority said they were not ready for immediate adoption. This hesitation stems from a series of concerns, with cybersecurity (81%) and data confidentiality (78%) topping the list.

This is why a balance needs to be struck between leveraging the power of AI to accelerate innovation on the one hand and complying with data privacy regulations on the other.

To achieve this confidently, the best approach is to implement Zero Trust security controls, which allow the latest generative AI tools to be used securely without compromising the company’s intellectual property or its customers’ data.

Why a “Zero Trust” approach?

Zero Trust security is a methodology that requires strict verification of the identity of every person and device attempting to access corporate network resources. Unlike the traditional “castle-and-moat” approach, a Zero Trust architecture trusts nothing and no one by default.
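To make the principle concrete, here is a minimal sketch of the “verify every request” idea: no request is trusted because of the network it comes from; each one must present a valid identity and an approved device. All names, tokens and data structures below are hypothetical illustrations, not any particular vendor’s API.

```python
from dataclasses import dataclass

@dataclass
class Request:
    user_token: str   # e.g. a signed SSO token presented with the request
    device_id: str    # identifier reported by endpoint management
    resource: str     # what the caller wants to reach, e.g. "internal-ai-gateway"

# Hypothetical stand-ins for an identity provider and a device inventory.
VALID_TOKENS = {"token-alice": "alice"}
COMPLIANT_DEVICES = {"laptop-1234"}

def authorize(req: Request) -> bool:
    """Deny by default; allow only when both identity and device check out."""
    user = VALID_TOKENS.get(req.user_token)
    if user is None:
        return False  # unknown or expired identity
    if req.device_id not in COMPLIANT_DEVICES:
        return False  # unmanaged or non-compliant device
    print(f"{user} granted access to {req.resource}")
    return True

if __name__ == "__main__":
    print(authorize(Request("token-alice", "laptop-1234", "internal-ai-gateway")))  # True
    print(authorize(Request("token-alice", "byod-999", "internal-ai-gateway")))     # False
```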

The first step is to understand how many people are using AI services and for what purpose. Next, system administrators must be given the means to monitor and control this activity, so that it can be suspended urgently if needed. Adopting a data loss prevention (DLP) service adds a further layer of protection against unsuspecting employees sharing sensitive data with AI tools. More granular rules can even allow certain users to experiment with projects involving sensitive data, while enforcing stricter access and sharing limits for the majority of teams and collaborators, as the sketch below illustrates.
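The following sketch illustrates, under simplified assumptions, what such controls might look like at a gateway in front of an external AI service: prompts are scanned for sensitive patterns before leaving the network, most users are blocked when a match is found, and a small approved group is allowed through with an audit trail. The patterns, group names and print-based logging are hypothetical examples, not a real DLP product.

```python
import re
from dataclasses import dataclass

# Hypothetical patterns a DLP rule might flag before a prompt leaves the network.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{16}\b"),                      # card-number-like digit runs
    re.compile(r"(?i)confidential|internal only"),  # classification markings
    re.compile(r"(?i)api[_-]?key\s*[:=]"),          # credentials pasted into a prompt
]

@dataclass
class User:
    name: str
    group: str  # e.g. "ai-pilot" (approved experimenters) or "default"

def prompt_allowed(user: User, prompt: str) -> bool:
    """Return True if the prompt may be forwarded to the external AI service."""
    hits = [p.pattern for p in SENSITIVE_PATTERNS if p.search(prompt)]
    if not hits:
        return True
    # Approved pilot users may work with flagged content; everyone else is blocked.
    if user.group == "ai-pilot":
        print(f"audit: {user.name} sent flagged content ({hits})")
        return True
    print(f"blocked: {user.name} tried to send flagged content ({hits})")
    return False

if __name__ == "__main__":
    print(prompt_allowed(User("alice", "default"), "Summarize this CONFIDENTIAL report"))   # False
    print(prompt_allowed(User("bob", "ai-pilot"), "Summarize this CONFIDENTIAL report"))    # True
```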

In short, organizations that want to use AI in all its forms must strengthen their security and adopt a Zero Trust approach. For organizations that have not yet made the shift, the adoption of generative AI is therefore an opportunity to accelerate the transition to a Zero Trust security model.

However, while it is essential to highlight these security and confidentiality issues, they should not become an excuse for counterproductive sensationalism about a technology with tremendous potential.

Let’s keep in mind that every significant technological advancement, from mobile phones to cloud computing, brings with it new security threats. And the good news is that each time, the IT industry has responded proactively by strengthening security, protocols and processes. It’s the same with AI.


