How can businesses harness the potential of generative AI while maintaining security?


With the emergence of ChatGPT, which fascinates and unsettles in equal measure with its ability to create quality content in seconds, generative AI has taken center stage since the beginning of the year. Although the foundations of Large Language Model (LLM) technology have been around for a while, it reached a tipping point this year and moved beyond the simple (and ultimately not very useful) chat tools users were already familiar with. For the first time, thanks to ChatGPT, users can see how the technology is evolving, how it can facilitate their work, and how it can free them to focus on tasks that require genuine reflection.

ChatGPT may well become the iPhone of this decade and radically change the way we work and live. This is why, in the years to come, companies will have to adapt their practices and processes to the use of generative AI.

However, to make the best possible use of any emerging solution, it is essential to fully understand how it works and where its limits lie, in order to guard against any potential danger. Data privacy, in particular, is an extremely sensitive topic, so security should be a priority for any company considering the use of generative AI.

The emergence of “super employees”

With the buzz around ChatGPT and generative AI running rampant, many workers may see this technology as mature and likely to threaten jobs if companies embed it in daily workflows. While this is certainly the most advanced and impressive form of AI the general public has seen so far, it is far from a definitive version. ChatGPT has many limitations that prevent it from working without human intervention. We are not yet at the stage where we can implicitly trust anything ChatGPT produces: it can only summarize information gathered from the internet, drawing on multiple sources that may not be completely accurate, so the resulting content may be laden with errors or inaccuracies.

This is why the use of a generative AI tool always requires human intelligence at the heart of the process. An expert must review the content produced and make the necessary changes to ensure it is correct before it is released.

That said, even a user who is not fully trained in ChatGPT but is aware of its limits can find ways to exploit it to do their job better and to gain in productivity and efficiency: to become, in a way, a “super employee”!

Potential negative effects

However, if generative AI can improve employee performance, it can do the same for attackers. For example, if a hacker sends employees a phishing email impersonating a company executive and requesting a money transfer, it is unlikely to succeed: most employees are security conscious, and such emails usually contain enough warning signs to undermine their credibility. Generative AI can change the situation, because it makes it possible to personalize each email with data specific to its target, data scraped from the internet with no extra effort from the attacker.

Potential pitfalls also exist internally, as generative AI is yet another solution that must be integrated into a company’s digital infrastructure, at the risk of expanding the attack surface. Data leakage is already a real concern. Nearly all of the big names in tech are building their own OpenAI-like services right now, and companies will need to decide which ones are right for their business. The chosen solution will then require access to the intranet to fully assist employees. Supporting daily tasks with an AI tool may also require additional (IoT) devices, both in the office and for those working from home. In other words, the IT security team must monitor a large number of devices and solutions, and attackers gain a whole new avenue to exploit in order to reach a company’s data and assets.

Ensuring success

If the enterprise security architecture is sound, there is no need to fear adding new devices or solutions, regardless of vendor. By a solid architecture, I mean a Zero Trust architecture, which isolates an attacked device and prevents it from connecting to anything else. It then becomes possible to add many IoT devices without fear, whether in the office or at home, so that employees can benefit from generative AI. Even if one of these devices is compromised, whether by external actors or by an accidental security lapse caused by an employee, the Zero Trust architecture isolates it from the rest of the intranet and minimizes the attack surface.
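The isolation principle described above can be sketched as a default-deny policy check. This is a minimal illustration of the Zero Trust idea, not any specific vendor’s implementation; the device registry, segment names, and quarantine flag are all invented for the example:

```python
from dataclasses import dataclass, field

@dataclass
class Device:
    device_id: str
    compromised: bool = False
    # Segments (network zones) this device is explicitly allowed to reach.
    allowed_segments: set = field(default_factory=set)

def may_connect(device: Device, segment: str) -> bool:
    """Zero Trust check: deny by default, and isolate compromised devices."""
    if device.compromised:
        return False  # quarantined: cut off from the rest of the intranet
    return segment in device.allowed_segments  # explicit allow-list only

# Example: a home IoT device allowed to reach only the AI assistant gateway.
cam = Device("iot-cam-01", allowed_segments={"ai-gateway"})
assert may_connect(cam, "ai-gateway")
assert not may_connect(cam, "finance-db")   # never granted, so denied
cam.compromised = True
assert not may_connect(cam, "ai-gateway")   # isolated once flagged
```

The point of the sketch is the default: nothing is reachable unless explicitly granted, so a compromised device loses all access rather than keeping implicit intranet-wide connectivity.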

Generative AI is also proving to be a support for security teams. Drawing on the vast amount of contextual data available to it, AI can make human-like decisions at a speed and scale far beyond what any analyst could match, making cybersecurity teams more agile than before.
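As a rough illustration of what such contextual, machine-speed triage might look like, here is a toy routine that scores login events against a user’s usual pattern. Every field, threshold, and weight here is invented for the example; a real system would learn these from data rather than hard-code them:

```python
def risk_score(event: dict, profile: dict) -> int:
    """Score a login event against a user's historical profile (toy heuristic)."""
    score = 0
    if event["country"] not in profile["usual_countries"]:
        score += 40  # unfamiliar geography
    if event["hour"] not in profile["usual_hours"]:
        score += 20  # unusual time of day
    if event["device_id"] not in profile["known_devices"]:
        score += 30  # first time seen on this device
    return score

profile = {
    "usual_countries": {"FR"},
    "usual_hours": set(range(8, 19)),   # typical working hours
    "known_devices": {"laptop-42"},
}

# A 3 a.m. login from an unfamiliar country and device fires all three signals.
event = {"country": "RU", "hour": 3, "device_id": "unknown-99"}
assert risk_score(event, profile) == 90  # 40 + 20 + 30 -> escalate for review
```

The value for a security team is scale: a routine like this can evaluate thousands of events per second and surface only the few worth a human analyst’s attention.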

This is arguably just the beginning, and the opportunities that ChatGPT and generative AI technology offer businesses are huge. Early adopters may already be reaping the rewards. But as with any new technology, malicious actors will always seek to exploit any weakness in the solution. It is therefore essential to have a security architecture capable of adapting quickly to new tools and of guaranteeing that the digital transformation will go smoothly. The health of any business depends on it.




