AI is the future of cybersecurity. Here’s how to adopt it safely


AI helps developers code faster and be more productive. However, some managers worry that it will introduce new problems in terms of security and risk management.

Fortunately, the cybersecurity industry is here to help anticipate and manage the risks inherent in emerging technologies. For their part, developers must also help secure the software ecosystem by using all the tools at their disposal.

AI makes the promise of “shift left” a reality

AI provides context around potential vulnerabilities and suggests secure code up front (although AI-produced code must still be systematically tested). These capabilities allow developers to write safer code in real time, finally delivering on the true promise of “shift left.”

With AI, security is truly built in rather than bolted on, right where your developers turn their ideas into code, with the help of an AI-assisted pair programmer.
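To make this concrete, here is a minimal, hypothetical sketch of the kind of in-line fix an AI pair programmer might surface as a developer types: a SQL query built by string concatenation (vulnerable to injection) replaced with a parameterized query. The function names, table, and columns are illustrative only and are not taken from any particular tool.

```python
import sqlite3

# Vulnerable pattern a developer might start typing: building SQL by
# string concatenation, which allows SQL injection through `username`.
def find_user_unsafe(conn: sqlite3.Connection, username: str):
    query = "SELECT id, email FROM users WHERE username = '" + username + "'"
    return conn.execute(query).fetchall()

# The kind of secure alternative an AI assistant could suggest as you type:
# a parameterized query, so user input is never interpreted as SQL.
def find_user_safe(conn: sqlite3.Connection, username: str):
    query = "SELECT id, email FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()
```

The point of the sketch is that the safer pattern appears at the moment the code is written, rather than weeks later in a scan report.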

This exciting new era will see generative AI placed at the forefront of cyber defense. However, just as AI will not replace developers, it will not replace the need for security teams. We are not yet at level 5 of autonomous driving: we need to keep our hands on the wheel and keep working with our existing security controls, not abandon them.

Green-lighting AI within your organization

Although some companies are already seeing significant productivity gains from AI, many executives remain concerned about security risks. They want to establish the right standards around AI tools and wonder how to ensure good security outcomes while allowing software creators to be as efficient as AI makes possible.

At GitHub, we have a great deal of hands-on experience with AI. That is why I want to share some best practices for organizations considering adopting a generative AI tool. These strategies will help distinguish the organizations that thrive from those that fail to protect their most valuable assets.

Treat AI tools like any other tool

To evaluate an AI tool, start by applying the same security and risk frameworks you would use for any other tool you’re considering integrating into your stack, and adapt them over time. Request data flow diagrams, external test reports and any other useful information about the security and maturity of the tool.

At GitHub, we have processes in place that help us identify and manage risks associated with any new tools acquired from an external vendor. These new tools or services are carefully reviewed by our procurement, legal, privacy and security teams.

Understand data usage and retention

The key thing to watch out for is how your data, or that of your customers, is managed. Imagine the security issues that could arise if a third-party provider stores and uses your company’s or customers’ sensitive information.

You need to know how the AI tool handles your data: where it goes, how it’s shared, and whether it’s retained. Check whether the vendor uses customer data to train its AI models, and understand what options exist to opt in or out of that use based on your needs.

Check the clauses relating to intellectual property

AI tools raise new questions around intellectual property, and the legal landscape in this area is still evolving.

To understand what protections may be offered, it is essential to examine the intellectual property provisions of the new tool and review the terms of the license agreement. For example, Microsoft and Google both offer their customers intellectual property indemnification when using their generative AI tools.

Additionally, remember that any time developers use code they did not create, they run this same risk, for example when they copy code from an online source or reuse code from a library. This is why responsible organizations and developers rely on code review policies and other review practices.

Track tool history

Knowing the tool’s background helps ensure that the AI product is reliable, effective, and aligned with your business goals.

How has the tool performed in the past, and how accurate has it been? Look for successful use cases that demonstrate its effectiveness.

What type of dataset was used to train it? Make sure that dataset is relevant to your projects. Other things to consider include bias mitigation, user feedback, and customization options.

Check tool audits

Third-party testing and auditing are invaluable for assessing the effectiveness and security of a technology. Ask whether the AI tool has been tested by a third party. It may not yet be certified because it is so new, but are the company’s other products? Is there a compliance plan for the AI product? Choosing a tool that has undergone rigorous testing and auditing will strengthen your organization’s security posture.

When it comes to AI, you don’t have to be “the department that says no.” By adopting the best practices mentioned above, you will establish safeguards and rules of engagement that lead to secure results.

AI won’t replace developers or your security teams, but it will significantly improve their work. There’s no better way to practice the shift-left approach than to have an AI-assisted pair programmer in your developers’ own IDE, helping them write safer, more secure code in real time. This will help you achieve better security outcomes, faster. I have no hesitation in asserting that this approach will radically transform the next decade of security.
