Cybersecurity: how LinkedIn reduced its reaction time to attacks


Protecting against phishing, malware and other cyber threats is a difficult challenge for any organization. But when that organization has more than 20,000 employees and runs a service used by almost a billion people, the challenge is even greater.

And that is precisely the challenge facing LinkedIn. The largest professional network in the world has more than 875 million members, ranging from juniors to senior executives, who use the network to connect with colleagues and peers, to exchange ideas, or to find a new job or a new hire.

With hundreds of millions of users, LinkedIn needs to keep its systems secure against ever-evolving cyber threats, a task that falls to its threat detection and incident response team.

“We must always be ready”

Jeff Bollinger, Director of Detection and Incident Response Engineering, heads up this team and has no illusions about the scale of the challenge cyber threats pose to the business.

Highly sophisticated hacker groups have high-profile companies like LinkedIn in their sights, and they have many ways in, from tricking users into clicking phishing links to installing malware via social engineering.

“We always have to be ready – whether it’s an opportunistic attacker or a dedicated attacker, we have to have our sensors and our signal collection in place to deal with it, no matter who it is,” emphasizes Jeff Bollinger.

A six-month project to improve incident response

Establishing a more mature security posture was no small feat, and Jeff Bollinger describes the project as “a kind of moonshot”, which is why the program was called “Moonbase”. Moonbase set out to improve threat detection and incident response while also improving the quality of life of LinkedIn’s security analysts and engineers through automation, reducing the need to manually review server files and logs.

It was with this goal in mind that, over six months between March and September 2022, LinkedIn rebuilt its threat detection and monitoring capabilities, along with its security operations center (SOC). The process began with a reassessment of how potential threats are detected and analyzed in the first place.

“Every good team and every good program starts with an appropriate threat model. We need to understand what the real threats to our business are,” explains the director.

Examination of models and real incident data

This awareness begins with an analysis of the data that most urgently needs protecting, such as intellectual property, customer information and data regulated by laws or standards, followed by a reflection on the potential risks to that data.

For LinkedIn and Jeff Bollinger, a threat is “anything that undermines or interferes with the confidentiality, integrity and availability of a system or data”.

Examining actual incident patterns and data provides insight into what cyberattacks look like, what is considered malicious activity, and what kind of unusual behavior should trigger alerts. But relying solely on people to do this work is a time-consuming challenge.

The virtues of automation

By building automation into this analysis process, Moonbase steered the SOC towards a new model: a software-defined, cloud-oriented security operation. The goal of the software-defined SOC is to leave much of the initial threat detection to automation, which flags potential threats for human investigators.
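To make the idea concrete, here is a minimal sketch of what “detection as software” can look like: detection logic expressed as code-level rules that scan an event stream and raise alerts for human review. The rule names and event fields are hypothetical illustrations, not LinkedIn’s actual schema.

```python
# Minimal sketch of a "software-defined" detection pipeline: rules
# defined in code scan events, and only matches reach an analyst.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Alert:
    rule: str
    event: dict

# A detection rule is just a named predicate over an event.
@dataclass
class Rule:
    name: str
    matches: Callable[[dict], bool]

RULES = [
    Rule("suspicious-login-country",
         lambda e: e.get("type") == "login" and e.get("country") not in {"US", "IE"}),
    Rule("mass-file-download",
         lambda e: e.get("type") == "download" and e.get("file_count", 0) > 500),
]

def triage(events: list[dict]) -> list[Alert]:
    """Run every rule over every event; matches become alerts."""
    return [Alert(r.name, e) for e in events for r in RULES if r.matches(e)]

events = [
    {"type": "login", "user": "alice", "country": "US"},
    {"type": "login", "user": "bob", "country": "KP"},
    {"type": "download", "user": "carol", "file_count": 900},
]
for alert in triage(events):
    print(alert.rule, alert.event["user"])
```

The benefit of this shape is that rules are versioned, reviewed and deployed like any other software, rather than living in an analyst’s head.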

But that doesn’t mean humans aren’t involved in the detection process at all. While many cyberattacks are based on common and proven techniques, which hackers rely on throughout the attack chain, the evolving nature of cyberthreats means that there are always new, unknown threats deployed to attempt to penetrate the network – and it is vital that this activity can also be detected.

“As for what we don’t know, we just have to look for strange signals in our threat hunt. And that’s really the way to do it – by spending time looking for unusual signals,” describes Jeff Bollinger.
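One common way to operationalize “looking for unusual signals” (an assumption here, not LinkedIn’s disclosed method) is to baseline normal behavior and flag statistical outliers, for example a per-user daily event count far above its historical mean:

```python
# Hunting "strange signals": flag today's count if it sits more than
# `threshold` standard deviations above the historical mean.
from statistics import mean, stdev

def unusual(history: list[int], today: int, threshold: float = 3.0) -> bool:
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today != mu
    return (today - mu) / sigma > threshold

# 30 days of roughly stable activity, then a sudden spike.
baseline = [40, 42, 38, 41, 39, 43, 40, 37, 44, 41] * 3
print(unusual(baseline, 41))   # ordinary day
print(unusual(baseline, 400))  # 10x spike
```

Real threat hunting layers many such signals together, but the principle is the same: model the ordinary so the strange stands out.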

Filtering out the legitimate and the dangerous

However, cyber attackers often use legitimate tools and services to carry out malicious activities. So while it might be possible to detect whether malware has been installed on the system, finding malicious behavior that might also be legitimate user behavior is a challenge. And that’s where LinkedIn focused.

“Normal, legitimate admin activity often looks exactly like hacking because attackers aim for the highest level of privilege – they want to be domain admins or they want to gain root access, so they can do whatever they want,” he explains.

However, by using the SOC to analyze the unusual behavior surfaced by automation, it is possible to either confirm that it is legitimate activity or catch potentially malicious activity before it becomes a problem.
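A simple sketch of this filtering step: privileged actions from accounts with no history of admin work get routed to the SOC for confirmation, while the same actions from known administrators are presumed routine. The allow-list and event fields below are illustrative assumptions.

```python
# Filtering "admin-looking" activity: privileged actions by accounts
# outside the known-admin baseline are suspicious until confirmed.
KNOWN_ADMINS = {"ops-alice", "ops-bob"}
PRIVILEGED_ACTIONS = {"sudo", "add-domain-admin", "modify-acl"}

def needs_review(event: dict) -> bool:
    """Route privileged actions by non-admin accounts to the SOC."""
    return (event["action"] in PRIVILEGED_ACTIONS
            and event["user"] not in KNOWN_ADMINS)

print(needs_review({"user": "ops-alice", "action": "sudo"}))        # routine
print(needs_review({"user": "intern-dan", "action": "modify-acl"})) # review
```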

From detection to the fight against threats

Nor does the SOC require information security personnel to methodically monitor what every user in the enterprise is doing; individual accounts are only examined when strange or potentially malicious behavior is detected.

This strategy allows the threat hunting team to use its time to quickly examine more data in more detail and, if necessary, take action against real threats, rather than having to manually review each alert, especially when many of those alerts are false positives.

“I think it gives us a lot more power to work on these issues,” said Bollinger.

But detecting threats is only part of the battle. When a threat is detected, LinkedIn must then act quickly and smoothly to avoid disruption and prevent a large-scale incident.

Reduce detection time

This is where the incident response team comes in: it hunts down and triages threats based on what the threat hunting team has detailed.

“We give our teams as much context and data as possible upfront, so they can minimize the time spent collecting data, digging, looking for things, and can maximize their time using the thinking skills of the human brain to understand what is really going on,” says the director.

How incident response works hasn’t changed drastically, but the way it’s approached, with the added context of data and analytics, has been overhauled. And this change has helped LinkedIn become much more effective at detecting and protecting against threats. According to Jeff Bollinger, investigations are now much faster, from the detection of threats to their remediation.

“Detection time is the time between when activity occurs and when you first see it – and the speeding up of that time has been dramatic for us. We went from several days to a few minutes,” he explains. “We have significantly reduced our detection time. Once we lower the detection time threshold, we also have more time to contain the incident itself.”
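The metric Bollinger describes is straightforward to compute: for each incident, time-to-detect is the gap between when the activity occurred and when the team first saw it. A sketch, with invented example timestamps:

```python
# Time-to-detect: gap between activity occurring and first observation,
# computed per incident and averaged. Timestamps are invented examples.
from datetime import datetime
from statistics import mean

incidents = [
    # (activity occurred, first seen)
    (datetime(2022, 9, 1, 10, 0),  datetime(2022, 9, 1, 10, 4)),
    (datetime(2022, 9, 2, 14, 30), datetime(2022, 9, 2, 14, 37)),
    (datetime(2022, 9, 3, 8, 15),  datetime(2022, 9, 3, 8, 18)),
]

def detect_minutes(occurred: datetime, seen: datetime) -> float:
    return (seen - occurred).total_seconds() / 60

times = [detect_minutes(o, s) for o, s in incidents]
print(f"mean time to detect: {mean(times):.1f} min")
```

Tracking this number over time is what lets a team claim, as LinkedIn does, a drop from days to minutes.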

And the processes?

“Now that we are faster and see things better, attackers have fewer opportunities to cause damage. The sooner we detect an incident, the sooner we can stop it, and that shrinks the window an attacker has to cause damage,” he adds.

Keeping the business secure was a major part of overhauling LinkedIn’s threat detection capabilities, but there was another key part of the job: designing the process so that it is useful and effective for SOC personnel, helping them avoid the stress and burnout that can accompany cybersecurity work, especially when responding to live incidents.

“One of the key elements of this project is the preservation of our human capital. We want the staff to have a satisfying job, but we also want them to be efficient and not burn out,” he says.

Improve the quality of the work of cyber teams

The approach is also designed to encourage collaboration between detection engineers and incident responders, who, although split into two different teams, are ultimately working towards the same goal.

This common approach has also extended to LinkedIn employees, who are now part of the threat identification and neutralization process.

Users are notified of potentially suspicious activity on their accounts, with additional context and an explanation of why the threat hunting team believes something is suspicious, along with a request for the user to say whether they consider the activity suspicious themselves.

Depending on the response and context, a workflow is triggered, which can lead to investigation of the potential incident and to remediation. “Instead of making people work harder, we make them work smarter – that’s really one of the big things for us in all of this,” Bollinger argues.
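The user-in-the-loop workflow described above can be sketched as a simple branch on the user’s reply: the user sees the activity and the reason it was flagged, and their answer decides whether an investigation fires. All function and field names here are hypothetical, not LinkedIn’s internal API.

```python
# Sketch of the user-confirmation workflow: the user's reply to a
# suspicious-activity notice decides whether to escalate or close.
def notify_user(activity: dict) -> str:
    """In production this would message the user; here we simulate a reply."""
    print(f"Suspicious: {activity['summary']} (reason: {activity['reason']})")
    return activity["simulated_reply"]  # "mine" or "not-mine"

def handle(activity: dict) -> str:
    reply = notify_user(activity)
    if reply == "not-mine":
        return "open-investigation"  # escalate to incident response
    return "close-benign"            # user confirmed it, auto-close

case = {"summary": "login from new device",
        "reason": "first sign-in from this country",
        "simulated_reply": "not-mine"}
print(handle(case))
```

Routing the cheap question to the user first means the incident response team only spends time on activity that neither automation nor the account owner can vouch for.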

Source: ZDNet.com




