How to prioritize open source vulnerabilities

by Martin Hell
2021-01-05

Each year, several thousand new vulnerabilities are disclosed. The CVE database alone enumerated more than 17,000 new vulnerabilities during 2019. If you use third-party dependencies, you need to keep track of which dependencies you use and which vulnerabilities affect them.

Debricked provides a SaaS tool that can integrate with your development and build pipeline, allowing you to identify all vulnerabilities that you might be affected by. This post will discuss what to do when you have identified new vulnerabilities.

Vulnerability prioritization: first things first

With so many new vulnerabilities, you will probably end up with a long list of potential problems to respond to, at least the first time you scan your project or repository. This list needs to be prioritized, and you need to decide which fires to extinguish first. An efficient prioritization process is essential, since it allows you to prioritize both correctly and in a timely manner.

Such prioritization of vulnerabilities is known as triage. The word is often used in medical emergencies, where it denotes the process of dividing patients into groups according to the urgency of their treatment. For vulnerabilities, triage is the process of sorting out which vulnerabilities need to be addressed immediately and which can wait. This can be a cumbersome process (as detailed later in this post), but in many cases it can be made much simpler.

Companies with continuous releases, developing e.g., web-based applications, can bypass many of the steps in the triage process. If there is a patch, or new version of the dependency, available, you do not really need to care about every triaging step. As long as you know that updating will not break functionality, then just go ahead and update. Debricked will even help you do this through the suggested fix functionality. After testing, the new dependency can go immediately to the production environment.

Why prioritize open source vulnerabilities?

In other cases, when you are not in control of the update process yourself, triaging can be of significant importance. For many organizations, new firmware patches are rolled out on a predetermined schedule, say once per month, once every three months, or even less often. An out-of-cycle patch disrupts the normal workflow, which can delay development of other features.

The situation is similar if you develop COTS software and release new versions on a regular basis. New versions require customers to download the software and update to the new version. It is then important to understand which vulnerabilities need to be addressed immediately and which can wait until the next planned release. Customers should not be bothered with frequent notifications to download and update to the latest release. 

For many customers, such updates can also be very time consuming if it involves updating software for production databases, internet facing applications, hard-to-reach IoT devices, or systems with very high availability demands.

Thus, it is clear that in many situations triaging is needed in order to avoid unnecessary costs. These costs apply to both the development organization and the customer. In the remainder of this post, we take a closer look at the triage process.

Summary of actions

In brief, triage can be divided into four steps. It is not necessary to fully finish each step before going to the next; an iterative process can also be useful, since it builds up the understanding of the vulnerability incrementally. The steps are the following.

  1. Understand your application and the role of the vulnerable component
  2. Understand the severity of the vulnerability
  3. Understand the current status of the vulnerability in terms of exploitation and remediation
  4. Combine the information from steps 1-3

In the following, we take a closer look at each of these steps.

Step 1: Understand your application and the role of the vulnerable component

The main role of security is to protect humans and other assets. If there is nothing to protect, there is also no need for security. Thus, the first step should include knowing which assets you are protecting and what harm you want to protect them from. This can be done as part of threat modeling, which is a process of its own. You can read more about threats in our previous post “What is a security threat?”. If you do not want to go through the whole process of threat modeling, you should still think about what can happen in terms of the CIA triad.

  • Confidentiality. What confidential information are you protecting, and how do you protect it?
  • Integrity. What information should not be modified or deleted?
  • Availability. To what extent is your application sensitive to downtime?
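As a sketch, the answers to these questions can be recorded alongside the application so they are at hand when a vulnerability appears. The class, field names, and levels below are illustrative choices of our own, not part of any standard:

```python
from dataclasses import dataclass, field

@dataclass
class SecurityProfile:
    """Illustrative pre-triage record of an application's CIA requirements."""
    app_name: str
    confidentiality: str            # sensitivity of the data you protect
    integrity: str                  # how critical it is that data is not modified or deleted
    availability: str               # how sensitive the application is to downtime
    regulated_data: list[str] = field(default_factory=list)  # e.g. personal or credit card data

# Hypothetical application: a service handling credit card data.
profile = SecurityProfile(
    app_name="payment-gateway",
    confidentiality="high",
    integrity="high",
    availability="medium",
    regulated_data=["credit card data"],
)
```

Because this profile does not depend on any particular vulnerability, it can be prepared in advance and reused across triage rounds.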

If the application handles, e.g., personal data or credit card data, you must understand how this is done and which regulations apply to handling such data. This analysis can be done prior to the actual prioritization, since it does not depend on any particular vulnerability, and it will save time when one is actually discovered.

A vulnerability is a specific weakness in a component that can be used to compromise confidentiality, integrity, and/or availability. For each identified vulnerability, Debricked’s tool will also provide you with a list of exactly which dependency is affected. This can be a direct dependency, but it can also be a transitive dependency, i.e., a dependency of a dependency (in one or more levels). 

For the vulnerable component, it is important to get an understanding of how this software is used by the application. What does it do? What functionality is used? How is it used? If it communicates with a client/server, how, when and with whom does it communicate? This will make it easier to assess to what extent the vulnerability applies to your particular environment.

If the component is a library for parsing data, who controls the information sent to the library? Can input be crafted or controlled by an untrusted party? What is required for an untrusted outsider to provide input to the library? Typical examples here would be JSON or XML parsers, which need to handle rich information. If an untrusted user can craft the input files, then this is a potential attack vector.

The answers to these questions will be useful when you move on to the second step, in which you will look more closely at vulnerability data.

Step 2: Understand the severity of the vulnerability 

With enough information about the vulnerable component, it is time to look at the vulnerability properties. Here, we will assume that the vulnerability is given as a CVE. The first information provided is then the CVE summary, which is often insufficient, at least if you want to fully understand the severity.

In this case, it might be useful to also look at other sources of information, e.g., news articles, mailing lists, or security advisories from the vendor. However, most vulnerabilities that have been known for a few days or a week have a CVSS score.

The CVSS base score is a generic score that gives the severity of the vulnerability, as a number from 0.0 to 10.0 with one decimal. It is added by NIST to each CVE as a way to provide richer vulnerability information, which is collected in the widely used NVD database. The simplest approach is to prioritize based on this score alone. However, this is suboptimal for several reasons.

  1. Since the score is generic it does not take into account how the vulnerable component is used by your application.
  2. The score does not immediately reveal how the application can be compromised or how it is affected.

Looking at the individual metrics behind the CVSS base score will give you the necessary information. Information that can be deduced by analyzing the CVSS metrics includes the following.

  • Attack vector. This will show how remote an attacker has to be in order to exploit the vulnerability. The most severe alternative is network, which means that the attack can be initiated over the internet. On the other end of the spectrum is physical, where the attacker must be in physical contact with the vulnerable component.
  • Attack complexity. This describes the conditions needed to exploit the vulnerability that are out of the attacker’s control. This includes guessing certain parameters or having to repeat the attack several times in order to succeed. The metric is given as low or high.
  • Privileges required. This describes the level of privileges required by the attacker before the attack can be launched. It can be none, low or high.
  • User interaction. This describes to what extent the user (not the attacker) must be actively involved in the attack. Some attacks can be launched purely at the will of the attacker, while others require, e.g., that the user installs an application or clicks on a link. None and required are the available options here.
  • Scope. This was introduced in CVSS 3.0 and captures if an attack can impact a component other than the vulnerable component. Escaping a sandbox or a virtual machine are examples where a component under another security authority is affected by an attack.
  • Confidentiality, integrity, and availability impact. This describes to what extent a successful attack will impact the application. Each of these has its own metric, and the impact is given as none, low or high.

These metrics are those used by CVSS 3.0 and CVSS 3.1. The previous CVSS 2.0 uses slightly different metrics. The CVSS score is maintained by FIRST, which has more detailed information on each metric. 
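NVD publishes these base metrics as a compact vector string alongside the numeric score. As a minimal sketch (the function name is our own), the individual metrics can be pulled out of such a string and compared against your own environment:

```python
def parse_cvss_vector(vector: str) -> dict[str, str]:
    """Split a CVSS 3.x vector string into a metric -> value mapping."""
    prefix, *parts = vector.split("/")
    if not prefix.startswith("CVSS:3"):
        raise ValueError("only CVSS 3.x vectors are handled in this sketch")
    return dict(part.split(":") for part in parts)

# Network attack vector, low complexity, no privileges, no user interaction,
# unchanged scope, high confidentiality impact only:
metrics = parse_cvss_vector("CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N")
print(metrics["AV"])  # N - exploitable over the internet
print(metrics["C"])   # H - high confidentiality impact
```

Having the metrics as individual values, rather than a single number, is what makes the mapping to your own application in step 4 possible.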

While the first four metrics relate to exploitability, the last bullet relates to impact. Together, they can be used as input when determining the overall risk: not by themselves, since they are generic, but combined with other information, such as how the vulnerable component is used in the application and which assets have to be protected.

Understanding how the different metrics are determined can be very useful. As an example, if a certain configuration is required for the attack to be effective, the target system is assumed to be in this configuration, so the requirement does not make the attack complexity high. Knowing which configuration is required will then help you judge the actual exploitability with regard to your application.

Step 3: Understand the current status of the vulnerability in terms of exploitation and remediation

The data in step 2 can be immediately found in NVD. The current status of the vulnerability is, however, changing and evolving, in particular for newly discovered vulnerabilities. Understanding the current status means that we need to focus on vulnerabilities that risk being exploited in the short term.

Moreover, if no fix is available, then we need to start looking at other possible remediations. This can include changing the configuration, closing ports in the firewall, or temporarily disabling that part of the service.

The current status is also captured by the CVSS temporal score. This score includes metrics that are specific to the vulnerability but change over time. These metrics are not given in NVD, but have to be assessed by the organization or another third party. Such information is important in the triage since, e.g., the status of known exploits or fixes will impact the severity of the vulnerability. The score includes the following metrics.

  • Exploit code maturity. This describes how well developed exploit code is. It can range between unproven and high, where the latter means that autonomous code exists and that it e.g., is actively used in worms.
  • Remediation level. This describes to which extent the vulnerability can be patched. It ranges from official fix to unavailable.
  • Report confidence. This describes the credibility of the available technical details. This ranges from unknown to confirmed.

The temporal score assumes a worst case scenario. This means high exploit code maturity, fix unavailable and confirmed existence of the vulnerability. If the situation is different for any of the metrics, the temporal score will be lower than the base score. The metrics defining the temporal score are important when prioritizing a vulnerability, whether or not the score is actually used.
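The temporal calculation itself is simple: the base score is multiplied by one factor per temporal metric, using the multiplier values from the CVSS 3.1 specification, and rounded up to one decimal. A sketch:

```python
# CVSS 3.1 temporal multipliers ("X" = not defined, treated as worst case 1.0).
EXPLOIT_CODE_MATURITY = {"X": 1.0, "H": 1.0, "F": 0.97, "P": 0.94, "U": 0.91}
REMEDIATION_LEVEL     = {"X": 1.0, "U": 1.0, "W": 0.97, "T": 0.96, "O": 0.95}
REPORT_CONFIDENCE     = {"X": 1.0, "C": 1.0, "R": 0.96, "U": 0.92}

def roundup(value: float) -> float:
    """CVSS 3.1 'roundup': round up to one decimal place."""
    i = round(value * 100000)
    return i / 100000 if i % 10000 == 0 else (i // 10000 + 1) / 10

def temporal_score(base: float, exploit: str, remediation: str, confidence: str) -> float:
    return roundup(base
                   * EXPLOIT_CODE_MATURITY[exploit]
                   * REMEDIATION_LEVEL[remediation]
                   * REPORT_CONFIDENCE[confidence])

# Worst case (high maturity, fix unavailable, confirmed) equals the base score:
print(temporal_score(7.5, "H", "U", "C"))  # 7.5
# Unproven exploit code and an official fix lower the score:
print(temporal_score(7.5, "U", "O", "C"))  # 6.5
```

This is why the worst-case temporal score equals the base score: all three multipliers are then 1.0, and any better situation can only pull the score down.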

In Debricked’s tool you will immediately get information on available exploit code and on fixes or new versions of the dependency. This will save you much time in this step.

Step 4: Combine the information from steps 1-3

Now that you have the necessary information about the application, the dependency, the vulnerability, and its current status, you can use it to determine if and when it is time to update and deploy a new version with the updated dependency.

The metrics of the CVSS score in step 2 should be mapped to the properties and requirements identified in step 1. As an example, looking more closely at the impact will give important information about the expected result of an attack. All three impact metrics have the same influence on the resulting score. If exploitability is worst case and scope is unchanged, a vulnerability with high confidentiality impact but no impact on integrity and availability will have CVSS score 7.5. Another vulnerability with the same exploitability and scope, but with no confidentiality or integrity impact and high availability impact, will also have CVSS score 7.5.

In other words, these two vulnerabilities have the same severity according to CVSS, but are fundamentally different in how they affect applications and systems. Which impact did you deem most important, and how does it align with the CVSS impact metrics? If uptime is of the highest importance but the vulnerability has no impact on availability, then the priority can be lowered.
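The 7.5 example can be checked with a small sketch of the CVSS 3.1 base score formula (scope unchanged only; the metric values are from the specification):

```python
# CVSS 3.1 metric values (scope unchanged).
AV  = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2}   # attack vector
AC  = {"L": 0.77, "H": 0.44}                         # attack complexity
PR  = {"N": 0.85, "L": 0.62, "H": 0.27}              # privileges required
UI  = {"N": 0.85, "R": 0.62}                         # user interaction
CIA = {"N": 0.0, "L": 0.22, "H": 0.56}               # C/I/A impact

def roundup(value: float) -> float:
    """CVSS 3.1 'roundup': round up to one decimal place."""
    i = round(value * 100000)
    return i / 100000 if i % 10000 == 0 else (i // 10000 + 1) / 10

def base_score(av, ac, pr, ui, c, i, a):
    # Impact sub-score: the three impact metrics contribute symmetrically.
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    return 0.0 if impact <= 0 else roundup(min(impact + exploitability, 10))

# High confidentiality impact only vs. high availability impact only:
print(base_score("N", "L", "N", "N", "H", "N", "N"))  # 7.5
print(base_score("N", "L", "N", "N", "N", "N", "H"))  # 7.5 - same score, very different risk
```

Because confidentiality, integrity, and availability enter the impact sub-score symmetrically, the formula itself cannot tell you which of the two vulnerabilities matters more to you; only the mapping to step 1 can.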

A similar approach can be taken with the exploitability metrics, e.g., the attack vector. If exploitation requires physical presence, but a vulnerable device is unreachable for untrusted threat actors, then priority can be lowered. This is implicitly assumed in the CVSS scoring, since physical presence will lower the CVSS score. If such attacks are possible in your environment, then it could be a reason for a higher priority.

For the current status, existence of exploits should increase the priority, compared to similar vulnerabilities where there are no known exploits, or where exploits are not known to be found in the wild.

CVSS includes a third score, the environmental score, which can be used to map the vulnerability to the needs of the organization. This is achieved in two ways. First, requirements can be set for each of the impact metrics. If you, e.g., are most concerned by threats to availability, then this metric can be weighted higher. Second, each metric used in the base score can be explicitly modified depending on how it applies to the organization.

This results in a score that is tailored to the organization and its needs. Debricked takes this one step further. Not only can you choose requirements for confidentiality, integrity, and availability, but the tool can also learn your preferences as you remediate vulnerabilities or deem them uninteresting. Furthermore, we use more vulnerability properties than those included in the CVSS score, in order to provide more accurate recommendations.
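How the confidentiality, integrity, and availability requirements reweight the severity can be sketched with the modified impact sub-score from the CVSS 3.1 environmental formula (scope unchanged; the function name is our own):

```python
# CVSS 3.1 security requirement weights: Low / Medium / High / Not defined.
REQ = {"L": 0.5, "M": 1.0, "H": 1.5, "X": 1.0}
CIA = {"N": 0.0, "L": 0.22, "H": 0.56}  # C/I/A impact values

def modified_iss(c, i, a, cr, ir, ar):
    """Requirement-weighted impact sub-score, capped at 0.915 per the spec."""
    return min(1 - (1 - REQ[cr] * CIA[c])
                 * (1 - REQ[ir] * CIA[i])
                 * (1 - REQ[ar] * CIA[a]), 0.915)

# A high-confidentiality-impact vulnerability, in an organization that rates
# its confidentiality requirement as High, versus one that rates it Medium:
print(round(modified_iss("H", "N", "N", "H", "M", "M"), 3))  # 0.84 - weighted up
print(round(modified_iss("H", "N", "N", "M", "M", "M"), 3))  # 0.56 - unchanged
```

The rest of the environmental calculation then proceeds like the base score, so a higher requirement on the metric you care about directly raises the tailored score.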

Final words

The triage process is not only important for prioritizing vulnerabilities. It is also important for understanding the current risks a system or an application is exposed to. Such knowledge can be important for future development and implementation of prevention measures. It is also useful when you need to respond to security incidents when that day comes.

Working with security is a process, and that process never really ends. Triaging will allow your organization to use the available security resources efficiently. No system is ever 100% secure, but by incorporating security in the everyday workflow, many attacks can be prevented. 

If you’re unsure of what to look for in a tool, please see our guide on how to evaluate an SCA tool.