CVSS is maintained and developed by the Forum of Incident Response and Security Teams (FIRST) and used primarily by NVD for scoring vulnerabilities. A key feature of the metric is that it is designed to be repeatable, based on an objective assessment of a few underlying characteristics. The score is primarily used for two purposes:
- To understand the characteristics and severity of a vulnerability
- To prioritize the work when remediating different vulnerabilities
The first is important for vulnerability assessment, while the second helps focus resources on the right tasks. With around 15,000 new vulnerabilities recorded in the NVD each year, your organization likely faces the problem of addressing vulnerabilities on a regular basis.
The most recent version of the specification is CVSS v3.1, which is used as the main basis for this post.
Different CVSS scores
Even though CVSS is often referred to as one score, it actually defines three different scores.
Base Score. This score reflects the intrinsic properties of a vulnerability. It aims to collect information that will not change, so the base score is typically fixed throughout the vulnerability's lifetime. It consists of both exploitability and impact metrics. This is the score you most often see and which is provided by NVD.
Temporal Score. This score is based on data that can change over time, which means the score itself can change over time. The inputs are the exploit code maturity, the status of available fixes (remediation level), and the confidence in the vulnerability report (report confidence).
Environmental Score. This score can be adapted to how it affects a specific organization. It can both be used to modify the different parts of the base metric and to adapt the severity based on how much the organization values certain impacts.
NVD only provides data for the base score. The other scores must be compiled individually or through a third-party provider. Let us look more closely at the underlying data for the base score.
The base score consists of a set of metrics. These metrics are chosen to (1) reflect the exploitability of the vulnerability and (2) its impact if it is exploited. Both these metrics and their granularity have evolved through the different versions of CVSS.
There is a tradeoff between granularity and scoring complexity that must be taken into account when defining CVSS. A highly granular metric provides better score diversity and captures more information about the vulnerability.
On the other hand, it makes consistent scoring more difficult. An important aspect of the CVSS score is that two independent analysts should be able to arrive at the same score. Thus, unnecessary options, e.g., options that are rarely used and would not increase diversity, should be avoided.
The base score is a combination of two scores, Exploitability and Impact. These are in turn computed using a total of eight metrics.
Table 1. Overview of the metrics in the CVSS base score.
| Metric | Options (numeric weight) |
|---|---|
| Attack Vector (AV) | Network (N): 0.85, Adjacent (A): 0.62, Local (L): 0.55, Physical (P): 0.2 |
| Attack Complexity (AC) | Low (L): 0.77, High (H): 0.44 |
| Privileges Required (PR) | None (N): 0.85, Low (L): 0.62 (0.68 if Scope is Changed), High (H): 0.27 (0.5 if Scope is Changed) |
| User Interaction (UI) | None (N): 0.85, Required (R): 0.62 |
| Scope (S) | Unchanged (U), Changed (C) |
| Confidentiality (C) / Integrity (I) / Availability (A) | High (H): 0.56, Low (L): 0.22, None (N): 0 |
Apart from the specification, FIRST also provides a user guide and a set of examples for analysts to use when scoring a vulnerability. This, together with the specification, also provides more details on the different metrics.
The first four are used to compute the exploitability as:
Exploitability = 8.22 · AV · AC · PR · UI
The impact is computed by first finding the Impact Sub-Score (ISS) as
ISS = 1 - (1 - C) · (1 - I) · (1 - A)
and then computing the impact as
Impact = 6.42 · ISS, if Scope is Unchanged
Impact = 7.52 · (ISS - 0.029) - 3.25 · (ISS - 0.02)^15, if Scope is Changed
Note that Scope affects both Exploitability and Impact, while the other metrics only affect one of them.
We can see that exploitability ranges between 0.12 – 3.89 when Scope is Unchanged and 0.22 – 3.89 when Scope is Changed. Impact ranges between 0 – 5.87 when Scope is Unchanged and -0.22 – 6.05 when Scope is changed.
The final step is to compute the base score as a combination of the Exploitability and Impact.
Base Score = 0 if Impact ≤ 0
Base Score = Roundup(Min(Impact + Exploitability, 10)) if Scope is Unchanged
Base Score = Roundup(Min(1.08 · (Impact + Exploitability), 10)) if Scope is Changed
Here, Roundup returns the smallest number, specified to one decimal place, that is equal to or higher than its input.
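As a sketch, the formulas above can be put directly into Python. The metric weights passed in are the numeric values from Table 1; the `roundup` helper follows the definition in the v3.1 specification (round up to one decimal place).

```python
import math

def roundup(x: float) -> float:
    """Smallest number, to one decimal place, that is >= the input (CVSS v3.1)."""
    i = round(x * 100000)
    if i % 10000 == 0:
        return i / 100000.0
    return (math.floor(i / 10000) + 1) / 10.0

def base_score(av, ac, pr, ui, scope_changed, c, i, a):
    """Combine the eight base metrics, given as their numeric weights."""
    exploitability = 8.22 * av * ac * pr * ui
    iss = 1 - (1 - c) * (1 - i) * (1 - a)
    if scope_changed:
        impact = 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15
    else:
        impact = 6.42 * iss
    if impact <= 0:
        return 0.0
    if scope_changed:
        return roundup(min(1.08 * (impact + exploitability), 10))
    return roundup(min(impact + exploitability, 10))

# Worst-case options with Scope Unchanged: AV:N, AC:L, PR:N, UI:N, C/I/A all High
print(base_score(0.85, 0.77, 0.85, 0.85, False, 0.56, 0.56, 0.56))  # 9.8
# The same options with Scope Changed
print(base_score(0.85, 0.77, 0.85, 0.85, True, 0.56, 0.56, 0.56))   # 10.0
```

Running it with the worst-case options reproduces the maximum scores of 9.8 (Scope Unchanged) and 10.0 (Scope Changed).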
The highest Base Score is then 9.8 when Scope is Unchanged and 10.0 when Scope is Changed.
Choosing the constants
The expressions and the numeric values for the different options may at first seem a bit arbitrary. However, much work has been put into choosing the constants and the expressions. They were determined by rating real vulnerabilities according to the different options, ranking them in order of severity and also giving them a numeric score.
This results in a lookup table, where knowing the different properties allows finding a severity score. This lookup table was then converted to mathematical expressions, using constants that best matched the lookup table. In other words, the expressions are just approximations of the actual score intended by the CVSS Special Interest Group (SIG).
In some cases, having a qualitative rating instead of the 0-10 score can be beneficial. This is accomplished by a simple mapping from a range of scores to a qualitative severity scale. For CVSS v3.1 (and v3.0) this mapping is given by
Table 2. Mapping between quantitative and qualitative CVSS ratings.
| Score range | Severity rating |
|---|---|
| 0.0 | None |
| 0.1 – 3.9 | Low |
| 4.0 – 6.9 | Medium |
| 7.0 – 8.9 | High |
| 9.0 – 10.0 | Critical |
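This mapping is simple enough to express directly in code. A minimal Python sketch (the `severity_rating` helper is our own, not part of any CVSS library):

```python
def severity_rating(score: float) -> str:
    """Map a CVSS v3.x score (0.0-10.0) to its qualitative severity rating."""
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

print(severity_rating(9.8))  # Critical
```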
If underlying data is not available, the worst-case scenario is assumed. For the base score, if it is not clear which option to use for a metric, the worst case should be chosen. The temporal metrics always default to the worst case.
For example, if we do not know the status of an exploit, we just assume that there is one and that it is fully working. Thus, the temporal score starts out being equal to the base score, and additional information regarding exploits, fixes and report confidence can then lower the score.
Because of this, the temporal score is never higher than the base score. The environmental score does not have this property as it depends on how important or applicable certain aspects of the vulnerability are to an organization.
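This behavior can be sketched in Python. The temporal score is the base score multiplied by the three temporal weights and rounded up; the example weights below are the v3.1 values for a proof-of-concept exploit (E:P = 0.94) and an official fix (RL:O = 0.95), with Not Defined (X) defaulting to 1.0.

```python
import math

def roundup(x: float) -> float:
    """Smallest number, to one decimal place, that is >= the input (CVSS v3.1)."""
    i = round(x * 100000)
    if i % 10000 == 0:
        return i / 100000.0
    return (math.floor(i / 10000) + 1) / 10.0

def temporal_score(base: float, e: float = 1.0, rl: float = 1.0, rc: float = 1.0) -> float:
    """Temporal score: base score scaled by exploit code maturity (e),
    remediation level (rl) and report confidence (rc), each <= 1.0."""
    return roundup(base * e * rl * rc)

# All temporal metrics Not Defined: the temporal score equals the base score.
print(temporal_score(9.8))  # 9.8
# Proof-of-concept exploit and an official fix lower the score.
print(temporal_score(9.8, e=0.94, rl=0.95))  # 8.8
```

Since all three weights are at most 1.0, the temporal score can never exceed the base score, as stated above.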
Representation of CVSS score
The CVSS score, including all metrics that constitute the score, is given in a standardized concise format. This supports transparency of the metrics and easy portability between systems and implementations.
An example of such a string, only including the base metrics (here using the worst-case options from Table 1), is
CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
It starts with the string “CVSS”, followed by the version, separated by a colon. Then each metric is given, separated by a forward slash, using the abbreviations for the metric and the option (see Table 1). All base metrics must be included, while the temporal and environmental metrics are optional. From the string it is easy to compute the base score, while at the same time having a very compact format of describing the vulnerability in terms of the metrics.
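As an illustration, a vector string in this format can be split into its parts with a few lines of Python (the `parse_vector` helper is our own, not part of any CVSS library):

```python
def parse_vector(vector: str) -> dict:
    """Split a CVSS vector string into its version and metric/option pairs."""
    prefix, *metrics = vector.split("/")
    label, _, version = prefix.partition(":")
    if label != "CVSS":
        raise ValueError("not a CVSS vector string")
    return {"version": version,
            "metrics": dict(m.split(":") for m in metrics)}

v = parse_vector("CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H")
print(v["version"])        # 3.1
print(v["metrics"]["AV"])  # N
```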
Introduced in CVSS v3.1, there is also support for extending CVSS with additional metrics through the Extensions Framework.
The most recent version is CVSS v3.1, while previous versions include v1.0, v2.0, and v3.0. There are significant changes between v1.0, v2.0, and v3.0, both in granularity, in which metrics are included, and in how the score is computed.
CVSS v1.0 was first published in 2004. CVSS v2.0 was published in 2007 and was adopted as an international standard for scoring vulnerabilities (ITU-T X.1521) in 2011. The widespread adoption of CVSS v2.0 made it possible to identify improvements, which were included in CVSS v3.0 in 2015.
As an example, the scope metric was introduced in CVSS v3.0. A changed scope means that the vulnerable component and the impacted component are governed by different security authorities. Take a virtual machine as an example: the VM monitor can be vulnerable, while an exploit affects the host OS.
The scope metric is used to reflect this change in authorities. Another significant change was that the "access complexity" used in CVSS v2.0 was split into "attack complexity" and "user interaction" in CVSS v3.0. The reason was that "access complexity" mixed conditions beyond the attacker's control with the requirement for human interaction; these were put into two separate metrics.
Metrics and expressions were not changed for v3.1 compared to v3.0. Instead, v3.1 updated the specification to clarify the guidance and to remove ambiguity when determining different metric options. The goal was to allow analysts to make more consistent decisions.
NVD does not provide CVSS v3.0 scores for vulnerabilities that were analyzed before 2015-12-20, except in some special cases. Starting September 2019, NVD uses CVSS v3.1 for severity scoring, both for new CVEs and for those that are re-analyzed.
Using the CVSS score
As mentioned in the beginning, you can use the CVSS score to better understand the properties of a vulnerability. These are important in triaging, where you determine how to respond to the vulnerability. The response can be in the form of patching, deprecating support for certain functionality or algorithms, or accepting the risk. When it comes to risk, it is important to understand that the base score should not be seen as a risk metric, but as a severity score. Indeed, when computing risk, the likelihood of exploitation and the impact are the two main aspects to consider.
The exploitability subscore is directly related to the likelihood, since it captures different aspects of how easy the vulnerability is to exploit. It is thus tempting to treat the score as a risk measure. At the same time, the CVSS base score only measures intrinsic aspects of the vulnerability, which is not enough for measuring risk. The CVSS score should only be used as one input to the risk assessment, not as an actual measure of the risk.
In conclusion, the base score does convey important information about the vulnerability, but the underlying metrics, as well as an assessment of the environment and the exploit/patch status, are essential in order to take the most appropriate action.