The CVSS score gets updated to CVSS v4.0

by Martin Hell

The CVSS score is the de facto industry standard for scoring the severity of vulnerabilities. The newest installment of the specification is CVSS v4.0. Here, we will discuss the changes that have been made and compare the new version to CVSS v3.x.


Our previous post on the CVSS score was based on CVSS v3.1. This version was released in 2019. It was only a slight update from CVSS v3.0, adding some clarifications and extensions. If we include version 3.0 as well, the current metrics and metric values have been in use since 2015, when CVSS v3.0 was released. That is a long time, and a lot of water has passed under the bridge since then.

It was time for a more significant update. Time to think about lessons learned. Time to collect perceived limitations and identified improvements. It was time for CVSS v4.0, and that time is now.

The CVSS metric is managed by FIRST.Org, Inc., which has also been the driving force behind the new version. But they are not alone. There has been a public preview period allowing anyone to provide feedback. The development of the updated scoring system has also relied on industry experts to make sure that the final severity score for a vulnerability matches people's actual view of the severity.

This post is not intended to be a complete overview of the CVSS v4.0 specification. Rather, we focus on the changes that have been made and compare it to the previous CVSS v3.x version. For an overview of CVSS 3.x, we refer to our previous post on the topic. FIRST also provides a list of changes and a slide deck discussing the CVSS v4.0 specification.

New name, not only new version

Well, kind of. The name is not entirely new. It is still called CVSS. But it turned out that using just "CVSS" to refer to the score was sometimes a bit confusing.

There are three scores, or actually four of them if you combine them all. When we just say CVSS it is not really clear which one is being used. The base score is the one that we most often encounter, e.g., on NVD. Then there is the temporal score, which in CVSS v4.0 is renamed to the threat score, and then there is the environmental score. 

The base score is always used, and then you can add any subset of {threat, environmental} to it, giving four different scores in total for one vulnerability. As a result, there are now four named scores:

  • CVSS-B for the base score only
  • CVSS-BT if the threat metrics have been added
  • CVSS-BE if the environmental metrics have been added
  • CVSS-BTE if both the threat and the environmental metrics have been added

According to the specification, this new nomenclature should be used to refer to the respective score.
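As a sketch of how the nomenclature works in practice, the small helper below (hypothetical, not part of any official tooling) classifies a CVSS v4.0 vector string by which optional metric groups it uses. The metric abbreviations (E for Exploit Maturity; CR/IR/AR and the M-prefixed metrics for the environmental group) follow the v4.0 specification.

```python
# Hypothetical helper: derive the CVSS v4.0 nomenclature (CVSS-B, -BT,
# -BE, or -BTE) from which optional metric groups a vector string uses.

THREAT_METRICS = {"E"}  # Exploit Maturity
ENVIRONMENTAL_METRICS = {
    "CR", "IR", "AR",                   # security requirements
    "MAV", "MAC", "MAT", "MPR", "MUI",  # modified exploitability metrics
    "MVC", "MVI", "MVA",                # modified vulnerable-system impact
    "MSC", "MSI", "MSA",                # modified subsequent-system impact
}

def nomenclature(vector: str) -> str:
    """Return the v4.0 nomenclature for a given vector string."""
    # Skip the "CVSS:4.0" prefix, then split "METRIC:VALUE" pairs.
    metrics = dict(p.split(":") for p in vector.split("/")[1:])
    # A metric only counts as used if it is not "X" (Not Defined).
    used = {m for m, v in metrics.items() if v != "X"}
    name = "CVSS-B"
    if used & THREAT_METRICS:
        name += "T"
    if used & ENVIRONMENTAL_METRICS:
        name += "E"
    return name

base = "CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:H/VI:H/VA:H/SC:N/SI:N/SA:N"
print(nomenclature(base))           # CVSS-B
print(nomenclature(base + "/E:P"))  # CVSS-BT
```

The same lookup extends naturally: adding, say, `/CR:H` to the base vector would yield CVSS-BE, and adding both would yield CVSS-BTE.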

New metric: Attack Requirements 

In CVSS v3.x there is an Attack Complexity metric. This is still there in CVSS v4.0, but a new metric called Attack Requirements has also been added. The idea is to separate attack difficulties into two categories.

Attack Complexity

This takes into consideration security controls deployed on the attacked system that the attacker must circumvent. Examples are ASLR and DEP, which make it more difficult to execute code in known (static) memory locations and on the stack. Added complexity can also come from having to perform additional attacks in order to obtain secrets that are needed to exploit the vulnerability.

Attack Requirements

In contrast, this metric considers deployment and execution conditions that may make the attack more difficult. These conditions are not explicitly there to mitigate attacks, but they still make the attack harder. One example is a race condition that requires the attack to be launched several times in order to succeed.

Attack Complexity is given as Low or High, while Attack Requirements is either None or Present.
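In vector-string form, the two difficulty metrics appear side by side: AC with values L/H as before, and the new AT with values N (None) or P (Present). The vectors below use assumed example values for the remaining metrics.

```python
# Illustrative only: comparing the two v4.0 difficulty metrics,
# AC (Attack Complexity) and AT (Attack Requirements), in two vectors.
easy   = "CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:H/VI:H/VA:H/SC:N/SI:N/SA:N"
harder = "CVSS:4.0/AV:N/AC:H/AT:P/PR:N/UI:N/VC:H/VI:H/VA:H/SC:N/SI:N/SA:N"

def difficulty(vector: str) -> dict:
    """Extract the two difficulty metrics from a v4.0 vector string."""
    metrics = dict(p.split(":") for p in vector.split("/")[1:])
    return {"AC": metrics["AC"], "AT": metrics["AT"]}

print(difficulty(easy))    # {'AC': 'L', 'AT': 'N'}
print(difficulty(harder))  # {'AC': 'H', 'AT': 'P'}
```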

Higher granularity for user interaction

The user interaction metric was introduced in CVSS v3.x; in CVSS v2.0 it was included as an aspect of attack complexity. In CVSS v3.x, user interaction is given as either required or not. In CVSS v4.0, one more level of granularity has been added. The idea is to separate user interactions that the user cannot reasonably identify as part of an attack from those that an educated user can spot. These two variants are denoted Passive and Active, respectively.

We can use the well-known XSS (cross-site scripting) attack to compare the two values passive and active. In a stored XSS attack, the attacker has managed to inject code into the stored state of a web page. When a user visits that page, the code is invoked. The user does not know (in advance) that this code is malicious and will be invoked. Thus, this is regarded as passive user interaction and adds more severity to the final score.

By contrast, in a reflected XSS attack, the malicious code is sent from the user to the vulnerable server, with the intent that it is returned and executed in the user's client. It then runs in the context of the vulnerable application and can extract or modify information or perform actions on the user's behalf. Since such code is often delivered to the user by the attacker in a link, an educated user can identify that an attack is being mounted. Thus, this is regarded as Active user interaction, and it adds less to the final score since the attack is more difficult to achieve.

Vulnerable and subsequent systems

When exploiting a vulnerability, this can have impacts on different parts of the system. In some cases, only the part that is vulnerable is impacted, but in other cases confidentiality, integrity and/or availability can be affected on other parts of the system.

In CVSS v3.x, this was captured by the Scope metric. The scope could be either Changed or Unchanged. If it was Changed, an exploitation affected parts of the system that were outside the scope of the vulnerable component.

In CVSS v4.0, this concept is refined. The Scope metric is removed and replaced with explicit impact metrics for both the vulnerable and the subsequent system. Previously, confidentiality, integrity, and availability were measured as the overall impact, with the Scope value indicating whether this impact reached beyond the vulnerable system. Now, the impacts are separated to clarify what impact there is on the different parts of the system. Still, if there is impact on several other components, this impact is grouped into one.

So, in CVSS v4.0 there are now two categories for each of the confidentiality, integrity, and availability metrics. One is the vulnerable system and one is the subsequent system.

There will probably be some difficulties in assessing whether an impacted component is part of the vulnerable system or not. The guideline here is that if the impacted component is only used to serve the vulnerable system, then it is still part of it, even if it is logically a separate component.
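A minimal sketch of the new split, assuming a hypothetical parser: the six impact metrics divide into a vulnerable-system triple (VC/VI/VA) and a subsequent-system triple (SC/SI/SA), where a non-None value in the latter plays roughly the role of the old Scope: Changed.

```python
# Hypothetical helper: group the six v4.0 impact metrics by system.
def impacts(vector: str) -> dict:
    """Split impact metrics into vulnerable- and subsequent-system groups."""
    m = dict(p.split(":") for p in vector.split("/")[1:])
    return {
        "vulnerable": {k: m[k] for k in ("VC", "VI", "VA")},
        "subsequent": {k: m[k] for k in ("SC", "SI", "SA")},
    }

# Rough analogue of a v3.x "Scope: Changed" case: full impact on the
# vulnerable system plus confidentiality impact on a subsequent system.
v = "CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:H/VI:H/VA:H/SC:H/SI:N/SA:N"
print(impacts(v))
# {'vulnerable': {'VC': 'H', 'VI': 'H', 'VA': 'H'},
#  'subsequent': {'SC': 'H', 'SI': 'N', 'SA': 'N'}}
```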

Changes to the threat metrics

The threat metrics group was known as the temporal metrics in CVSS v3.x. It was named temporal since the metrics in it could not be assumed to be constant over time. It included the metrics Remediation Level, Report Confidence, and Exploit Code Maturity. In CVSS v4.0, this metric group is renamed to threat metrics, and both the Remediation Level and Report Confidence metrics have been removed. Left is only Exploit Code Maturity, which has been renamed to Exploit Maturity.

In addition to Not Defined, the exploit maturity can take on three values. Attacked indicates that there are signs of the vulnerability being exploited, Proof-of-Concept (PoC) that there is some code available, but it has most likely not been in actual use, and Unreported that there are no signs of a PoC or any form of attempts to exploit the vulnerability.
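These values can be summarized as a small enum; the single-letter codes follow the v4.0 specification's vector notation.

```python
from enum import Enum

class ExploitMaturity(Enum):
    """Values of the CVSS v4.0 Exploit Maturity (E) metric."""
    NOT_DEFINED = "X"  # no threat information available; scored as worst case
    ATTACKED = "A"     # signs of the vulnerability being exploited
    POC = "P"          # proof-of-concept code exists, likely no real-world use
    UNREPORTED = "U"   # no PoC and no known exploitation attempts

# Parsing the code from a vector component such as "E:P":
print(ExploitMaturity("P").name)  # POC
```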

In CVSS v3.x, when calculating the resulting score, unspecified temporal metrics are assumed to have the worst-case values. If these values are specified (and are not worst case), the resulting score is slightly lowered. CVSS v4.0 works similarly, but the impact on the resulting score is much more significant.

Computing the score

In CVSS v3.x, there are defined expressions for computing the score. These expressions are quite complex, but most of all, they are very unintuitive. They do make some sense, though, as they were formed by fitting mathematical expressions to scores that were heuristically set by industry experts. A drawback of this approach is that not all of the 101 possible scores are achievable (0.0 to 10.0 in increments of 0.1).

The new scoring system for CVSS v4.0 was similarly developed by consulting industry experts and letting them rank different metric combinations. Since there are millions of combinations, the CVSS vectors were grouped, while making sure that single changes in a vector still change the score. The resulting score is now computed using a table lookup, with a slight adjustment based on the distance to similar vectors in the table.

As a result, there are no longer complex mathematical expressions that define the score, but there is still quite a lot of complexity behind the scenes when finding the resulting score.
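The idea can be caricatured with a toy lookup. The table values and the two-level grouping below are invented for illustration; the real v4.0 system uses so-called macro vectors and a much larger, expert-derived table.

```python
# Toy sketch of lookup-plus-adjustment scoring (NOT the real v4.0 math).
# Invented table: (exploitability_level, impact_level) -> score, where
# higher level numbers mean lower severity.
MACRO_SCORES = {
    (0, 0): 9.8, (0, 1): 8.1,
    (1, 0): 8.8, (1, 1): 6.5,
}

def score(exploitability_level: int, impact_level: int,
          depth_fraction: float) -> float:
    """Look up the group's score, then interpolate toward the next lower
    severity group by how 'deep' the vector sits inside its own group."""
    base = MACRO_SCORES[(exploitability_level, impact_level)]
    lower = MACRO_SCORES.get((exploitability_level, impact_level + 1), base)
    return round(base - (base - lower) * depth_fraction, 1)

print(score(0, 0, 0.0))  # 9.8  (at the top of its group)
print(score(0, 0, 1.0))  # 8.1  (at the boundary to the next group)
```

The key property this mimics is that two vectors in the same group no longer collapse to exactly the same score: the adjustment shifts each one slightly, based on its distance to neighbouring table entries.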

Supplemental metrics

In addition to the metrics that go into the computation of the CVSS score, there are also other optional metrics that can be defined. These will provide more context and information about the vulnerability that could be interesting to stakeholders. Exactly how to use this information is up to each organization and their relevance is likely dependent on the use case. They should be seen as input to the overall risk analysis related to a vulnerability.

  • Safety. The extent to which the vulnerability can cause injuries to humans.
  • Automatable. This captures if the steps in the kill chain “reconnaissance, weaponization, delivery, exploitation” are automatable by an attacker.
  • Provider urgency. This allows a provider of a service to indicate how urgently a vulnerability must be handled by the consumer.
  • Recovery. How easily a system can be recovered after an attack.
  • Value density. The number and the importance of the resources that an attacker will control as a result of exploiting the vulnerability.
  • Vulnerability response effort. How easy it is to respond to the vulnerability through mitigation and/or remediation.

All supplemental metrics will take on one of a few different values, very similar to the other CVSS metrics.
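In vector form, these metrics are appended to the CVSS string with their own abbreviations (S, AU, R, V, RE, and U for Provider Urgency, per the v4.0 specification). A hypothetical extractor:

```python
# Supplemental metric abbreviations per the CVSS v4.0 specification:
# Safety (S), Automatable (AU), Recovery (R), Value Density (V),
# Vulnerability Response Effort (RE), Provider Urgency (U).
SUPPLEMENTAL = {"S", "AU", "R", "V", "RE", "U"}

def supplemental(vector: str) -> dict:
    """Extract any supplemental metrics present in a v4.0 vector string."""
    m = dict(p.split(":") for p in vector.split("/")[1:])
    return {k: v for k, v in m.items() if k in SUPPLEMENTAL}

v = ("CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/"
     "VC:H/VI:H/VA:H/SC:N/SI:N/SA:N/AU:Y/R:U")
print(supplemental(v))  # {'AU': 'Y', 'R': 'U'}
```

Since the supplemental metrics do not affect the numeric score, a consumer can ignore them entirely or feed them into its own risk analysis, as the post describes.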


CVSS v4.0 introduces several changes to the CVSS score. Most end consumers of the score will only be moderately affected by this. If you only look at the final score, you will probably not even notice it. Still, the granularity is increased, providing more information to the triage process. This should lead to better and more accurate assessments.