OWASP Top 10, 2021 Edition: 10 Things You Need to Know

by Martin Hell

The OWASP Top 10 has become a de facto industry standard for enumerating and ranking security risks in web applications. With the 2021 edition of the list recently published, here we list 10 things you need to know to understand the list and to use it in an informed way, both as a developer and within your organization.

OWASP top 10 security risks in 2021

The long-awaited 2021 update of the well-known OWASP Top 10 list of security risks was released on September 24, 2021. The list has become a go-to resource for web application developers and organizations to better understand how to secure their applications in light of the most common and severe security risks.

Though not a complete and comprehensive catalog or knowledge base, the list provides an excellent starting point for increasing the security of applications.

The 2021 OWASP Top 10 list in summary

Understanding the OWASP list

Several categories are new, based on mergers of previous categories, or somewhat renamed compared to the previous edition, released in 2017. For us at Debricked, as the developer of a Software Composition Analysis (SCA) tool, the category A06:2021-Vulnerable and Outdated Components is particularly interesting. It has climbed from ninth place in the 2017 list to sixth place in the 2021 list. Moreover, it was ranked number two in the community survey.

The OWASP Top 10 website gives an overview of the categories and how they relate to the previous edition. Below we list 10 things you need to know in order to understand the list and to get the most out of this valuable resource.

1. The 2021 list is the 7th installment of the OWASP Top 10 list

The first edition was published in 2003, followed by updates in 2004, 2007, 2010, 2013, 2017, and now 2021. The injection category had held first place since 2010, but for the first time in over a decade it was dethroned in 2021. First place is now taken by A01:2021-Broken Access Control, a category that held fifth place in the previous 2017 edition.

2. The categories are determined from a mix of quantitative and qualitative data

Most categories are based on statistics from security testing and code scans, collecting the most prevalent and severe weaknesses. The statistics in the 2021 list are gathered from data from 2017 onwards. This means that some very new weaknesses might not yet have made their way into the quantitative statistics in a representative way. Indeed, it often takes time to build good tests for new weaknesses and to integrate those tests into tools.

Because of this, the list also takes into consideration weaknesses and risks highlighted by a community survey of developers and security experts.

The survey helps identify weaknesses that are not yet so visible in the data. Quantitative data determines eight of the ten categories, while A10:2021-Server-Side Request Forgery (SSRF) and A09:2021-Security Logging and Monitoring Failures are based on the community survey, where they were ranked #1 and #3 respectively. The A06:2021-Vulnerable and Outdated Components category ranked #2 in the survey but had enough quantitative data to make the Top 10 even without it.

3. Each Top 10 risk consists of a set of underlying CWEs

General weaknesses that can be found in applications or organizations are given a CWE identifier, and each Top 10 category is based on a set of such CWEs. This mapping makes the risk categories transparent: it is well defined which weakness or vulnerability goes into which category, at least to the extent that the CWEs themselves are well defined.

The number of underlying CWEs varies greatly between the categories. A04:2021-Insecure Design has 40 CWEs mapped to it, while A10:2021-Server-Side Request Forgery (SSRF) has only 1.

4. Statistical data is based on application incidences rather than total frequency

This means that if the same vulnerability occurs in several places in an application, it is still only counted once in the statistics. Some weaknesses, such as Cross-Site Scripting or SQL injection, can often be found in multiple places in an application. To reduce the impact of such repeated findings, the Top 10 counts only whether the weakness occurs in an application at all. Even so, injection vulnerabilities rank third in the 2021 list, showing that they are very common and often severe.
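The difference between incidence and total frequency can be sketched in a few lines. This is not OWASP's actual tooling, just an illustration with hypothetical scan findings: each CWE is counted at most once per application, no matter how many times a scanner reports it there.

```python
from collections import defaultdict

# Hypothetical scan findings as (application, CWE) pairs; the same
# weakness may be reported several times within one application.
findings = [
    ("shop-frontend", "CWE-79"),  # XSS in the search form
    ("shop-frontend", "CWE-79"),  # ...and again in the comment form
    ("shop-frontend", "CWE-89"),
    ("billing-api",   "CWE-89"),
]

def incidence(findings):
    """Count each CWE at most once per application (incidence),
    rather than once per occurrence (frequency)."""
    apps_per_cwe = defaultdict(set)
    for app, cwe in findings:
        apps_per_cwe[cwe].add(app)
    return {cwe: len(apps) for cwe, apps in apps_per_cwe.items()}

print(incidence(findings))  # CWE-79 counts once despite two findings
```

With this counting, the double XSS finding in `shop-frontend` contributes the same as a single one, while CWE-89 counts in two applications.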

5. CVSS scores are used as input to the risk ranking

In addition to the incidence rate, which describes how common a certain weakness is among tested applications, the ranking also takes into consideration how severe such vulnerabilities typically are. Severity is taken from the CVSS scores of the CVE vulnerabilities listed in the NVD. A CVSS score indicates how severe a particular vulnerability is, and consists of both an exploitability subscore and an impact subscore.

These subscores are collected for all CVEs within a CWE group, and the Top 10 risks are weighted by them, so that categories whose vulnerabilities tend to be more severe rank higher.
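The weighting step can be illustrated with a minimal sketch. The subscore values below are hypothetical, and this is a simplification of the OWASP methodology (which works from real NVD data): average the exploitability and impact subscores over all CVEs mapped to a CWE group.

```python
# Hypothetical CVE records grouped by CWE, each as an
# (exploitability, impact) pair of CVSS subscores on the 0-10 scale.
# Real data would come from the NVD.
cves_by_cwe = {
    "CWE-89":  [(3.9, 5.9), (2.2, 5.9)],
    "CWE-918": [(2.8, 4.2)],
}

def avg_subscores(cves):
    """Average exploitability and impact over all CVEs in a CWE group."""
    n = len(cves)
    exploit = sum(e for e, _ in cves) / n
    impact = sum(i for _, i in cves) / n
    return round(exploit, 2), round(impact, 2)

# Per-CWE severity weights feeding into the ranking
weights = {cwe: avg_subscores(cves) for cwe, cves in cves_by_cwe.items()}
print(weights)
```

A CWE group whose CVEs tend toward high subscores would then pull its Top 10 category upward in the ranking.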

6. Security functions could be spread over several categories

Even if a specific weakness has a well-defined category, certain security functionality will have weaknesses spread over several categories. We can look at password handling as an example. Intuitively, weaknesses related to password handling would go into the A07:2021-Identification and Authentication Failures category.

Indeed, here we find weaknesses such as hardcoded passwords (CWE-259), missing authentication (CWE-306), failure to restrict the number of successive authentication attempts (CWE-307), weak password requirements (CWE-521), and weak password recovery functionality (CWE-640). However, several related weaknesses are captured in other categories.

  1. If passwords are weakly encoded (CWE-261), e.g., using base64, or hashed with low computational effort (CWE-916), that would fall under A02:2021-Cryptographic Failures.
  2. If the database query to verify the password allows a SQL injection (CWE-89) to bypass authentication, this would fall under A03:2021-Injection.
  3. If passwords are stored in a recoverable format (CWE-257), e.g., compressed rather than hashed, this weakness belongs to the A04:2021-Insecure Design category.
  4. Storing passwords in a configuration file (CWE-260) would be a weakness in the A05:2021-Security Misconfiguration category.
  5. If a third-party component is used to handle passwords, and that component has a vulnerability, then A06:2021-Vulnerable and Outdated Components would be the relevant category.
  6. If the password is stored in a log file (CWE-532), this would belong to the category A09:2021-Security Logging and Monitoring Failures.
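Two of the weaknesses above, weak encoding (CWE-261) and SQL injection in the password check (CWE-89), can be made concrete with a small sketch. The passwords, table, and attacker input are invented for illustration; the safer variants use only Python's standard library (PBKDF2 via `hashlib`, parameterized queries via `sqlite3`).

```python
import base64
import hashlib
import os
import sqlite3

# CWE-261 (weak encoding): base64 is reversible, so it offers no protection.
weak = base64.b64encode(b"hunter2")
print(base64.b64decode(weak))  # the original password is trivially recovered

# Safer: a salted, deliberately slow hash (PBKDF2 from the standard library).
salt = os.urandom(16)
digest = hashlib.pbkdf2_hmac("sha256", b"hunter2", salt, 600_000)

# CWE-89 (SQL injection): a string-built query lets input bypass the check.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, pw BLOB)")
db.execute("INSERT INTO users VALUES ('alice', x'00')")

name = "' OR '1'='1"  # attacker-controlled input
unsafe = f"SELECT count(*) FROM users WHERE name = '{name}'"
print(db.execute(unsafe).fetchone()[0])  # 1 -- matches despite the wrong name

# Safer: a parameterized query treats the input as data, not as SQL.
safe = "SELECT count(*) FROM users WHERE name = ?"
print(db.execute(safe, (name,)).fetchone()[0])  # 0 -- no match
```

The point of the mapping is that both bugs sit in the same login flow, yet the first belongs to A02:2021-Cryptographic Failures and the second to A03:2021-Injection.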

7. Testing your web application for security weaknesses is key to success

Understanding the risks and raising awareness among developers is the first step to developing more secure web applications. Testing is key to identifying weaknesses and should be performed early on in the Software Development Life Cycle (SDLC). The cost of fixing security problems increases significantly if they are found later in the SDLC.

To aid developers and organizations in the testing process, OWASP provides a comprehensive Web Security Testing Guide. It can be used as a template to build an organization’s own testing program.

8. The shift-left approach is reflected in the updated list

Shift-left has become a commonly used term. It is a philosophy and software development approach that aims to test and take security into account early in the development process. In practice, this often means educating developers and making them more involved in, and responsible for, security design and security testing. This approach is reflected in the new category A04:2021-Insecure Design.

It is a very broad category, consisting of 40 CWEs, covering aspects that should be considered early in the software development process. This includes weaknesses that could be avoided with proper threat modeling, secure design patterns, and proper tooling. A secure design is supported by using a maturity model to assess and improve security throughout the SDLC. OWASP provides the Software Assurance Maturity Model (SAMM); another well-known model is the Building Security In Maturity Model (BSIMM).

9. Look out for the categories that ended up just outside the list

Even though the Top 10 collects many important risks, do not settle for focusing only on these categories; other risks also deserve attention. A reasonable first step is to look at the categories that ended up just outside the list: Code Quality Issues, Denial of Service, and Memory Management Errors. These might make it into the Top 10 in the next edition, but you should not wait until then to understand and mitigate them in your web applications.

10. There are more Top 10 lists

In addition to the Top 10 list referred to here that has a focus on web applications in general, OWASP has also released a few more specific Top 10 lists of security risks that could be worth keeping an eye on. One is the OWASP IoT Top 10 from 2018 that focuses on building, deploying, and managing Internet of Things systems.

Motivated by the increased use of APIs for building applications, OWASP also has an API Security Top 10 list from 2019, devoted to security aspects that are specific to APIs. OWASP also has a Mobile Top 10, ranking risks in mobile applications, with the latest edition being from 2016.
