How to evaluate and choose an SCA tool

by Debricked Editorial Team
2021-07-12
8 min

Software Composition Analysis is an essential solution for tracking open source components, dependencies and licenses as part of managing risk in software development. With a multitude of vendors offering different capabilities and features, the choice can be confusing. Which SCA tool is the best for your business, and how do you choose?

Since there is no general standard for evaluating SCA tools, the task of choosing the right one for your business can seem quite overwhelming. The tools come in many forms and offer a wide variety of capabilities. There is no single best tool to implement; it all comes down to making a weighted choice based on your specific priorities.

As a key step towards a common standard, Ibrahim Haddad of the Linux Foundation published a paper on the topic, proposing a framework of standardized metrics to simplify the evaluation process. The metrics are not set in stone; they continue to evolve as the tools and the industry do.

“As a consumer of these tools, I always felt the pain of going through these different demos and trying to distil information that will help guide my decision on what tool to use and for what purpose.”

– Ibrahim Haddad

Haddad has since updated the framework with cumulative feedback from practitioners and documented the process. In the publication, he suggests a number of metrics to help in the evaluation of SCA tools. Debricked’s own Emil Wåreus has also made a contribution, which we will come back to later in this post.

How do you choose one?

To get the most out of an SCA tool, it is vital to first establish which features matter most given your specific needs, environment, and requirements. Once that is done, you should test and evaluate each tool’s features benchmarked against your prioritized metrics. Features can vary in maturity, deployment model, and other capabilities. There is no single best option for every business, so let’s look into the criteria that will help you choose the best solution.

Over the next few paragraphs, we will outline the most important metrics to consider when evaluating a Software Composition Analysis solution. The presented metrics are an overview of some of the key points in the Linux Foundation’s guide, together with Debricked’s contributions and insights provided by Debricked CEO Daniel Wisenhoff.

Evaluating step 1: knowledge base

This metric refers to the number of open source components tracked, as well as the kinds of repositories and ecosystems covered. It also reflects which source languages are supported and how often the knowledge base is updated. You need to make sure that the tool you are using has a good knowledge base. Free tools are often limited due to the lack of commercial incentive to keep the data up to date, and high-frequency updates are necessary to keep up with fast-paced OSS development. You also want to make sure the knowledge base covers all the languages your business mainly uses. Ideally, you should therefore test several solutions benchmarked on the same code to figure out which one provides the best coverage for your application. Debricked’s Vulnerability Database is free and open for anyone to use, so take a look!
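To make such a benchmark concrete, here is a minimal sketch of comparing two tools' coverage on the same codebase. The report file names and their JSON shape are assumptions for illustration; real tools typically export SPDX or CycloneDX, which you would parse instead.

```python
# Minimal sketch: compare the knowledge-base coverage of two SCA tools
# run on the same codebase. The report files and their format are
# hypothetical stand-ins for whatever your candidate tools export.
import json

def load_components(report_path: str) -> set[str]:
    """Read a tool's report and return the set of detected components."""
    with open(report_path) as f:
        report = json.load(f)
    return {f"{c['name']}@{c['version']}" for c in report["components"]}

tool_a = load_components("tool_a_report.json")
tool_b = load_components("tool_b_report.json")

print(f"Tool A found {len(tool_a)} components, Tool B found {len(tool_b)}")
print(f"Found by both: {len(tool_a & tool_b)}")
print(f"Only Tool A:   {sorted(tool_a - tool_b)}")
print(f"Only Tool B:   {sorted(tool_b - tool_a)}")
```

The components each tool misses on your own code are usually a better signal than any headline database size.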

Evaluating step 2: detection capabilities

This metric covers how vulnerability detection works in the scanners being used: the detection methodology, the options for verifying results (e.g. is prioritization offered?), and the ability to identify code automatically without manually directing the tool. Good automatic identification means false positives do not all have to be evaluated by hand, which is otherwise very troublesome work. There are a couple of main ways in which SCA tools operate. The first is component-level analysis, which only looks at the components declared in your software, for example the dependencies stated in lockfiles such as package-lock.json. This use case is often fine for 90% of all applications.
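At its simplest, component-level analysis boils down to reading the declared dependencies out of a lockfile and looking each one up in a knowledge base. A minimal sketch for npm's package-lock.json, handling both the older nested and the newer flat layout:

```python
# Minimal sketch of component-level analysis: enumerate the dependencies
# declared in an npm lockfile (package-lock.json). A real SCA tool would
# then look each name@version pair up in its vulnerability knowledge base.
import json

def declared_dependencies(lockfile_path: str) -> set[str]:
    with open(lockfile_path) as f:
        lock = json.load(f)

    found: set[str] = set()

    # Lockfile v2/v3: a flat "packages" map keyed by install path.
    for path, meta in lock.get("packages", {}).items():
        if path:  # the empty key "" is the root project itself
            name = path.split("node_modules/")[-1]
            found.add(f"{name}@{meta.get('version', '?')}")

    # Lockfile v1: a nested "dependencies" tree.
    def walk(deps: dict) -> None:
        for name, meta in deps.items():
            found.add(f"{name}@{meta.get('version', '?')}")
            walk(meta.get("dependencies", {}))
    walk(lock.get("dependencies", {}))

    return found

for dep in sorted(declared_dependencies("package-lock.json")):
    print(dep)
```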

However, in some situations (for example M&A, fundraising, certain industries, and certain customer relationships) you want to be absolutely sure what OSS you are actually using. For this reason there is snippet-level analysis, which can find partial code copied from any known OSS project and correctly identify and classify it. A third option involves binary scanners, which try to identify open source components in already compiled files. This is more challenging and error-prone, yet the technique is often used when software is delivered to you and you want to check whether it complies with your OSS policies.
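Snippet-level analysis is usually built on some form of code fingerprinting. The following is a simplified illustration of the idea, not any vendor's actual algorithm: normalize the source, hash fixed-size windows of lines, and look the hashes up in an index built from known open source code.

```python
# Simplified illustration of snippet-level matching (not any vendor's
# actual algorithm): hash normalized windows of lines and look them up
# in an index of fingerprints from known open source projects.
import hashlib

WINDOW = 5  # number of consecutive lines per fingerprint

def fingerprints(source: str) -> set[str]:
    # Normalize: strip whitespace and drop blank lines so trivial
    # formatting changes do not defeat the match.
    lines = [ln.strip() for ln in source.splitlines() if ln.strip()]
    return {
        hashlib.sha256("\n".join(lines[i:i + WINDOW]).encode()).hexdigest()
        for i in range(max(len(lines) - WINDOW + 1, 0))
    }

def index_project(name: str, source: str, index: dict[str, str]) -> None:
    """Register a known OSS project's fingerprints in the index."""
    for fp in fingerprints(source):
        index[fp] = name

def match_snippets(source: str, index: dict[str, str]) -> set[str]:
    """Return the known OSS projects that share a code window with `source`."""
    return {index[fp] for fp in fingerprints(source) if fp in index}
```

Real tools use far more robust fingerprints (token-based, winnowed, and so on), but the lookup principle is the same.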

Evaluating step 3: ease of use & reporting

If the whole engineering team can use the tool, security and compliance issues can be avoided before they even arise. A tool that is easy to use also flattens the learning curve, cutting down the need for staff training, and a tool that is enjoyable to use increases the chance of your engineers actually spending time in it. Therefore, you want to make sure the tool offers a good user experience for every type of user. Managers, for instance, want to measure progress, so proper reporting capabilities such as dashboards are important. At the same time, be aware that the user interface is most likely only going to be used by managers; the tool must also support UI-less, fully integrated workflows that fit the way developers natively work.

Evaluating step 4: operational capabilities

The Linux Foundation classifies operational capabilities as, for instance, support for different CI/CD systems, support for different programming languages, support for different auditing models, and the ability to use the tool for M&A activities – some of which have been mentioned above. One thing that has not yet been mentioned is the speed of the source code scans. Any integration must not slow down your operations more than necessary – a scan should not take more than a few minutes.
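One way to keep that promise honest is to treat scan time as a budget enforced in the pipeline itself. A hedged sketch follows; the `sca-scan` command is a placeholder for whatever CLI your tool actually ships.

```python
# Sketch: enforce a scan-time budget in CI. "sca-scan" is a placeholder
# for your tool's real CLI; the timeout fails the build if the scan
# takes longer than the agreed budget.
import subprocess
import sys
import time

BUDGET_SECONDS = 300  # "a few minutes", per the guideline above

start = time.monotonic()
try:
    result = subprocess.run(["sca-scan", "--path", "."], timeout=BUDGET_SECONDS)
except subprocess.TimeoutExpired:
    print(f"SCA scan exceeded the {BUDGET_SECONDS}s budget; failing the build.")
    sys.exit(1)

print(f"Scan finished in {time.monotonic() - start:.0f}s "
      f"with exit code {result.returncode}")
sys.exit(result.returncode)
```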

Evaluating step 5: integration capabilities

Organizations often want to integrate the SCA tool with their existing development environment, including their business’s compliance policies. It is crucial that CI/CD systems are supported, but you must not forget the UI-less, developer-native workflow, which requires a CLI tool. The tool should ideally also have a rich API so that more demanding use cases – such as custom reporting and data flows in large enterprises – can be supported.
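What a rich API buys you in practice is the ability to pull scan results into your own reporting and data flows. A minimal sketch against a hypothetical REST endpoint; the URL, token handling, and response shape are all assumptions, not any specific vendor's API.

```python
# Sketch of pulling vulnerability data out of an SCA tool's REST API
# into your own reporting pipeline. The endpoint and response shape
# are hypothetical; consult your vendor's API documentation.
import csv
import json
import os
import urllib.request

API_BASE = "https://sca.example.com/api/v1"  # hypothetical
TOKEN = os.environ["SCA_API_TOKEN"]

req = urllib.request.Request(
    f"{API_BASE}/projects/my-service/vulnerabilities",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
with urllib.request.urlopen(req) as resp:
    vulns = json.load(resp)

# Re-shape the findings into a CSV for the compliance team's reporting.
with open("vuln_report.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["cve", "dependency", "severity"])
    for v in vulns:
        writer.writerow([v["cve"], v["dependency"], v["severity"]])
```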

Evaluating step 6: updated database

As previously mentioned under the knowledge-base metric, the size of the database is very important. The Linux Foundation has, however, separated the vulnerability database into a dedicated topic because of its complexity. Beyond the previously mentioned aspects such as data sources, you should consider how frequently the database is updated. This is especially important if you run the tool in CI/CD pipelines with several checks daily. Debricked has contributed to the original Linux Foundation guide that this blog post is based on; the contribution concerns precision and recall.

In essence, you need to check the tool’s ability to, firstly, correctly map the dependencies that are actually used, with no noise or false positives; secondly, to see whether the dependency’s vulnerable code is actually being called; thirdly, whether that vulnerable code is executed at run-time; and lastly, whether this means it can be exploited in real life (check out our friends at Detectify, who have a great solution addressing this issue). Besides precision (how much of what the tool reports is actually true), recall (how many of all true vulnerabilities are found) should also be considered, because there is a trade-off between precision and recall, as touched upon in the interview section below.
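In concrete terms, precision and recall fall straight out of the true/false positive counts. A small worked example with made-up numbers:

```python
# Worked example of precision vs. recall, with made-up numbers.
# Suppose a scan reports 50 vulnerabilities; manual review shows 40 are
# real (true positives, TP) and 10 are noise (false positives, FP),
# while 20 real vulnerabilities were missed entirely (false negatives, FN).
TP, FP, FN = 40, 10, 20

precision = TP / (TP + FP)  # how much of what was reported is real
recall = TP / (TP + FN)     # how much of what is real was reported

print(f"precision = {precision:.2f}")  # 0.80
print(f"recall    = {recall:.2f}")     # 0.67
# Tightening the matcher to cut the 10 false positives will typically
# also drop some true positives: higher precision, lower recall.
```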

Evaluating step 7: support for deployment models

SCA tools come in many hosting variants. For most of them, you can choose between on-site/on-premise, cloud-only, or a hybrid of both. Traditionally, on-site deployment has been favored because it gives higher control over what happens with your information. Today, however, most vendors and customers prefer the cloud variant, which naturally decreases the customer’s infrastructure costs.

Evaluating step 8: associated costs

These costs include infrastructure, operational, licensing, initial integration, lock-in, engineering customization, and exporting costs. There are several factors to investigate with regard to the total cost of running an SCA tool. Some are directly related to the operation and hosting of the software (direct costs); the rest are associated with the labor of analyzing and reacting to the results the tool generates (indirect costs).

Direct costs

The first direct cost is of course the price of the tool and the associated fees. Watch out for how the licensing model is set up so that you pay for the right amount of usage: some tools bill based on the number of developers, while others bill based on the amount of scanned code, so make sure you are billed fairly. The second is the integration/pilot cost: when buying enterprise software there can be a significant cost associated with the initial installation of the tool. Some vendors do this for free, while others charge substantial fees depending on the extra services performed, such as education, training, strategy consultation, etc. Lastly, depending on the hosting solution you have selected (cloud or on-premise), you may need to factor in a quite substantial infrastructure cost, due to the large amount of data being processed.

Indirect costs

After the initial investment, the largest cost is perhaps the labor of reacting to and solving the issues the tool generates. To illustrate, solving one simple vulnerability could amount to one hour of developer work, at a cost of approximately $200. In a very large codebase, it is entirely possible that SCA tools will generate hundreds of potential vulnerabilities that have to be examined, which would put the annual labor cost in the range of tens to hundreds of thousands of dollars. All these metrics may seem like a lot, and to be fair, choosing and evaluating an SCA tool is, and will probably continue to be, a time-consuming task.
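To make that back-of-the-envelope arithmetic explicit, using the paragraph's own figures as rough assumptions:

```python
# Rough back-of-the-envelope, using the figures from the paragraph above.
hours_per_finding = 1
cost_per_hour = 200  # USD, approximate developer cost

for findings_per_year in (100, 500, 1000):
    annual_cost = findings_per_year * hours_per_finding * cost_per_hour
    print(f"{findings_per_year:>5} findings/year -> ${annual_cost:,}/year")
# 100 -> $20,000; 500 -> $100,000; 1000 -> $200,000
```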

“I would encourage you to create your own evaluation criteria based on requirements that you most care about. Then proceed with the evaluation which will include rating the tools with respect to metrics set within each of these categories.”

– Ibrahim Haddad

Head of Data Science’s view on key metrics

Having presented the overview of the key metrics to consider when choosing and comparing an SCA tool, let’s dive into more detail on the contributions made by Emil Wåreus, head of Data Science at Debricked. Emil has a keen understanding of the current shifts and values in the industry, and as a driven data scientist he is committed to leveraging the power of data to enhance performance and the competitive edge of the business. We asked him some questions about the key aspects of comparing SCA metrics and his contributions to the Linux Foundation’s SCA evaluation guide.

What, in your opinion, is most important to keep in mind when comparing the metrics?

It is vital to always start from a customer perspective, even when working with data. The usage of our tool goes down a lot when we present too many false positives! This is something customers particularly care about and vocalise during customer interviews. That is how the journey of analyzing the precision and recall of our system started.

How did you choose the metrics you contributed with?

We took a bottom-up approach to choosing and evaluating the metrics. Right now the industry is still working a lot on the first layer of precision – finding the correct dependencies and correctly building the dependency tree out of your dependency files. We noticed that in some languages we had issues with this first level of precision: we might mismatch different dependencies, for instance based on their names.

This had a lot to do with how CPEs are formulated: they are not very specific, and different identifiers can look alike (e.g. package names, repositories). Thus, there is sometimes a mismatch between CPE matching and package matching. The metrics themselves – including the different levels of precision – are defined both in terms of customer value and the difficulty of solving that precision problem.
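To illustrate the kind of mismatch Emil describes, here is a toy example (the entries are invented) of why naive name-based matching between CPE identifiers and packages produces false positives:

```python
# Toy illustration (invented entries) of why naive name matching between
# CPE identifiers and package names causes false positives: unrelated
# products can share the same product name.
# CPE 2.3 format: cpe:2.3:part:vendor:product:version:...
cpes = [
    "cpe:2.3:a:python-requests:requests:2.2.0:*:*:*:*:*:*:*",
    "cpe:2.3:a:some_other_vendor:requests:1.0:*:*:*:*:*:*:*",
]

package = {"ecosystem": "pypi", "name": "requests", "version": "2.2.0"}

for cpe in cpes:
    vendor, product, version = cpe.split(":")[3:6]
    # Naive: match on product name alone -> flags both CPEs, although
    # only the first vendor corresponds to this PyPI package.
    if product == package["name"]:
        print(f"matched {vendor}:{product}:{version} (may be a false positive)")
```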

Is there an easy way to compare different competitors?

The Linux Foundation is working on benchmark repositories where you can integrate different providers and test them. However, that is currently only available for static analysis. Now is the time to push for an industry benchmark where we can compare different providers. Debricked has its own internal standard that we can present in closed sales meetings, and we intend to publish it in open blog posts when it is more finalized.

Understand the key metrics when evaluating SCA Tools

Open source compliance and security are continuously challenged by the troublesome process of assessing and comparing the SCA tools available on the market, all while the industry undergoes dynamic change. Understanding the key metrics can help fill the gap left by the lack of a standard for evaluating source code scanning. You want a low noise rate, and you should remember to benchmark different tools and vendors on the same source code and compare the results. Picking one can be a daunting task, and it should not be underestimated.

However, we hope that this article can help shine some light on what to look for when evaluating an SCA tool and, just maybe, make it a tad easier to choose. You can also check out the top five SCA questions to learn more about how the analysis works.
