There are many tools, or categories of tools, that developers need to incorporate into their DevOps environment. As anyone in the AppSec field knows, automation tools such as static application security testing (SAST), dynamic application security testing (DAST), and software composition analysis (SCA) come with their own challenges.
In this post, we discuss three of the biggest challenges with automation tools and what can be done about them.
Challenge 1: The tools produce too many false positives
One of the main challenges with inexpensive SAST, DAST, and SCA scanners is that they produce a lot of false positives. This overwhelms the developer because there appear to be hundreds of issues. Some may be false positives while others may be real, but there’s no easy way to tell. The developer doesn’t have the time or expertise to make sense of the reports and triage them appropriately.
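One common mitigation, whatever tool you use, is to keep a reviewed baseline of known false positives and suppress them automatically on each scan. A minimal sketch in Python, where the `Finding` fields and the baseline format are illustrative assumptions, not any specific tool's output:

```python
# Sketch: triaging scanner findings against a reviewed baseline of
# known false positives. Field names are illustrative assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class Finding:
    rule_id: str   # e.g. "sql-injection"
    file: str
    line: int

def triage(findings, baseline):
    """Split findings into new issues and suppressed known false positives."""
    new, suppressed = [], []
    for f in findings:
        key = (f.rule_id, f.file)  # match on rule + location
        (suppressed if key in baseline else new).append(f)
    return new, suppressed

findings = [
    Finding("sql-injection", "api/users.py", 42),
    Finding("hardcoded-secret", "tests/fixtures.py", 7),
]
# Entries a human has already reviewed and marked as not real issues:
baseline = {("hardcoded-secret", "tests/fixtures.py")}

new, suppressed = triage(findings, baseline)
print(len(new), len(suppressed))  # 1 new issue, 1 suppressed
```

The baseline still requires an initial human review, but it keeps already-triaged noise from reappearing in every scan.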
Buying a commercial tool that produces more meaningful results helps reduce the false positives, however, as we will see, other issues arise.
Challenge 2: The tools don’t have context of your business
Even with tools that produce fewer false positives, there are still challenges. Namely, the tools don’t have context of your business. Since developers often lack security expertise, it’s difficult for them to prioritize all the issues.
The common mistake is to rely on the severity ratings alone. However, that severity rating is a generic rating built into the tool, and it applies to every application regardless of the type of data it handles or the functions it provides. So, on its own, it’s not meaningful.
If you don’t augment your tool with proper support, it’s sort of like throwing a person into the ocean and asking them to swim. You need to help them out in the beginning: make sure they have water wings or a life jacket to keep them afloat. Otherwise, they’re going to sink before they even start learning to swim.
Threat modeling is another big part of it.
A lot of organizations don’t do threat modeling, so when they get reports from their scanners they become overwhelmed: there are quite a few issues to fix and no clear way to prioritize them.
Some of the better tools produce more detailed descriptions or provide links to help developers understand what the issues are. While this makes things easier, developers still have plenty to do: they are busy building software and don’t have much free time.
Challenge 3: Each tool produces its own report
Another big challenge is that each tool produces its own report. Vendors love having their own dashboards because they want you to live in them. That’s important from a business perspective, but developers end up with many reports and dashboards.
You can easily have six or seven reports, from tools such as:
- Static application security testing (SAST)
- Dynamic application security testing (DAST)
- Software composition analysis (SCA)
- Infrastructure-as-code scanning
- Cloud scanning
Often those reports aren’t normalized, so they don’t present the information in the same way. If the boss asks the developer, ‘How many issues do you have, and can I see a summary?’, that’s not an easy question to answer when you have six or seven different reports. The team then has to normalize them, which is a lot of manual work, or simply live with it.
Some larger tool vendors are moving in the direction of offering a central platform that does everything from SAST to DAST. However, the challenge with that is vendor lock-in: it doesn’t work if you want to use a SAST tool from one vendor and a DAST tool from another.
To resolve the first challenge, too many false positives, we recommend using high-quality (often commercial) tools, which produce fewer of them.
To address the second challenge, the tool lacking context of your business and your developers not having the right expertise, we recommend asking questions to put context around the issues. For instance, some good questions to ask are:
- Where is this issue occurring?
- What is the impact to the business?
- Is there a threat scenario that could actually realize this particular vulnerability?
Once you ask some fundamental questions to gather contextual information about the impact of an attack on a given system, you can then perform threat modeling and risk assessment, using the issues reported by the scanner as well as the contextual information, to determine the level of risk:
- High risk – fix immediately
- Medium risk – can delay for 6 to 12 months
- Low risk – fix after addressing the high and medium risks
These risk levels reflect the impact of an exploit on the business as well as the likelihood of it taking place. They can be used to prioritize issues and focus where it matters most, as opposed to using generic severity levels reported by a tool that don’t take your specific context into account.
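This kind of contextual rating can be sketched as a simple impact-times-likelihood matrix. The scales, thresholds, and fix timelines below are illustrative assumptions, not a standard:

```python
# Sketch of context-based risk rating: combine business impact and
# likelihood (from threat modeling) instead of using a tool's generic
# severity. Scales and thresholds are illustrative assumptions.

IMPACT = {"low": 1, "medium": 2, "high": 3}      # impact on the business
LIKELIHOOD = {"low": 1, "medium": 2, "high": 3}  # chance of exploitation

def risk_level(impact: str, likelihood: str) -> str:
    score = IMPACT[impact] * LIKELIHOOD[likelihood]
    if score >= 6:
        return "high"    # fix immediately
    if score >= 3:
        return "medium"  # can delay for 6 to 12 months
    return "low"         # fix after high and medium risks

# A finding a tool rates "high severity" may land anywhere on this
# scale once business context is applied:
print(risk_level("high", "high"))  # high
print(risk_level("low", "high"))   # medium
print(risk_level("low", "low"))    # low
```

The point is that two identical findings from the same scanner can end up with different priorities once the data the application handles and the realistic threat scenarios are factored in.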
Regarding the lack of expertise, one of the things we do with our clients is offer them support. When they embark on the journey of buying a tool, we often support them for the first 12 to 24 months to ensure they are successful. We review the results with them and help them make sense of their reports.
We work as a natural extension of your development team to provide proper training and mentorship of these tools and reports. This takes the onus off the development team who may not have the time or expertise to deal with these challenges.
To address the third challenge, at Forward Security we’re building a platform called Eureka, which aggregates reports across multiple scanners of the same or different kinds. This helps centralize and normalize reports.
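Normalizing findings from different scanners boils down to mapping each tool's report onto a shared schema. A minimal sketch, where the input shapes and field names are illustrative assumptions (not Eureka's actual format or any vendor's real schema):

```python
# Sketch: normalizing findings from two different scanner report shapes
# into one unified schema. All field names are illustrative assumptions.

def normalize_sast(report: dict) -> list[dict]:
    """Map a hypothetical SAST report onto the unified schema."""
    return [
        {"source": "sast", "rule": r["ruleId"],
         "severity": r["level"], "location": r["file"]}
        for r in report["results"]
    ]

def normalize_sca(report: dict) -> list[dict]:
    """Map a hypothetical SCA report onto the unified schema."""
    return [
        {"source": "sca", "rule": v["cve"],
         "severity": v["severity"], "location": v["package"]}
        for v in report["vulnerabilities"]
    ]

sast = {"results": [
    {"ruleId": "xss", "level": "high", "file": "app.py"},
]}
sca = {"vulnerabilities": [
    {"cve": "CVE-2021-44228", "severity": "critical",
     "package": "log4j-core"},
]}

unified = normalize_sast(sast) + normalize_sca(sca)
print(len(unified))  # 2 findings in one list, one summary for the boss
```

With every finding in one shape, the ‘how many issues do you have?’ question becomes a one-line query instead of a manual merge of six or seven reports.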