School taught us that making mistakes is bad. Teachers literally punished us for making mistakes on homework, quizzes, and tests.
But that’s not the real world. Mistakes happen, and that’s OK. The very nature of software development is that mistakes will be made and vulnerabilities will be introduced. Again, that’s OK—as long as you do something about it.
What’s not OK is failing to fix mistakes. What’s not OK is failing to learn from mistakes. What’s not OK is continuing to make the same mistakes over and over.
A security consultant’s job is not done until the client gets better. For that reason, I’m fond of having meetings at the conclusion of security assessments to review the reports and ensure that our customers understand the issues. I want them to take action.
Remediations fix your vulnerabilities. They come either as resolutions (which completely fix the flaw) or as mitigations (which minimize exploitability or reduce severity). Either way, there are three things you’ll want to do:

1. Determine how severe each vulnerability is, so you know what to fix first.
2. Apply the fixes.
3. Verify that the fixes actually work through remediation testing.
Once you’ve done each of these steps, you get a better, more secure system. You get a sales benefit, too: you can prove it. While your competitors struggle to deliver the security that your customers demand, you now have a competitive advantage.
Here’s how to fix your vulnerabilities:
Once you’ve found your issues, you need to understand how severe they are. Vulnerabilities are not all the same; some are catastrophic, whereas others are not. Severity is a combination of exposure and impact.
Vulnerability severity balances many factors, including attacker skill, motivation, access, and resources. It accounts for the complexity of both the system and the attack. It considers how easy the vulnerability would be to exploit and how catastrophic the outcome would be if that happened. It helps you figure out what to remediate first.
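To make the exposure-and-impact idea concrete, here’s a minimal sketch in Python. It’s a hypothetical, simplified model: the categories and weights are assumptions for illustration, not how your security partner will actually score findings (most use a richer framework such as CVSS).

```python
# A minimal, hypothetical severity model: score = exposure x impact.
# The categories and weights below are illustrative assumptions.

EXPOSURE = {"internet-facing": 3, "authenticated": 2, "internal-only": 1}
IMPACT = {"full compromise": 3, "data exposure": 2, "nuisance": 1}

def severity_score(exposure: str, impact: str) -> int:
    """Combine how reachable a flaw is with how bad exploitation would be."""
    return EXPOSURE[exposure] * IMPACT[impact]

def severity_label(score: int) -> str:
    """Map the numeric score onto the categories used later in this section."""
    if score >= 7:
        return "critical"
    if score >= 5:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

print(severity_label(severity_score("internet-facing", "data exposure")))  # high
```

The arithmetic isn’t the point; the point is that the same flaw rates higher when it’s easier to reach and worse when exploitation hurts more.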
This phase of the effort is done collaboratively between your in-house teams and your security partner. Usually, your security partner assigns the severity rating, and you work with them to adjust it if needed. For example, if there’s a mitigating factor in your business that you forgot to mention and that would change how severe an issue is, this is when it comes up.
Grading severity is an imperfect science. It’s highly dependent on your specific situation, and security professionals may vary slightly in how they define or measure it. No matter what, though, severity ratings should be customized to the system evaluated. (However, note that automated tools usually don’t customize severity ratings at all.)
Irrespective of how severity is determined, vulnerabilities typically fall into four categories: critical, high, medium, and low (we also use a fifth category for informational issues).
Use severity ratings to prioritize your remediation efforts: fix the most dangerous vulnerabilities first, then plan how to address the rest. That’s how you fit the security workload in alongside your other development priorities.
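As a sketch of what that prioritization can look like in practice (the findings and identifiers below are made up), a triage list simply orders the report’s findings by category so the most severe land at the top of the backlog:

```python
# Hypothetical triage sketch: order findings so the most severe are handled first.
SEVERITY_ORDER = {"critical": 0, "high": 1, "medium": 2, "low": 3, "informational": 4}

findings = [
    {"id": "VULN-7", "title": "Verbose error messages", "severity": "low"},
    {"id": "VULN-2", "title": "SQL injection in search", "severity": "critical"},
    {"id": "VULN-5", "title": "Weak login throttling", "severity": "medium"},
]

for finding in sorted(findings, key=lambda f: SEVERITY_ORDER[f["severity"]]):
    print(f'{finding["severity"]:>13}  {finding["id"]}  {finding["title"]}')
```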
Next, you need to actually fix the vulnerabilities. If you don’t, you waste the time, money, and effort you invested in finding them. This phase of effort is usually done in-house by your developers. (In reality, you could have your security partner do this if they have software development capabilities, but you’d be paying consulting rates to do work that you already have in-house capabilities for. So it doesn’t usually make sense to outsource this part, unless your business model already includes outsourcing development, too.)
You’d be surprised how often people skip this and literally don’t fix their vulnerabilities. It may be hard, take time, and divert attention, but it needs to be done. Otherwise, what was the point?
As far as how to do the remediations, that advice is pretty straightforward: follow the guidance outlined in your security assessment report! Assuming you got the right partner and the right kind of testing, this part is as simple as it gets. The instructions are literally right there for you in the report deliverable, and your security partner can guide you if you get confused. You’ve already prioritized the vulnerabilities by severity; now you just need to work through remediating them.
Once you’ve fixed your issues, you need to ensure the remediations work. This is an effort known as remediation testing (or sometimes casually referred to as mitigation testing). This phase of effort is performed by your security partner, who checks the work your developers did to fix the issues.
Remediation testing confirms a few things: that each fix actually works, that it fully addresses the underlying vulnerability rather than just masking a symptom, and that it doesn’t introduce new issues in the process.
Remediation testing delivers the ultimate payoff for all your hard work: an updated report that shows vulnerabilities as resolved. This is valuable because it confirms—in writing—which issues are fixed. That gives you a super-powerful tool for your sales process because it shows your customers two things. First, that you go deep enough to find important vulnerabilities. Second, that you fix them. They’ll absolutely love this.
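In-house teams often complement their partner’s remediation testing by turning each finding into an automated regression test, so a fixed flaw can’t quietly return in a later release. Here’s a minimal sketch (the endpoint, identifier, and payload are hypothetical):

```python
# Hypothetical regression test for a previously reported finding ("VULN-2").
# Assumes a test instance of the application is running at BASE_URL.
import requests

BASE_URL = "http://localhost:8000"

def test_search_rejects_sql_injection():
    """The search endpoint should treat input as data, never as SQL."""
    payload = "' OR '1'='1"
    response = requests.get(f"{BASE_URL}/search", params={"q": payload}, timeout=5)

    # A remediated endpoint returns a normal, empty result set rather than a
    # database error or the contents of the entire table.
    assert response.status_code == 200
    assert "syntax error" not in response.text.lower()
    assert response.json().get("results") == []
```

Run tests like this on every build so the issue your partner marked as resolved stays resolved.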
As you think about fixing your vulnerabilities, it’s important to understand their nature. Where they came from impacts what you need to do to fix them.
Your vulnerabilities result either from how you designed the system or from how you implemented that design.
Implementation flaws occur when the system works differently than you intended. For example, you designed an authentication model that allows access for some users and prevents access for everyone else. A vulnerability like cross-site scripting (XSS) enables an attacker to bypass that protection. You obviously didn’t mean for it to work that way, but nevertheless, it did. Issues like this happen when the design is fine, but you just made a mistake in how you executed it. Fixing these issues means correcting those mistakes.
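For instance, a common implementation-level fix for a reflected XSS flaw is to encode user-supplied data before placing it in HTML. A minimal sketch (the greeting function is hypothetical; in a real application you’d lean on your template engine’s auto-escaping):

```python
# Hypothetical fix for reflected XSS: escape user-supplied input before
# embedding it in HTML, so script tags render as inert text.
from html import escape

def render_greeting(username: str) -> str:
    """Build a greeting fragment with the user's name safely encoded."""
    return f"<p>Welcome back, {escape(username)}!</p>"

# The attacker's payload is neutralized instead of executing in the browser.
print(render_greeting("<script>document.location='https://evil.example'</script>"))
```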
By contrast, design flaws are issues with the design itself. They happen when the system works exactly as intended, and yet the attacker can use that intended functionality to exploit the system anyway. For example, you might implement rate limiting to lock an account that receives too many failed login attempts. However, if it’s poorly designed, an attacker could intentionally trigger it across every user’s account, locking everyone out and making the system unusable. Fixing design-level issues requires you to adjust the design itself. Depending on the issue, that could be a tremendous undertaking.
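One way to redesign that control (a hypothetical sketch, and certainly not the only option) is to apply a temporary, per-source backoff instead of hard-locking the targeted account, so an attacker’s failed logins can’t lock out the legitimate user:

```python
# Hypothetical design alternative: temporary, per-source backoff instead of a
# hard account lockout, so failed logins against a victim's account cannot be
# weaponized to lock the victim out.
import time
from collections import defaultdict

WINDOW_SECONDS = 300   # only count failures from the last five minutes
MAX_FAILURES = 5       # tolerate a handful of mistakes per source

# Failed-attempt timestamps keyed by (source address, account).
_failures = defaultdict(list)

def allow_login_attempt(source_ip: str, account: str) -> bool:
    """Permit the attempt unless this source has recently failed too often."""
    now = time.time()
    key = (source_ip, account)
    _failures[key] = [t for t in _failures[key] if now - t < WINDOW_SECONDS]
    return len(_failures[key]) < MAX_FAILURES

def record_failed_attempt(source_ip: str, account: str) -> None:
    """Record a failure; legitimate users logging in elsewhere are unaffected."""
    _failures[(source_ip, account)].append(time.time())
```

The design choice is that the penalty attaches to the abusive source rather than to the victim’s account.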