How to Hack Your Own System

Aug 20, 2021 12:02:00 PM / by Ted Harrington


“Think bad thoughts and ask hard questions.” 

This is the wise advice of one of our security analysts. By “bad thoughts,” he means that you need to think like an attacker. By “hard questions,” he means that you need to identify the assumptions made by the developer and then undermine those assumptions. 

“Where the two meet,” he told me, “is where you find security vulnerabilities that matter.” 

 

Too many companies approach security without either. You must do both. Here’s how.

Who Does the Hacking?

All of the methods mentioned in this blog post are executed by your external security partner. This is pretty standard practice, but for the sake of clarity, let me explain why. 

  • You want an unbiased, objective view. They didn't build your code, so they have no attachment to it.
  • You want to capitalize on subject matter expertise you probably don't have in-house (and even if you do have ethical hackers in-house, which few companies do, you can augment their capabilities with the breadth and depth of an external, multidimensional team).
  • You want someone to expose your blind spots, which, by definition, you are blind to.
  • You want someone to find flaws and help you get better.
  • Your customers can trust your security claims because they come from an objective, unbiased authority stating facts, as opposed to you stating opinions.

All that said, nothing prevents your in-house personnel from doing as many of these elements as they’re capable of, too. In fact, if they can, they should! More security is better than less security. Always.

Just note that any testing done by in-house teams would be in addition to the testing done by your external partner. In-house personnel wouldn’t be replacing the need for an external partner nor even reducing the scope of what your partner does (because that would make them less effective, and it would undermine the independence they deliver). The real value in this method is going to come from external experts. 

If you have assets worth protecting, make absolutely sure you're getting all of these techniques from your security partner. If you're not, find out why: if they can't do these things, find a new partner; if they can but aren't doing them, ask why not. Your testing absolutely needs to entail all of what you're about to learn.

 

Analyze Design

 

To understand how to break the system, your partner first needs to understand how it’s supposed to work. They should learn the fundamentals of the app: the features, how users navigate through it, how access is provisioned, and where users can input values. They need to understand why it exists, what business problems it solves, and what it protects. 

However—and quite disconcertingly—many security-testing services don't care how the app works. The cheap, commodity scans that flood your Google searches all skip this step. It doesn't matter to a scanner what the system is used for or how it works. But to do the important steps that come later, you can't skip this one. Your partner needs to know how the app works so they can figure out how to abuse it. This is why we hire so many computer scientists: they know how to build things, which helps them understand where things might break.

Another crucial element of this stage is evaluating for design flaws, which are vulnerabilities inherent in the way you designed the system. A design vulnerability is when the system works exactly how it’s supposed to and yet enables an attacker to exploit the system anyway.
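
For example, consider a checkout flow where the client calculates the order total and the server simply charges whatever amount it receives. Every component works exactly as designed, yet an attacker can pay a penny for anything. Here is a minimal sketch of that kind of design flaw; the endpoint, field names, and framework choice are hypothetical, purely for illustration:

    # A hypothetical checkout endpoint that works exactly as designed,
    # yet is exploitable: the server trusts the total computed by the client.
    from flask import Flask, request, jsonify

    app = Flask(__name__)

    def charge_card(token, amount):
        # Placeholder for a real payment integration.
        print(f"Charging {amount} to card {token}")

    @app.route("/checkout", methods=["POST"])
    def checkout():
        order = request.get_json()
        # Design flaw: "total" arrives from the client and is charged as-is.
        # Nothing is "broken" in the code; the design itself is the vulnerability.
        charge_amount = order["total"]                    # attacker-controlled value
        charge_card(order["card_token"], charge_amount)
        return jsonify({"status": "charged", "amount": charge_amount})

    if __name__ == "__main__":
        app.run()

The fix is a design change, not a patch: the server recomputes the total from its own price list and ignores any client-supplied amount.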

Run Automated Scanner

Scans are efficient and inexpensive and provide you with information that helps in later assessment stages. They quickly reveal the obvious issues that would require enormous effort to do manually. Most attackers run scans first, so it’s a good idea for you to do this, too. You want to see what they’ll see. Just remember that scanning is not a comprehensive effort to find your security vulnerabilities. It’s just one piece of the overall puzzle. 
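
What "running a scan" looks like varies by tool, but the workflow is usually the same: point the scanner at a target you're authorized to test, collect machine-readable output, and feed the findings into the later, manual stages. Here's a rough sketch of that workflow, assuming nmap is installed and that scanme.example is a host you own (both are assumptions for illustration):

    # Rough sketch of wrapping an automated scan so its findings feed later stages.
    # Assumes nmap is installed and that you are authorized to scan the target.
    import subprocess
    import xml.etree.ElementTree as ET

    TARGET = "scanme.example"  # hypothetical host you own

    def run_version_scan(target: str) -> str:
        # -sV: probe open ports for service/version info; -oX -: XML output to stdout
        result = subprocess.run(
            ["nmap", "-sV", "--top-ports", "50", "-oX", "-", target],
            capture_output=True, text=True, check=True,
        )
        return result.stdout

    def summarize(xml_output: str):
        root = ET.fromstring(xml_output)
        for port in root.iter("port"):
            state = port.find("state")
            service = port.find("service")
            if state is not None and state.get("state") == "open":
                name = service.get("name", "unknown") if service is not None else "unknown"
                print(f"{port.get('portid')}/{port.get('protocol')}: {name}")

    if __name__ == "__main__":
        summarize(run_version_scan(TARGET))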

Look for Known Vulnerabilities

Many apps suffer the same mistakes. Yours probably does, too. Your attackers know this. They’re just like you and me; they want the best results for the effort they invest, so the logical place to start is by looking where most people make mistakes. They seek these out as a shortcut to their success. To defend successfully, your testing must check for common issues. Examples include Cross-Site Scripting (XSS), which enables attackers to inject malicious scripts into web pages viewed by other users; Cross-Site Request Forgery (CSRF), where a third-party web page can trick a user’s browser into sending unauthorized commands to a web application; Broken Authentication, which is a failure to verify user identity; and Broken Access Control, which is a failure to enforce user permissions.
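
Much of this checking can be scripted. As one illustration, here's a minimal sketch of probing a single parameter for reflected XSS: send a unique canary payload and see whether it comes back unescaped in the HTML. The URL and parameter name are hypothetical, and a real test suite would cover far more (stored XSS, CSRF token handling, authentication and access-control checks):

    # Minimal reflected-XSS probe: does a canary payload come back unescaped?
    # The target URL and parameter are hypothetical; test only systems you're authorized to test.
    import uuid
    import requests

    TARGET = "https://app.example/search"   # hypothetical endpoint
    PARAM = "q"                             # hypothetical query parameter

    def probe_reflected_xss(url: str, param: str) -> bool:
        canary = f"xss{uuid.uuid4().hex[:8]}"
        payload = f"<script>{canary}</script>"
        response = requests.get(url, params={param: payload}, timeout=10)
        # If the payload is reflected verbatim (not HTML-encoded), the page is likely vulnerable.
        return payload in response.text

    if __name__ == "__main__":
        if probe_reflected_xss(TARGET, PARAM):
            print("Payload reflected unescaped: likely XSS")
        else:
            print("No unescaped reflection found (not proof of safety)")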

The combination of these first three ideas makes up the fundamentals of your security-testing program. All of it is performed by your security partner, and your in-house team can help with some of it, too.

However, you’re not done yet. The valuable part is about to begin. 

There’s a Capability Gap

There’s a dramatic capabilities gap that separates the fundamentals from the advanced tactics. The testing we’ve discussed so far requires minimal to moderate skill and experience and can be performed with heavy emphasis on automated tools. But what comes next—the stuff that really matters—requires high skill, deep experience, and a manual emphasis. It’s incredibly difficult to do the things on the other side of this gap. You can’t automate them; there’s no tool for it. You need someone with the kind of deep expertise in manual assessment that you already learned is in short supply. 

To achieve your security mission, you must live on the other side of that capabilities gap. Reject the hype: you can’t do that with tools alone. It takes time, effort, and money to do things manually with deep subject matter expertise—but it’s the only way to find your most critical security vulnerabilities.

Let’s jump across the capability gap now.

Abuse Functionality 

Hacking is making something behave differently from how it was intended to behave. A powerful technique for doing that is abusing functionality: using an application's own features in an attack.

Bad assumptions are commonly made about what users will (or won’t) do. Over the years, I’ve heard some head-scratchers such as, “Oh, the user will never do it that way,” or “This will always be safe,” and my all-time favorite, “No one would think of that” (which is hilarious, because it’s said in response to us literally thinking of exactly “that” and asking about it). These are absurd. Yet people say them. All. The. Time.

The reason for this is a simple and human one: most people don’t think like attackers. They see the good in the world and how things are supposed to be, not the bad in the world and how to break things. If that sounds like you, too, that’s OK! That’s why you work with external security experts who do think those bad thoughts every day. Nevertheless, assumptions about user behavior are the core of your security model. Bad assumptions severely weaken it. To be secure, you must identify those assumptions. You must understand how they’ll be undermined. Abusing functionality is about asking “What if?” in order to turn assumptions upside down. Some examples of good “what-if” questions include:

  • The username field is expecting up to twenty characters; what if I input two thousand?
  • The input field is expecting alphanumeric characters; what if I input a command?
  • The web app is forcing me to log in; what if I manually point the URL to a different page in the web app?
  • The input field is expecting data; what if I input no data and click the button anyway?
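
Each of those what-ifs translates directly into a test. Here's a rough sketch of probing a few of them against a hypothetical endpoint (the URL, field names, and expected behaviors are assumptions for illustration):

    # Turning "what if?" questions into concrete probes.
    # All endpoints and field names are hypothetical; test only with authorization.
    import requests

    BASE = "https://app.example"  # hypothetical application

    def what_if_probes(session: requests.Session):
        # What if I send 2,000 characters to a field that expects 20?
        r = session.post(f"{BASE}/register", data={"username": "A" * 2000})
        print("oversized username:", r.status_code)

        # What if I send command-like input where alphanumerics are expected?
        r = session.post(f"{BASE}/register", data={"username": "alice; cat /etc/passwd"})
        print("command-like input:", r.status_code)

        # What if I browse straight to an internal page without logging in?
        r = session.get(f"{BASE}/admin/reports", allow_redirects=False)
        print("direct URL, no login:", r.status_code)

        # What if I submit the form with no data at all?
        r = session.post(f"{BASE}/register", data={})
        print("empty submission:", r.status_code)

    if __name__ == "__main__":
        what_if_probes(requests.Session())

The interesting part isn't the status codes themselves; it's investigating any response that contradicts the developer's assumption: a stack trace, a 200 where a 401 belonged, a silently truncated value.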

Chain Exploits

Exploit chaining is combining two or more vulnerabilities in order to multiply impact.

In a recent security assessment of an application that manages delivery of maintenance services (such as plumbing and electrical), we found a way to chain three vulnerabilities:

  1. Sequential identifiers
  2. Broken authorization
  3. Cross-site scripting (XSS)

Issue #1: "Sequential identifiers" is a design weakness that makes sensitive account information easy to predict: account IDs are assigned in order rather than randomized. For example, my account number is 0001, yours is 0002, your friend's is 0003, and so on. This means an attacker can predict account IDs, and that predictability makes it easier to achieve widespread compromise across the entire user base of an application.

Issue #2: Broken authorization is a vulnerability where an application fails to properly verify a user’s permissions. In this case, the system didn’t enforce authorization on the API used to open maintenance tickets. This means that an attacker can create maintenance tickets for any account, including those that the attacker is not a member of. Because the attacker can predict account IDs (as noted in issue #1), the attacker can do this to every user of the application.

Issue #3: On the web page where maintenance tickets are viewed, the application failed to sanitize user input; sanitization is the mechanism that prevents an attacker from injecting malicious data. This means that when creating false tickets for victim accounts, an attacker can include XSS payloads in those tickets and the system won't stop it. The attack payloads are delivered to unwitting victims.

Because an attacker can predict every account ID and create false tickets for them that include attack payloads, which the system fails to prevent, it means that any user of the application can be attacked successfully.

  • The attacker can target a specific company or every company.
  • Simply by licensing the application, a user can be attacked by other users.
  • A user's confidential data, which the app is supposed to protect, might be accessed by other users of the app, including malicious ones.

Nightmare scenarios like this are exactly what most enterprise buyers fear when licensing applications. 

In isolation, each of these vulnerabilities is bad. In combination, they’re catastrophic. Vulnerabilities must be considered in the context of each other, rather than in isolation. Attackers seek to chain exploits, and you should, too. There’s no tool for this. You can’t automate it. You must do it manually. 
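
To make the chain concrete, here's a rough sketch of how a tester might verify it end to end: walk the sequential account IDs, try to open a ticket on accounts you don't belong to (testing the missing authorization check), and include a harmless canary payload to see whether it later renders unsanitized. Every endpoint and field name below is hypothetical; the point is the shape of the test, not the specific API:

    # Sketch of verifying the three-issue chain: predictable IDs + missing
    # authorization + unsanitized ticket content. All names are hypothetical.
    import requests

    BASE = "https://maintenance.example"     # hypothetical application
    MY_ACCOUNT_ID = 7                        # the tester's own account (hypothetical)
    CANARY = "<script>console.log('canary')</script>"  # harmless marker, not a real exploit

    def test_chain(session: requests.Session):
        for account_id in range(1, 50):          # issue #1: IDs are sequential, so just count
            if account_id == MY_ACCOUNT_ID:
                continue
            resp = session.post(
                f"{BASE}/api/tickets",            # issue #2: no authorization check on this API
                json={"account_id": account_id,
                      "description": CANARY},     # issue #3: payload stored without sanitization
                timeout=10,
            )
            if resp.status_code in (200, 201):
                print(f"Ticket created on account {account_id} we don't belong to")
        # Final step (manual): view a created ticket as that account's user and
        # confirm whether the canary renders as markup instead of plain text.

    if __name__ == "__main__":
        test_chain(requests.Session())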

Unknown Unknowns

In 1955, American psychologists developed the idea of “unknown unknowns.” It has entrenched itself in security vocabulary ever since. The idea is that there are three types of issues:

  • Known knowns: flaws that you know about and that impact you. These are the vulnerabilities you’ve discovered through security assessments.
  • Known unknowns: flaws you know exist but may or may not affect you. These are the common classes of vulnerabilities persistently found in the wild, such as XSS, CSRF, or broken authentication; you're not yet sure whether they exist in your system. Also in this group are widespread vulnerabilities that have a patch (a set of changes to a computer system that fixes a flaw in it) which you haven't applied, whether because you're unaware the patch exists or you just haven't gotten around to it (both cases are extremely common). A sketch of one such check follows this list.
  • Unknown unknowns: flaws so unexpected you don’t even consider them. This comes in numerous forms, including novel versions of common vulnerabilities, zero-days in the supply chain, and previously unknown attack methods.
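
For the "patch exists but isn't applied" case, dependency auditing is the easy win. As a small example, here's a sketch that runs pip-audit, a tool that checks Python dependencies against public vulnerability databases (assumed to be installed here), and fails a build if anything known turns up:

    # Sketch: fail fast when installed dependencies have known, patchable vulnerabilities.
    # Assumes pip-audit is installed (pip install pip-audit).
    import subprocess
    import sys

    def audit_dependencies() -> int:
        # pip-audit exits non-zero when it finds known vulnerabilities.
        result = subprocess.run(["pip-audit"], capture_output=True, text=True)
        print(result.stdout)
        return result.returncode

    if __name__ == "__main__":
        sys.exit(audit_dependencies())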

A CTO once humorously said to me, “Ted, I don’t like monsters. And I don’t like getting bitten in the butt. But I don’t even know what the monsters are or when they’d jump up and bite me in the butt.” This memory always brings a smile to my face, not just because it’s ridiculous phrasing, but also because it’s the best description I’ve ever heard of a common fear: “I don’t know what I don’t know.” Even if you haven’t admitted it out loud to anyone, I’m guessing you’ve felt that fear too at some point. 

You resolve that concern by turning unknowns into knowns. 

Dealing with unknown unknowns is the absolute pinnacle of security testing. It entails the most important issues you’ll face. It’s where your focus needs to be.

Unfortunately, most security testing actually doesn’t focus on these issues. Tool-based approaches all settle for the easily discoverable, known issues and go no further. Even most manual approaches lack the skill, experience, or sophistication to help you resolve unknown unknowns. To find the unknown unknowns requires skilled manual investigation. It is the only way to solve this part of the security puzzle. 

Request Smuggling Example

To illustrate, consider request smuggling. It’s an attack technique that abuses discrepancies in how different pieces of software process inputs. Web apps get lots of requests and need a way to handle them.

There are two ways to do this: with one expensive server or with multiple cheap servers. It's usually a better business decision to use multiple cheap servers. When you do that, you need load balancing, a process that distributes the requests. Imagine it like a traffic cop pointing some cars into the left lane and some cars into the right lane. As a result, you now have two types of software: the load-balancing software and the server software.

These are intended to process inputs the same way; however, sometimes their implementations differ slightly. For example, if a request is supposed to end in a specific format but doesn't, each system independently decides what to do. When there's a mismatch in how the two pieces of software handle such requests, attackers can smuggle a malicious request in alongside a legitimate one. The load balancer forwards both as a single, legitimate request, but the server then executes it as two requests: one legitimate and one malicious. There are endless things an attacker could do with this, such as escalate permissions to admin rights or leak other users' passwords.
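
To make this concrete, the classic CL.TE variant sends a single request containing both a Content-Length header and a Transfer-Encoding: chunked header. If the front end honors Content-Length while the back end honors Transfer-Encoding, the leftover bytes get interpreted as the start of a second request. Below is a rough timing-based detection sketch; the host is hypothetical, and a probe like this should only ever be pointed at systems you're authorized to test:

    # Rough CL.TE request-smuggling detection probe using a timing discrepancy.
    # The host is hypothetical; only test systems you are authorized to test.
    import socket
    import ssl
    import time

    HOST = "app.example"   # hypothetical front end behind a load balancer
    PORT = 443

    # A front end honoring Content-Length: 4 forwards only "1\r\nA"; a back end that
    # honors Transfer-Encoding then waits for the rest of the chunked body, causing a delay.
    PROBE = (
        "POST / HTTP/1.1\r\n"
        f"Host: {HOST}\r\n"
        "Content-Type: application/x-www-form-urlencoded\r\n"
        "Content-Length: 4\r\n"
        "Transfer-Encoding: chunked\r\n"
        "\r\n"
        "1\r\n"
        "A\r\n"
        "X"
    )

    def timed_probe() -> float:
        raw = socket.create_connection((HOST, PORT), timeout=15)
        conn = ssl.create_default_context().wrap_socket(raw, server_hostname=HOST)
        start = time.monotonic()
        conn.sendall(PROBE.encode())
        try:
            conn.recv(4096)       # a long hang here suggests a CL.TE mismatch
        except socket.timeout:
            pass
        conn.close()
        return time.monotonic() - start

    if __name__ == "__main__":
        print(f"Response took {timed_probe():.1f}s (a multi-second delay is a smuggling indicator)")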

This vulnerability is incredibly specific to your system’s configuration and your application’s logic. Attacking it requires a custom exploit. It wasn’t even documented as an attack technique until sixteen years after it became possible. You simply cannot find complex yet catastrophic issues like this with a tool alone. Your investigation absolutely must be by a skilled human, solving problems manually.

This method helps you understand your problems so you can solve them. However, it only works if you go the whole way. Settling for half measures like tools alone is not going to cut it.

Does It Work, Though?

Here’s some data about outcomes this method delivers. It’s drawn from 51 security assessments spanning 7,514 hours in which 720 vulnerabilities were discovered.

The data shows that using this method, our analysts discovered critical vulnerabilities in 56 percent of assessments, plus high-severity flaws in 96 percent of them. Medium- and low-severity issues are pretty much a guarantee, discovered 100 percent of the time. 

This method works. 

The takeaway is a simple one: If your goal is to find vulnerabilities so you can fix them and prove your app is secure, this is the way to do it.

The power of this method lies on the other side of that capability gap. The most important outcomes are delivered when you abuse functionality, chain exploits, and seek the unknown unknowns. Compared to basic methods like scanning alone, this method is harder, takes longer, and costs more. But those investments pale in comparison to the impact you’ll make on your security mission. This method brings the wrecking ball to the demolition. It helps you smash the system so you can get better. 

If your business lacks an external security partner capable of this methodology, talk to us today so we can secure your application.

Content adapted from Hackable: How to Do Application Security Right by Ted Harrington.

 
