Web applications can be tested manually or automatically, as a blackbox or a whitebox, with static or dynamic analysis. In this post we compare the advantages and disadvantages of a variety of approaches and solutions.
The following table gives a side-by-side overview of the different application security testing approaches and the criteria by which they can be compared. In the following, each approach is introduced.
| Category | SAST: Taint Analysis, Language-Specific (RIPS) | SAST: Taint Analysis, Language-Generic | SAST: Pattern Matching | DAST / Blackbox Tools | Manual: Whitebox / Code Audit | Manual: Blackbox / Pentest |
|---|---|---|---|---|---|---|
| Code Coverage | | | | | | |
| Early Bug Detection | | | | | | |
| Detect Complex Issues | | | | | | |
| Detect Logical Flaws | | | | | | |
| Result Accuracy | | | | | | |
| Remediation Details | | | | | | |
| Initial Costs | | | | | | |
| Setup Costs | | | | | | |
| Verification Costs | | | | | | |
| Remediation Costs | | | | | | |
An automated security test of an application can be performed in two fundamentally different ways: either the source code files of the application, written in a specific programming language, are scanned automatically (static analysis), or the URL/IP of an already deployed and running application is tested remotely (dynamic analysis).
Static analysis is performed solely on the source code of an application without executing it. This has the great advantage that the code does not have to be running or even functional, so SAST tools can be integrated directly into the development process and detect security issues as early as possible, while the code is being written. The available source code is scanned and all issues are pinpointed to the exact line of code for quick remediation. However, the approaches differ fundamentally in the complexity of the analysis and in the number of false positives they produce.
For taint analysis, the complete source code of the application is transformed into an abstract graph model that enables efficient data flow analysis. The data flow of user input (sources, such as GET and POST parameters) is traced throughout the complete code base, across the boundaries of files, functions, classes, and methods. Whenever user input flows into a security-sensitive operation (a sink, e.g. a SQL query or a file access), an attacker could manipulate this operation, and thus a security vulnerability is reported (e.g. a SQL injection or path traversal vulnerability). Input sanitization and validation are recognized in order to prevent false positives.
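To make this concrete, here is a minimal PHP sketch of such a source-to-sink flow; the `findUser()` functions, the database handle, and the table name are hypothetical.

```php
<?php
// Source: user input enters the application via a GET parameter
// and is therefore considered tainted.
$id = $_GET['id'];

$db = new PDO('sqlite::memory:'); // placeholder database handle

// The tainted value crosses a function boundary on its way to a sink.
function findUser(PDO $db, $id) {
    // Sink: the tainted value reaches a security-sensitive SQL query,
    // so a SQL injection vulnerability is reported for this line.
    return $db->query("SELECT * FROM users WHERE id = " . $id);
}

function findUserSafe(PDO $db, $id) {
    // The integer cast sanitizes the value; a precise analysis
    // recognizes this and does not report a false positive here.
    return $db->query("SELECT * FROM users WHERE id = " . (int)$id);
}
```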
Language-Specific Analysis
A taint analysis can only be as precise as its underlying abstract model. Different programming languages have different features, behaviors, and pitfalls. RIPS tailors its award-winning analysis algorithms specifically to each programming language. It simulates all language-specific features and characteristics in order to generate the most precise and efficient model possible. Dynamic programming languages such as PHP in particular are known for their typing issues and pitfall-prone built-in features, which are precisely simulated. As a result, RIPS detects even complex and subtle security issues that generic solutions miss.
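For illustration, consider a well-known pitfall of this kind, a loose comparison combined with `strcmp()` (a minimal sketch; `$secret` and `grantAccess()` are hypothetical):

```php
<?php
// In older PHP versions, strcmp() returns NULL instead of an integer
// when one argument is an array, e.g. when the attacker sends
// password[]=x instead of password=x in the request.
if (strcmp($_POST['password'], $secret) == 0) {
    // Loose comparison: NULL == 0 evaluates to true in PHP,
    // so the password check is bypassed entirely.
    grantAccess();
}
// A strict comparison (=== 0) is not affected by this type juggling.
```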
Language-Generic Analysis
Other SAST vendors use one generic model for multiple programming languages. While this can work for similar languages such as C and Java (the languages originally targeted by these vendors), it fails when dynamic scripting languages such as PHP are later added to the same model. The language-specific details that are often the root cause of modern vulnerabilities are lost in a generic abstraction layer shared by fundamentally different languages. As a result, critical security vulnerabilities are missed and many false positives occur. Furthermore, the analysis can take multiple hours or even days to complete, which is impractical for continuous testing.
There are static analysis tools that do not perform data flow analysis but merely search for certain keywords or patterns in the source code. For example, a tentative security report is issued whenever the function `eval()` is found, without verifying whether an attacker could actually influence the evaluated code. While this works for finding simple code quality issues, it fails to find real, exploitable security issues. A report for every `echo()` or `*query()` call, in order to detect possible Cross-Site Scripting or SQL injection vulnerabilities, leads to thousands of false positives, while more complex vulnerability types remain undetected.
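The difference between a keyword hit and a real finding fits into a few lines (hypothetical code):

```php
<?php
// Flagged by pattern matching, but harmless: the evaluated
// string is a hard-coded constant an attacker cannot influence.
eval('return 1 + 1;');

// The actually exploitable case: user input reaches eval().
// Only data flow analysis can tell these two calls apart.
eval($_GET['code']);
```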
DAST or blackbox tools perform a lightweight scan from the client side of a given web application that is deployed and running. Multiple malicious input patterns for common web attacks are automatically sent to the URL of the application, while its responses are evaluated for abnormal behavior (e.g. SQL error messages or time delays) that could indicate a vulnerability. It is recommended to use a dedicated test setup to prevent interference with real user data. This fuzzing approach is very slow and only scratches the surface of an application, without crawling all of its features deeply enough. For example, vulnerabilities are missed that require a specific combination of actions (e.g. log in first, activate mode 1, then use feature 5). As a result, blackbox tools have limited code coverage, lack support for many vulnerability types, and miss many security issues. DAST tools are often used for assistance in manual penetration tests.
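Conceptually, such a scan boils down to sending payloads and matching responses, as in this highly simplified sketch (the target URL and the error signature are made up):

```php
<?php
// Hypothetical target on a dedicated test setup.
$target  = 'http://test.example.com/item.php?id=';
$payload = "1'"; // a classic probe for SQL injection

// Send the malicious input pattern and fetch the response.
$response = file_get_contents($target . urlencode($payload));

// Evaluate the response for abnormal behavior, here a database
// error message leaking into the output.
if (strpos($response, 'SQL syntax') !== false) {
    echo "Possible SQL injection at {$target}\n";
}
```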
Similar to automated application security testing, a manual test can be performed in two different scenarios: finding as many bugs as possible in a given source code (audit), or simulating an attack against a running application (pentest).
The most thorough way of finding and eliminating all possible security vulnerabilities is a manual review of the source code. A team of code auditors is hired to manually inspect relevant parts of the source code for security issues, either on-site or remotely. This enables skilled experts to find subtle security bugs that automated tools and developers missed, for example logical issues or complex cryptographic weaknesses. A final report lists all findings of the audit together with remediation advice. This approach fits best when the code is highly business-critical and not changing rapidly. However, for modern applications with thousands or millions of lines of code, a manual code audit can be infeasible within a limited time frame, or can become very expensive. Static analysis tools can be used for assistance.
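A typical example of what only a human reviewer catches is an error in the access logic rather than in the data handling, which no taint analysis will flag (a sketch with a hypothetical `loadDocument()`):

```php
<?php
// The ID is cast to an integer, so taint analysis sees properly
// sanitized input and stays silent.
$docId = (int)$_GET['doc'];

// But nothing verifies that the logged-in user is allowed to read
// this document: an insecure direct object reference that only a
// reviewer who understands the application's access model will spot.
echo loadDocument($docId);
```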
More commonly, web applications are tested from the outside (blackbox), without access to the source code. The goal is to simulate an attack and to get a snapshot of how successful an attacker could be. A team of penetration testers is hired to attack a production or test setup of the application in a realistic scenario: only with access to the URL/IP and without further knowledge about the internals. A crucial factor is how much time is given to the testers; the final report can only list what was found in the limited time frame. This time frame should reflect the resources of a real attacker, which vary from a few days for script kiddies to several weeks for motivated adversaries.
As a personal side note, I would always recommend hiring a small boutique company with a strong team specialized in your technology stack, preferably with a recommendation or a list of renowned experts. There is a huge difference between what a team of skilled security experts can find manually on your site and what a team that only runs automated blackbox tools will find.
In this post we looked at different approaches for application security testing. Each approach has its own advantages and disadvantages. Clearly, there is no ultimate approach that fits all company requirements and attacker models. Instead, it is helpful to find the combination of different approaches that best fits a company's setup, attacker model, and budget.