Burp vs. Netsparker

In this series I’m going to be taking a look at how various web scanners perform at finding web application security issues.

This installment compares Burp Suite Pro with Netsparker.

Settings

I am going to be using default settings for the most part with the following caveats:

  1. Netsparker is an automated scanner and thus it performs content discovery and spidering on its own
  2. Burp is a manual testing tool, and these steps are done separately within it
  3. For this reason, I performed a basic (default) spider of the site in question using Burp. The only setting I changed was disabling the submission of login forms (a rough sketch of what that means appears after this list)
  4. No authentication was set in Netsparker either
  5. I have a number of extensions installed in Burp, but I did nothing with them manually. If any of them worked automatically or passively, their output was included, but nothing beyond that
  6. I also ran a Content Discovery session from Burp's Engagement Tools. I let it run for about 10 minutes, which is roughly how long Netsparker took to finish its scan. This is not a direct comparison because, as noted above, the tools handle this type of activity quite differently
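A quick illustration of the spider behavior mentioned above: the only tweak was telling Burp's crawler not to submit forms containing password fields. The sketch below is hypothetical and is not how either tool actually implements crawling; the target URL is a placeholder and the requests/BeautifulSoup libraries are assumptions.

```python
# Minimal crawl sketch: follow same-host links and inspect forms, but never
# submit anything that looks like a login form (i.e. contains a password input).
# Hypothetical illustration only -- Burp and Netsparker do this internally.
from collections import deque
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

START = "https://example-wordpress-site.test/"  # placeholder target

seen, queue = set(), deque([START])

while queue and len(seen) < 200:  # small safety cap for the sketch
    url = queue.popleft()
    if url in seen:
        continue
    seen.add(url)
    try:
        resp = requests.get(url, timeout=10)
    except requests.RequestException:
        continue
    soup = BeautifulSoup(resp.text, "html.parser")

    for form in soup.find_all("form"):
        if form.find("input", {"type": "password"}):
            continue  # skip login forms entirely, per the setting above
        # a real spider would submit benign forms here with dummy values

    for a in soup.find_all("a", href=True):
        link = urljoin(url, a["href"]).split("#")[0]
        if urlparse(link).netloc == urlparse(START).netloc:
            queue.append(link)

print(f"Crawled {len(seen)} pages")
```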

The test site

The test site is an up-to-date WordPress installation with around 50 articles. I chose something current and relatively small so that the scans could complete quickly.

I also wanted to use a real site rather than a test/vulnerable install because it’s more representative of reality. I am aware that WordPress sites don’t represent enterprise applications, or custom applications, or MANY other things.

This is what I chose to use, however, as a matter of practicality, and the amount of surface area (allowing the tools to highlight their checks) is better than one might expect.

The test site is on a fast host and happens to be quite responsive, so the tests completed in around 10 minutes.

Methodology

First, don’t get too excited. This isn’t a formal test. I’m not keeping strict logs. It’s pretty loose.

Results

Here are the Netsparker results:

[Screenshot: Netsparker scan results]

And here are the Burp results:

[Screenshot: Burp scan results]

Analysis

Here are the major takeaways for me:

  1. Both tools found roughly the same number of findings (which is a mostly useless metric)
  2. Burp missed a cookie not marked as secure
  3. Both tools were fooled by a fake Apache Server header
  4. Both tools found significant CSRF issues
  5. Netsparker highlighted a forbidden resource (403), and Burp did not
  6. Netsparker correctly identified that the site was running WordPress, Burp did not
  7. Burp identified cacheable SSL responses, Netsparker did not
  8. Burp identified a response that varied by User-Agent (standard browser vs. mobile browser) and highlighted the difference
  9. Burp highlighted an email address disclosure
  10. Burp highlighted cross-domain script includes, Netsparker did not
  11. Burp highlighted cross-domain referer leakage, Netsparker did not
  12. Burp highlighted a password field with autocomplete enabled, Netsparker did not
  13. Burp found HTML5 cross-origin resource sharing (CORS) in use, Netsparker did not (a few of these passive checks are sketched in code after this list)
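Several of the items above (the Secure cookie flag, cacheable HTTPS responses, password autocomplete, cross-domain script includes, and the CORS headers) are passive checks that only require inspecting a response. As an illustration of that style of check, and not the actual logic either scanner uses, here is a minimal sketch against a placeholder URL, with requests and BeautifulSoup assumed to be available.

```python
# Sketch of a few passive checks similar to the findings above, applied to a
# single response. Hypothetical code -- neither Burp nor Netsparker works this way.
from urllib.parse import urlparse

import requests
from bs4 import BeautifulSoup


def passive_checks(url):
    resp = requests.get(url, timeout=10)
    findings = []
    host = urlparse(url).netloc

    # Cookie not marked as Secure (finding 2)
    for cookie in resp.cookies:
        if not cookie.secure:
            findings.append(f"Cookie '{cookie.name}' set without the Secure flag")

    # Cacheable HTTPS response (finding 7) -- simplified heuristic
    cache = resp.headers.get("Cache-Control", "")
    if url.startswith("https://") and "no-store" not in cache.lower():
        findings.append(f"Cacheable HTTPS response (Cache-Control: {cache or 'absent'})")

    # HTML5 cross-origin resource sharing in use (finding 13)
    acao = resp.headers.get("Access-Control-Allow-Origin")
    if acao:
        findings.append(f"CORS header present: Access-Control-Allow-Origin: {acao}")

    soup = BeautifulSoup(resp.text, "html.parser")

    # Password field with autocomplete enabled (finding 12)
    for field in soup.find_all("input", {"type": "password"}):
        if field.get("autocomplete", "on").lower() != "off":
            findings.append("Password field without autocomplete=off")

    # Cross-domain script includes (finding 10)
    for script in soup.find_all("script", src=True):
        src_host = urlparse(script["src"]).netloc
        if src_host and src_host != host:
            findings.append(f"Cross-domain script include: {script['src']}")

    return findings


if __name__ == "__main__":
    for finding in passive_checks("https://example-wordpress-site.test/"):  # placeholder
        print(finding)
```

In practice a scanner applies checks like these to every response it sees rather than a single page.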

As you might have guessed, the ideal is having both sets of results: each tool found (and/or alerted on) things that the other did not.

Summary

This was purposely a test of scanner technology. If I were performing an actual assessment (be sure not to confuse the two), the results would massively favor Burp because it's a tool for helping testers do manual work, whereas Netsparker is not.

My preference for the results leans toward Burp. I think the HTML5 result was a big one. I like how quickly it found what it found. And I generally prefer using the automated functionality of Burp to other tools because it’s all happening within a manual testing tool.

That being said, I think Netsparker did really well. It found extremely similar content and did it with virtually no configuration.

Anyway, hope this helps someone. I’m going to be doing more of these comparisons using various tools, and many of the comparisons will include manual vs. automated findings, results after optimizing settings (as opposed to using defaults), etc.

Notes

  1. I speak regularly with Daf (the author of Burp), and he's aware of the various points in this piece. No surprises here. If you think you're telling him about some great new feature, he's probably had it on the backlog for several years. It takes time to put new features in, and prioritization is a constant challenge. I for one think he and the team are doing a spectacular job.
  2. Here are the Burp extensions I had installed: Autorize, Identity Crisis, Image Location Scanner, Intruder File Payload Generator, Retire.js, Site Map Fetcher, J2EEScan, HTML5 Auditor, Bypass WAF, Content Type Converter, .NET Beautifier, Error Message Checks, Authz, JS Beautifier, Session Auth, Active Scan++, Additional Scanner Checks, SQLiPy, JSON Decoder, Reflected Parameters, Software Version Reporter, Headers Analyzer, Session Timeout Test, CSRF Scanner
  3. I’m not going to be going overboard with these analysis pieces, but if you have something you’d really like to see in here let me know. I’m keeping them very short and quick, but am happy to add an analysis section if it’s important to people.