Tuesday, January 11, 2011

Follow-Up & Clarifications

I’ve been pretty busy contacting the various vendors and delivering materials they can use for QA & development, and I must mention that so far, every vendor / developer I have contacted has responded kindly; many responded with excitement and have already started enhancing their tools (which is GREAT news for all of us).

I managed to find the time to contact about 18 vendors, and hopefully I’ll manage to contact more in the following weeks (25 left to go). This process requires me to analyze the benefits of each vendor’s tools, and as a result, it is more time consuming than I originally thought; however, I believe it will soon lead me to some additional interesting conclusions and insights, which I’ll publish separately.

In the process of contacting the vendors, I realized that I had neglected some of my duties and forgot to publish some important clarifications:

  • Although the test cases implemented as “False Positives” are by no means vulnerable to SQL Injection or Cross-Site Scripting, some of them still fall into a category of information that should be presented in the report in the context of another type of exposure:

    o Pages that disclose sensitive information / exceptions (some SQL Injection False Positive test cases simulate SQL errors that do not derive from user-originating input, such as connection failures; see the sketch after this list).

    o Pages that fall under the category of insecure coding practices (some of the False RXSS & SQLi pages).

  • Some tools are still in early beta, and some didn’t even publish an official alpha version (aidSQL, iScan, and some of the other tools that had zero accuracy); the accuracy of these tools was not really audited, due to limitations or bugs that will surely be mitigated in future versions. The benchmark will be updated as soon as the tool vendors release new stable versions.
  • The execution of certain tools that were reported as having zero accuracy failed due to bugs or configuration flaws, not accuracy-related issues; these tools include SQLMap, aidSQL, VulnDetector, and a couple more. I’m currently working with the various vendors to figure out how to execute them properly (or how to work around the specific bugs), so the test will actually reflect their accuracy level.

As a result, I believe that the next benchmark is going to be performed sooner than I planned.

It will probably include the same results alongside corrected scans of the tools that had execution issues (particularly the SQL Injection tools), and maybe additional enhancements (under discussion).

I wish you all a Happy New Year :)
