Tuesday, December 13, 2011

Artificial Shine for Chrome - Google Buys “Test” Report


I know some of you haven’t seen the “behind-the-scenes” shenanigans that can happen when a company pays another company to test its product. There are certainly cases where this works and the results are not biased, but this post shows some tell-tale signs of a seriously biased report, one that undermines the credibility of both the buyer (Google, in this case) and the tester (Accuvant Labs). There are more egregious examples of poorly executed paid evaluations, but like this test of Google’s Chrome browser, the other example I have in mind also involved a browser: Internet Explorer. Browser testing seems to bring out the worst in testing bodies.

Back in 2009, Steve Regan at The Tech Herald asked “Can you trust the NSS Labs report touting the benefits of IE8?” (http://www.thetechherald.com/articles/Can-you-trust-the-NSS-Labs-report-touting-the-benefits-of-IE8). NSS Labs made the cardinal error of failing to disclose that Microsoft had paid for the test. In the report I am writing about, Accuvant didn’t make that mistake. Kudos for admitting up front that Google is their client.

Evidently Accuvant got a nice chunk of money to find whatever tests Google’s Chrome browser might come out first in. OK, quite possibly Google did the research itself and asked Accuvant to independently run the tests and report that, by those criteria, Chrome was the best.
Accuvant did the requested paid testing and issued a textbook “I got a marketing degree and I follow the template” type of press release (http://www.accuvant.com/news/2011/12/09/accuvant-releases-web-browser-security-research-findings).

The first part of the first sentence in the press release is a dead giveaway that this is marketing-driven and not research-driven: “Accuvant, the only research-driven information security partner delivering alignment, clarity and confidence to enterprise and government clients,…” The only? Yeah, right. Is there any research-driven evidence to substantiate that obviously marketing-driven claim? A word to the wise... when a testing organization has to fabricate its stature to grant validity to its skills, be very, very skeptical. So we start the press release with a grandiose claim that is best used to enhance the growth of plants.

You don’t have to get past the first paragraph to run into more conclusions that are not substantiated by the testing. The last sentence of the first paragraph is “Conducted by the renowned Accuvant LABS research team, the research finds that Google Chrome is currently the browser that is most secured against attacks.” Whether or not Accuvant Labs is renowned is a subjective argument, but a review of the testing shows that the conclusion that Chrome is most secured against attacks is not validated by the study. The study measures resilience to a narrow range of attacks.

A bright spot for Accuvant customers is that while Accuvant may be marketing-driven rather than research-driven, they are very customer-focused!

The press release touts the report as “Browser Security Comparison: A Quantitative Approach,” but by the time you finish reading the 102-page report, one of the glaring omissions is “how much?” Yes, if you are telling me that one browser is safer than another in a “quantitative” report, then I do expect to be told how much safer. In this case the word “quantitative” reflects the marketing nature of the report.

While the press release calls the testing comprehensive, the actual report goes to great lengths to explain why Accuvant (and presumably others) are unable to test important metrics. Once a metric is described as impossible to measure accurately, the report tries to dismiss it as unimportant. The testing focuses on a very narrow range of attacks, and Accuvant then misrepresents it as comprehensive. This is NOT a comprehensive test.

The press release claims “We compared web browsers from a layered perspective,” yet the actual report explains why layers of protection were not included in testing. Notably… “Microsoft’s application reputation component and Chrome’s malicious executable detection were not included in the comparison.”

The report is full of conclusions and unsupported assertions that undermine its credibility and demonstrate a project undertaken with the goal of supporting a foregone conclusion.

Here is what you can actually conclude from the testing: on one operating system (Windows 7, 32-bit), one version of Google Chrome performed better within the limited scope of the tests.

The report completely fails to demonstrate that this better performance actually results in better protection, as substantiated by the results of real-world attacks. The test is not valid for 64-bit Windows, and it does not show that any of the browsers offers better protection than the others as part of a layered defense, such as when one uses SandBoxIE (www.sandboxie.com).
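For readers who want to see what that extra layer looks like in practice, here is a minimal sketch of launching Chrome inside a SandBoxIE sandbox from Python. The install paths and the box name are assumptions about a typical default setup, not anything taken from the Accuvant report.

```python
# Minimal sketch: run a browser inside a SandBoxIE sandbox as one extra layer
# of defense. The paths and box name below are assumptions about a typical
# default installation; adjust them for your own machine.
import subprocess

SANDBOXIE_START = r"C:\Program Files\Sandboxie\Start.exe"                    # assumed install path
CHROME_EXE = r"C:\Program Files (x86)\Google\Chrome\Application\chrome.exe"  # assumed Chrome path
BOX_NAME = "DefaultBox"                                                      # SandBoxIE's default box

# Start.exe /box:<name> <program> launches the program inside the named sandbox,
# so whatever the browser writes to disk stays contained in that sandbox.
subprocess.call([SANDBOXIE_START, "/box:" + BOX_NAME, CHROME_EXE])
```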

The two most critical elements in browser security are the user and keeping the browser you use current.

It would be nice to know whether one browser is more secure than another, how much more secure, and how consistently, but the Accuvant test was not designed to answer those questions; it was designed to deliver the product Google paid for. Yeah, Chrome does look pretty shiny under artificial light!

Randy Abrams
Independent Security Analyst
