
Why Real-World Testing Matters

Our perfect scores on the latest SE Labs® tests reveal something even more important than the scores themselves

  • SE Labs awarded AAA ratings to Symantec Endpoint Security Complete and Carbon Black Cloud for perfect real-world ransomware defense, with zero false positives.
  • These solutions were tested in a rigorous environment that accurately mimicked real-life attacks, assessing detection, prevention, response and remediation.
  • Unlike conventional cybersecurity tests, SE Labs does not allow tuning for better results, so outcomes reflect true performance—not lab-optimized conditions.
  • Real-world testing helps security professionals make informed decisions when investing in EDR and other solutions.

Everybody loves to ace a test. But for a cybersecurity vendor, the celebration is all the sweeter when you know you’ve aced a test that is not only exceptionally difficult, but also modeled after the real-world threats organizations face every minute of every day. It’s even better when you’ve aced a test you can’t tune for (more on that later).

I speak from experience on this. Last week, SE Labs® released two reports that deserve a close look by anyone in the market for endpoint detection and response (EDR) protection. Both Symantec Endpoint Security Complete (SES-C) and Carbon Black Cloud earned SE Labs’ coveted AAA award for Advanced Security EDR Protection. In the largest known public test of its kind, each solution achieved top marks for exceptional performance in detecting, blocking and neutralizing ransomware attacks with zero false positives. Despite being pummeled with hundreds of attacks, both solutions performed perfectly, stopping 100% of all attacks and preventing every attempted threat from executing.

What makes this remarkable is not only how well these solutions performed—SE Labs CEO Simon Edwards called their results “exceptional”—but the conditions under which they were tested. SE Labs tested these products in an environment that more closely mirrors actual working conditions than any testing environment we’ve seen. 

And according to Edwards, no cybersecurity product has ever been subjected to a more demanding test.

The toughest test out there

Unlike other evaluations that fail to factor in every element of real-world attacks, the SE Labs methodology uses a comprehensive testing model. SE Labs sets up real networks and hacks them the same way threat actors do, while monitoring how successful those attacks are (or in this case, aren’t) against the security solutions they’re testing. The tests score solutions based on detection, prevention, response and remediation, along with their ability to neutralize threats, including living-off-the-land (LOTL) attacks. Just like real life.

In our case, both solutions were subjected to real-world attack scenarios under conditions that can only be described as harsh. SE Labs mounted attacks by 15 known ransomware families—not just the two or three well-known threats that other testing regimens use. They assaulted both products with 556 ransomware payload files comprising both known and unknown variants. (In fact, two-thirds of the files were unknown malware variants.)

The attacks fell into two categories: Direct Attacks and Deep Attacks. Direct Attacks pit the product against a wide distribution of malware within a relatively short attack chain—a realistic representation of many attacks that target organizations of all sizes, and a strong test of prevention tools for stopping known and unknown threats. Deep Attacks are modeled after more sophisticated approaches in which attackers attempt to infiltrate an environment and move laterally to deliver their ransomware payload, roost unseen within the environment or inflict their damage in other ways. Defending against these scenarios requires a product to use detection and response capabilities to track an attacker’s movement throughout the entire attack chain; successful products shut down the attack before it does damage.

SE Labs examines not only whether a product detected the ransomware, but also whether it had deep insight into the entire process of how the network was hacked. “This level of visibility,” Edwards observes, “would be a significant advantage for a security professional who is battling a persistent attacker in real time.” If that doesn’t sound like real life for a security analyst, I don’t know what does.

What’s more, SE Labs measures how accurately a product classifies legitimate applications and URLs, while factoring in the interactions the product has with users. A perfect score goes only to those solutions that properly classify legitimate objects as safe—or that properly choose not to classify them at all. “In neither case,” note the SE Labs reports, “should it bother the user.” For alert-weary SecOps teams, that kind of silence really is golden.

No easy A (or AAA)

One interesting dynamic about SE Labs tests is that it’s up to the vendor whether or not the results will be published. “Most products fail this exceptionally difficult test,” notes Edwards, “so you never hear about them because the vendors don’t want to publicize the results. Only the very best performing solutions will earn a AAA rating, period.” 

This helps explain why some in the industry believe that every vendor seems to earn AAA ratings from SE Labs. (It’s a perception, but it’s wrong.) The testing lab evaluates many more products than the public is aware of. Because vendors only want to publish stellar results, those are the only results the public at large ever sees.

The trouble with tuning

Some cybersecurity product evaluations produce results that are a bit like the miles-per-gallon (MPG) ratings that automakers love to advertise: They’re enticing, but to achieve those numbers in real life, you often have to drive like your grandma going to church. “Your mileage may vary” didn’t become a catchphrase for nothing.

Now apply that same idea to testing cybersecurity products. Certain tests encourage and reward vendors for tweaking their configurations in ways that will assure them high scores. For instance, in one popular testing platform, vendors commonly configure their systems to alert on literally anything that could possibly be construed as a threat. That’s fine in a lab. But in the real world, that highly tuned configuration would crater productivity, because security analysts would spend their days chasing down alerts that a sensibly configured solution would never raise.

This is called “tuning for the test.” Vendors get a great score because they configured their solution precisely to get a great score—and that configuration is utterly unworkable in a real-world environment. So what good, really, is a high score on a test that isn’t relevant to the lived experience of security teams?

SE Labs works differently. There is no tuning. They test software only in configurations that any organization can deploy today. This is of enormous value.

The real winner in all this? You.

The point of product testing is to help prospective buyers make informed decisions. When security professionals assess their options for EDR or other solutions, they deserve to know how a realistic configuration will behave in their environment—not some highly tuned version that would never work in the world they live in. The results of those tuned-for-the-lab evaluations should have little credibility for organizations that rely on these solutions to stop real-world threats from attacking their real-world networks, assets and data.

We’re excited to see Symantec Endpoint Security Complete (SES-C) and Carbon Black Cloud earn SE Labs’ prestigious AAA rating for Advanced Security EDR Protection. It’s exciting because we know how incredibly challenging the SE Labs test is, and we’re thrilled to see our solutions perform perfectly under these daunting real-world conditions. Our products stood up without fail to the worst that Edwards and his team could throw at them.

And the best part? Because security teams like yours must defend their environments in the real world, these test results offer confidence that both solutions will perform spectacularly. For real. 

Learn more about these tests—and how these products fared—by registering for this webinar: Why Cybersecurity Tools Need To Be Attacked.


About the Author

Nate Fitzgerald

Head of Product Management, Enterprise Security Group, Broadcom

Nate has been a cloud security product leader for over 20 years.
