The introduction of the MITRE ATT&CK evaluations is a welcome addition to the third-party testing arena. The ATT&CK framework, and the evaluations in particular, have gone a long way toward advancing the security industry as a whole and the individual security products that serve the market.
The insight garnered from these evaluations is incredibly useful. But let’s admit it: for everyone except those steeped in the analysis, it can be hard to understand. The information is valuable, but dense. There are multiple ways to look at the data and even more ways to interpret and present the results (as no doubt you’ve already come to realize after reading all the vendor blogs and industry articles!). We have been looking at the data for the past week since it was published, and still have more to examine over the coming days and weeks.
The more we assess the information, the clearer the story becomes, so we wanted to share Trend Micro’s 10 key takeaways from our results:
1. Looking at the results of the first run of the evaluation is important:
- Trend Micro ranked first in initial overall detection. We are the leader in detections based on initial product configurations. This evaluation enabled vendors to make product adjustments after a first run of the test to boost detection rates on a re-test. The MITRE results show the final results after all product changes. If you assess what the product could detect as originally provided, we had the best detection coverage among the pool of 21 vendors.
- This is important to consider because product adjustments can vary in significance and may or may not be immediately available in vendors’ current products. We also believe it is easier to do better once you know what the attacker was doing; in the real world, customers don’t get a second try against an attack.
- Having said that, we too took advantage of the retest opportunity, since it allows us to identify product improvements. But our overall detections were so high that even after removing those associated with a configuration change, we still ranked first overall.
- And so no one thinks we are just spinning: without making any exclusions to the data at all, and taking the MITRE results in their entirety, Trend Micro had the second-highest detection rate, with more than 91% detection coverage.
2. There is a hierarchy in the types of main detections – techniques are the most significant
- There is a natural hierarchy in the value of the different types of main detections.
- A general detection indicates that something was deemed suspicious but it was not assigned to a specific tactic or technique.
- A detection on tactic means the detection can be attributed to a tactical goal (e.g. credential access).
- Finally, a detection on technique means the detection can be attributed to a specific adversarial action (e.g. credential dumping).
- We have strong detection on techniques, which is the better detection measure. Once the individual MITRE technique is identified, the associated tactic can be determined, as there are typically only a handful of tactics that apply to a specific technique. When comparing results, you can see that vendors had lower tactic detections on the whole, reflecting a general acknowledgement of where the priority should lie.
- Likewise, the fact that we had fewer general detections than technique detections is a positive. General detections are typically associated with a signature; as such, this indicates a low reliance on AV.
- It is also important to note that we did well on telemetry, which gives security analysts the type and depth of visibility they need when investigating detailed attacker activity across assets.
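The hierarchy described above can be sketched in code. This is purely illustrative and not Trend Micro’s implementation; the detection records, field names, and ranking values are all invented for the example, while the ATT&CK IDs (TA0006, T1003) are real framework identifiers.

```python
# Illustrative sketch of the main-detection hierarchy: general < tactic < technique.
# All records and field names are hypothetical; the ATT&CK IDs are real.
DETECTION_VALUE = {"general": 1, "tactic": 2, "technique": 3}

detections = [
    {"type": "general", "note": "suspicious process behavior"},
    {"type": "tactic", "tactic_id": "TA0006"},        # Credential Access
    {"type": "technique", "technique_id": "T1003"},   # OS Credential Dumping
]

def best_detection(records):
    """Return the highest-value detection recorded for an attack step."""
    return max(records, key=lambda r: DETECTION_VALUE[r["type"]])

print(best_detection(detections)["type"])  # -> technique
```

The key point the sketch captures is that a technique-level detection subsumes the others: knowing the technique lets you infer the tactic, while the reverse is not true.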
3. More alerts does not equal better alerting – quite the opposite
- At first glance, some may expect one to have the same number of alerts as detections. But not all detections are created equal, and not everything should trigger an alert (remember, these detections are for low-level attack steps, not for separate attacks).
- Too many alerts can lead to alert fatigue and add to the difficulty of sorting through the noise to what is most important.
- When you consider the alerts associated with our higher-fidelity detections (e.g. detections on technique), the results show that Trend Micro did very well at reducing the noise of all of the detections into a minimal volume of meaningful, actionable alerts.
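The noise-reduction idea above can be sketched as a simple consolidation step: many low-level detections collapse into one alert per technique. This is a hypothetical illustration, not how any specific product works; the records and field names are invented.

```python
# Hypothetical sketch of alert consolidation: collapse many low-level
# detections into one actionable alert per ATT&CK technique.
from collections import defaultdict

detections = [
    {"technique": "T1003", "host": "host-a"},
    {"technique": "T1003", "host": "host-a"},
    {"technique": "T1003", "host": "host-b"},
    {"technique": "T1059", "host": "host-a"},
]

def consolidate(dets):
    """Group detections by technique; emit one alert carrying all evidence."""
    grouped = defaultdict(list)
    for d in dets:
        grouped[d["technique"]].append(d)
    return [{"technique": t, "evidence": ev} for t, ev in grouped.items()]

alerts = consolidate(detections)
print(len(detections), "detections ->", len(alerts), "alerts")  # 4 detections -> 2 alerts
```

An analyst then triages two alerts instead of four raw detections, with the underlying evidence preserved inside each alert rather than discarded.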
4. Managed Service detections are not exclusive
- Our MDR analysts contributed to the “delayed detection” category. This is where the detection involved human action and may not have been initiated automatically.
- Our results show the strength of our MDR service as one avenue for detection and enrichment. If an MDR service was included in this evaluation, we believe you would want to see it provide good coverage, as that demonstrates the team is able to detect based on the telemetry collected.
- What is important to note, though, is that the numbers for delayed detection don’t necessarily mean it was the only way a detection was or could be made; the same detection could be identified by other means. There are overlaps between detection categories.
- Our detection coverage results would have remained strong without this human involvement – approximately 86% detection coverage (with MDR, it boosted it up to 91%).
5. Let’s not forget about the effectiveness and need for blocking!
- This MITRE evaluation did not test a product’s ability to block or protect against an attack; it exclusively looked at how effective a product is at detecting an event that has already happened, so no measure of prevention efficacy is included.
- This is significant for Trend Micro, as our philosophy is to block and prevent as much as possible so customers have less to clean up and mitigate.
6. We need to look through more than the Windows
- This evaluation looked at Windows endpoints and servers only; it did not look at Linux for example, where of course Trend has a great deal of strength in capability.
- We look forward to the expansion of the operating systems in scope. MITRE has already announced that the next round will include a Linux system.
7. The evaluation shows where our product is going
- We believe the first priority for this evaluation is the main detections (for example, detecting on techniques as discussed above). Correlation falls into the modifier detection category, which looks at what happens above and beyond an initial detection.
- We are happy with our main detections, and see great opportunity to boost our correlation capabilities with Trend Micro XDR, in which we have been investing heavily and which is at the core of the capabilities we will be delivering in product to customers as of late June 2020.
- This evaluation did not assess our correlation across email security, so there is correlation value we can deliver to customers beyond what is represented here.
8. This evaluation is helping us make our product better
- The insight this evaluation has provided has been invaluable; it has helped us identify areas for improvement, and we have initiated product updates as a result.
- As well, having a product with a “detection only” mode option helps augment SOC intelligence. Our participation in this evaluation has enabled us to make our product even more flexible to configure, and therefore a more powerful tool for the SOC.
- While some vendors try to use it against us, our additional detections after the configuration change show that we can adapt quickly to the changing threat landscape when needed.
9. MITRE is more than the evaluation
- While the evaluation itself is important, it is also important to recognize MITRE ATT&CK as a knowledge base that the security industry can both align with and contribute to.
- Having a common language and framework to better explain how adversaries behave, what they are trying to do, and how they are trying to do it, makes the entire industry more powerful.
- Among the many things we do with and around MITRE, Trend Micro has contributed, and continues to contribute, new techniques to the framework matrices, and we leverage ATT&CK within our products as a common language for alerts and detection descriptions, and for search parameters.
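The “common language” idea can be made concrete with a small sketch: annotating an alert with its ATT&CK technique ID so analysts can search and pivot using shared vocabulary. The alert structure and function are invented for illustration; the technique IDs and names are taken from the public ATT&CK knowledge base.

```python
# Illustrative only: annotating an alert with ATT&CK identifiers so that
# descriptions and searches use a shared, vendor-neutral vocabulary.
# IDs and names below follow the public ATT&CK knowledge base.
ATTACK_NAMES = {
    "T1003": "OS Credential Dumping",
    "T1059": "Command and Scripting Interpreter",
    "T1566": "Phishing",
}

def describe(alert):
    """Render an alert using its ATT&CK technique ID and canonical name."""
    tid = alert["technique_id"]
    name = ATTACK_NAMES.get(tid, "Unknown technique")
    return f"{tid} ({name}): {alert['summary']}"

alert = {"technique_id": "T1566", "summary": "suspicious inbound attachment"}
print(describe(alert))  # -> T1566 (Phishing): suspicious inbound attachment
```

Because the ID is standardized, the same `T1566` string works as a search parameter across tools, reports, and threat intelligence from different vendors.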
10. It is hard not to get confused by the FUD!
- MITRE does not score, rank, or provide side-by-side comparisons of products, so unlike other tests or industry analyst reports, there is no set of “leaders” identified.
- As this evaluation assesses multiple factors, there are many different ways to view, interpret and present the results (as we did here in this blog).
- It is important that individual organizations understand the framework, the evaluation, and most importantly what their own priorities and needs are, as this is the only way to map the results to the individual use cases.
- Look to your vendors to help explain the results, in the context that makes sense for you. It should be our responsibility to help educate, not exploit.