Can you trust threat intelligence from threat sharing communities? | AT&T ThreatTraq

Every week the AT&T Chief Security Office produces a series called ThreatTraq with helpful information and news commentary for InfoSec practitioners and researchers. I really enjoy them; you can subscribe to the YouTube channel to stay updated. This is a transcript of a recent feature on ThreatTraq. The video features Jaime Blasco, VP and Chief Scientist, AT&T Cybersecurity, Stan Nurilov, Lead Member of Technical Staff, AT&T, and Joe Harten, Director, Technical Security.

Stan: Jaime. I think you have a very interesting topic today about threat intelligence. 

Jaime: Yes, we want to talk about how threat intelligence is critical for threat detection and incident response, but also how, when threat actors monitor the indicators and information being shared, that same threat intelligence can actually be bad for companies. So we are going to share some of the experiences we have had with managing the Open Threat Exchange (OTX) - one of the biggest threat sharing communities out there.

Stan: Jaime mentioned that they have so many threat indicators and so much threat intelligence as part of the OTX platform.

Jaime: We know attackers monitor these platforms and adjust their tactics, techniques, and probably their infrastructure based on what cyber security companies share about their activities in blog posts and other reporting.

An example is from September 2017. We were tracking APT28 using some of the infrastructure and some of the techniques that were publicly known. Then another cyber security company published content about that, and APT28 became much more difficult to track.

The other example is APT1. If you remember the APT1 report that Mandiant published in 2013, that basically made the group disappear from the face of the earth, right? We didn't see them for a while, and then they changed their infrastructure and a lot of the tools they were using, and they came back in 2014. So we can see that that threat actor disappeared for a while, changed and rebuilt, and then came back. We also know that attackers can try to publish false information on these platforms, so that's why it's important not only that those platforms are automated, but also that there are human analysts who can verify that information.

Joe: It seems like you have to have a process of validating the intelligence, right? I think part of it is you don't want to take this intelligence at face value without having some expertise of your own that asks, is this valid? Is this a false positive? Is this planted by the adversary in order to throw off the scent?

I think it's one of those things where you can't automatically trust threat intelligence. You have to do some of your own diligence to validate the intelligence, make sure it makes sense, make sure it's still fresh and still good. This is something we're working on internally - creating those other layers to validate it and get better value from our threat intelligence.
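
That kind of validation layer can be partly automated before an analyst ever looks at an indicator. As a rough illustration only - the field names, thresholds, and scoring below are hypothetical and not how AT&T or OTX actually implement it - a minimal sketch might discard stale indicators and hold back anything seen by only a single source for human review:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical indicator records; real feeds (e.g. OTX pulses) carry much richer metadata.
INDICATORS = [
    {"value": "203.0.113.10", "type": "IPv4", "sources": {"otx", "internal"},
     "last_seen": datetime(2019, 3, 1, tzinfo=timezone.utc)},
    {"value": "198.51.100.7", "type": "IPv4", "sources": {"otx"},
     "last_seen": datetime(2017, 9, 1, tzinfo=timezone.utc)},
]

MAX_AGE = timedelta(days=90)   # "is it still fresh?"
MIN_SOURCES = 2                # "is it corroborated, or possibly planted?"

def validate(indicator, now=None):
    """Return (accepted, reason) for a single indicator."""
    now = now or datetime.now(timezone.utc)
    if now - indicator["last_seen"] > MAX_AGE:
        return False, "stale - drop or down-weight"
    if len(indicator["sources"]) < MIN_SOURCES:
        return False, "uncorroborated - queue for human analyst review"
    return True, "accepted"

if __name__ == "__main__":
    for ind in INDICATORS:
        ok, reason = validate(ind)
        print(f'{ind["value"]}: {reason}')
```

The point of the sketch is the shape of the process, not the specific thresholds: automated checks filter out the obviously stale or unsupported indicators, and the human analysts Jaime mentions handle whatever can't be corroborated automatically.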

Jaime: The other issue I wanted to bring to the table is what we call false flag operations - that's when an adversary or a threat actor studies another threat actor and tries to emulate their behavior, so when companies try to do attribution, it's much harder, right? We saw this in some of the Lazarus campaigns that were targeting banks. They were trying to look like they were Russian, but it was clear that Lazarus was behind them. They were trying to confuse cyber security companies, planting some false flags here and there.

Joe: So, Jaime, are there any techniques that you could recommend for finding false flags? Is it something you get better at finding over time? What could you share in that area?

Jaime: It's extremely hard and, as we always say, attribution is really hard and no one can really do attribution well. And there is no one thing you can do. What I have seen in the past is that as you are analyzing a certain adversary, if you see something off, there is a red flag in your head saying, "Wait a minute, I haven't seen that before. It's really weird that this actor in particular is using that technique or this piece of infrastructure." So it's more of an art than a science.

Stan: There's an old trick that detectives use when they're analyzing a crime scene: when they report about it publicly, they don't actually disclose everything about the crime. That way, when they do catch the bad guy, only the bad guy can confirm what really transpired, and nobody else can claim credit for it. I know from analyzing malware for a long time that there are some things you tend to see - different adversaries use certain tactics - but they're not worthy of publishing. You can't really talk about them or describe them. But when you see them and, as Jaime mentioned, they don't align - this adversary is trying to use the same tactic, but it's not quite the same as what you saw before - that's probably a good way to think about the false flags.

Joe: Cool, thanks. I think it's an interesting discussion to understand the motivations behind threat intelligence and also what factors go into validating it.
