Hackers Using AI? An Increase in the FUD Factor

January 29, 2018  |  Jason Kichen

It’s hard to envision hackers, whether skiddies, APTs, or anything in between, using any sort of artificial intelligence (AI) or machine learning (ML) to attack a target network. Despite the availability of these sophisticated technologies, the most simplistic attack tactics continue to work. Enterprises aren’t patching known vulnerabilities; freely available malware can run in memory undetected; users continue to click on links they receive in email or enable macros on that innocent-looking Office document; and internal network logs are often not collected, and even more rarely kept for any meaningful period. If these methods work, why would adversaries turn to more complex solutions like AI or ML?

Looking back on 2017, perhaps the biggest takeaway is that the most obvious methods still work. Adversaries seek the greatest mission gain with the lowest amount of resources expended and equities exposed. For example, Equifax wasn’t pwned by a fancy zero-day exploit or an insider with a USB drive; it was breached through a known Apache Struts vulnerability that had gone unpatched for months. PII on millions of consumers wasn’t culled from S3 buckets because Amazon’s infrastructure was hacked by an APT; the buckets were simply misconfigured and left open to the world. WannaCry wasn’t the result of a zero-day vulnerability; Microsoft had patched the underlying flaw months before the outbreak. And people (amazingly) clicked Yes to download a fake update to Adobe Flash, giving us BadRabbit! Sticking with what works continues to pay off for all adversaries, irrespective of their resources, motives or intent. So, what’s with the fear mongering over hackers using AI and ML to attack their targets?

AI (by which I mean both machine learning and AI in general) is the gift that keeps on giving. Most in the InfoSec community agree that AI has its place in the defense of the enterprise. The problem is that few people understand how AI works or how best to apply it, and many cybersecurity companies take advantage of this by making fancy-sounding claims about the number of models they apply to the data or the types of mathematics they use to generate results. These claims generally go hand-in-hand with a dark-themed user interface featuring some sort of spinning globe or pew-pew map. And while defenders work to sift through the marketing blather and outrageous claims about cybersecurity products that use AI, some in the security world extend the FUD even further: what could be better for sowing fear and confusion than claiming that hackers are now using AI to attack your network? The more observant in the InfoSec community have noticed that this language tends to originate with companies that stand to profit from that very same FUD.

This FUD spreading takes a few different forms. One is the poll: ask how many people believe hackers will use AI, and in several such polls more than 50 percent of respondents have agreed that this is a real threat. For the life of me, I can’t understand why. The other is the direct claim, made by companies in sponsored posts on various InfoSec news sites or in interviews with company executives. There have been claims of adversaries detected and intrusions executed using AI; while this may come to pass in the future, it’s incredibly unlikely any time soon. There are simply too many ways for adversaries to attack networks and accomplish their objectives using far simpler and less risky tactics. An adversary who has mastered the use of AI in their operations would reserve it for the hardest of hard targets, and even then, they’re likely to find an easier way to achieve their objective.

Yet it’s important to note that the academic and security-minded research into hackers’ use of AI is real and important. Adversarial machine learning, which studies how attackers can fool or degrade ML models, is one angle; this work helps us understand the capabilities and limitations of various machine learning strategies. Security research presented at conferences and in articles about how hackers could use AI in furtherance of offensive operations is likewise fascinating, and it gives us a sense of how strategies and capabilities could develop over time.
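
To make "adversarial machine learning" concrete, here is a minimal sketch of one classic technique from that literature, the fast gradient sign method (FGSM), applied to a toy logistic-regression classifier. Everything in it (the model, the weights, the epsilon) is a hypothetical illustration for this article, not anything drawn from a real product or incident:

```python
# A minimal, self-contained FGSM sketch against a toy logistic-regression
# classifier. The "trained" weights are random stand-ins for illustration.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=10)  # hypothetical trained weights
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    # Probability the toy model assigns to the "positive" class.
    return sigmoid(x @ w + b)

def fgsm(x, y, epsilon):
    # FGSM perturbs x in the direction that *increases* the loss for the
    # true label y. For logistic regression with cross-entropy loss, the
    # gradient with respect to x is (p - y) * w.
    grad = (predict(x) - y) * w
    return x + epsilon * np.sign(grad)

# A sample the toy model confidently labels positive (true label y = 1)...
x = 0.2 * w + 0.01 * rng.normal(size=10)
print(f"before: {predict(x):.3f}")  # should be well above 0.5

# ...flipped past the 0.5 decision threshold by a small worst-case nudge.
x_adv = fgsm(x, y=1, epsilon=0.5)
print(f"after:  {predict(x_adv):.3f}")  # should land below 0.5
```

The point of such research isn’t a working attack tool; it’s that a model’s own gradients tell you exactly how to nudge an input past its decision boundary, which is why understanding these limitations matters for anyone deploying ML defensively.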

Where does this leave us? For defenders, it’s important to understand both your (potential) adversary and your threat model. For the time being, the best strategies continue to be the ones that experts across the InfoSec world continue to harp on: have sufficient network visibility; collect and store relevant log data; patch and update often; use sufficiently complex passwords; don’t use known-vulnerable software; don’t click on links in emails; don’t download files from sketchy websites; and don’t install programs you don’t know or trust. Meanwhile, deploy various security tools to build out a defense-in-depth strategy, layering your defensive capabilities so that you don’t create a hard outside with a squishy inside. Log and instrument appropriately so that (defensive) AI can be deployed where it provides the greatest value: inside the network, maintaining an unblinking, ever-analytical eye on the voluminous logs the network generates. Stop worrying about far-fetched and as-yet-unproven what-ifs and focus on the core fundamentals that are far more likely to get you pwned today, tomorrow, or next month.
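
As a rough illustration of what that "defensive AI inside the network" can look like, here is a hedged sketch using unsupervised anomaly detection over per-host features aggregated from logs. The feature set and numbers are invented for the example; a real deployment would engineer its features from your own log pipeline:

```python
# A minimal sketch of anomaly detection over hypothetical per-host
# features derived from collected logs (e.g. logons/hour, MB sent out,
# distinct destination ports). All values here are made up.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# 500 hosts behaving "normally"...
normal = rng.normal(loc=[5, 50, 8], scale=[2, 15, 3], size=(500, 3))
# ...plus one host behaving very strangely.
odd = np.array([[40, 900, 120]])
hosts = np.vstack([normal, odd])

# Fit an unsupervised outlier detector; contamination is a guess at
# how much of the fleet might be anomalous at any given time.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(hosts)

# predict() returns -1 for outliers; the oddball host (index 500)
# should be among the hosts flagged for analyst triage.
labels = model.predict(hosts)
print("flagged hosts:", np.where(labels == -1)[0])
```

Note that none of this works without the fundamentals above: if the logs aren’t collected and kept, there’s nothing for the model to watch.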
