Recently, the Defense Advanced Research Projects Agency (DARPA) announced a multi-year investment of more than $2 billion in new and existing artificial intelligence programs called the “AI Next” campaign. Agency director Dr. Steven Walker explained the implications of the initiative: “We want to explore how machines can acquire human-like communication and reasoning capabilities, with the ability to recognize new situations and environments and adapt to them.”
Indeed, artificial intelligence (AI) and the related field of machine learning (ML) have emerged as hot topics in the emerging technology and cybersecurity communities. They are about recognizing “new situations and environments” and adapting to them. According to KPMG, AI was a major focus area of global venture capital investment in 2017, drawing over $12 billion and doubling 2016’s volume. Many of those investments involved information security. Now that DARPA’s investment (which is directed at much more than cybersecurity uses) has been added to the money trail, there is no doubt AI will be part of our cybersecurity future.
There is evidence that AI and ML can be valuable tools for navigating the cybersecurity landscape. Specifically, they are being used to help protect against increasingly sophisticated malware, ransomware, and social engineering attacks. AI’s capabilities in contextual reasoning can be used to synthesize data and predict threats.
AI and ML may become new paradigms for automation in cybersecurity, enabling predictive analytics that draw statistical inferences to mitigate threats with fewer resources.
In a cybersecurity context, AI and ML can provide a faster means to identify new attacks, draw statistical inferences, and push that information to endpoint security platforms. This is especially important given the major shortage of skilled cybersecurity workers and the growing attack surface. According to Cybersecurity Ventures CEO Steve Morgan, the human attack surface is expected to reach 6 billion people by 2022, and cybercrime damage costs are projected to hit $6 trillion annually by 2021. Against that backdrop, AI and ML cybersecurity capabilities are increasingly valuable.
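As one illustration of the kind of statistical inference described above, the minimal Python sketch below flags endpoints whose hourly event counts deviate sharply from a learned baseline using a simple z-score test. The host names, counts, and threshold are all hypothetical; real detection platforms use far richer models, but the underlying idea of scoring deviations from normal behavior is the same.

```python
from statistics import mean, stdev

def anomaly_scores(baseline, observed, threshold=3.0):
    """Flag observed event counts that deviate sharply from a baseline.

    `baseline` is a list of per-hour event counts considered normal;
    `observed` maps an endpoint name to its current count. Returns the
    endpoints whose z-score exceeds `threshold`, with their scores.
    """
    mu, sigma = mean(baseline), stdev(baseline)
    flagged = {}
    for endpoint, count in observed.items():
        z = (count - mu) / sigma
        if z > threshold:
            flagged[endpoint] = round(z, 1)
    return flagged

# Hypothetical hourly login-failure counts across a small fleet.
normal_hours = [4, 6, 5, 7, 5, 6, 4, 5, 6, 5]
current = {"host-a": 5, "host-b": 48, "host-c": 7}
print(anomaly_scores(normal_hours, current))  # only host-b stands out
```

A z-score above 3 roughly means the count is more than three standard deviations from the baseline mean, which is why host-b (48 failures against a baseline near 5) is flagged while the others are not.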
Former White House Cybersecurity Coordinator Rob Joyce said in a 2016 presentation at USENIX: “If you really want to protect your network, you have to know your network, including all the devices and technology in it.” A successful attacker will often “know networks better than the people who designed and run them.” With the right combination of data, computing power, and algorithms, artificial intelligence can help defenders gain far greater mastery over their own data and networks, detect anomalous changes (whether from insider threats or from external hackers), and quickly address configuration errors and other vulnerabilities.
Building on those insights, both AI and ML can be integral aspects of automation and adaptive networks. Applications for automated network security include self-encrypting and self-healing drives that protect data and applications. Cognitive automation can also enable horizon scanning and monitoring of networks, reporting deviations and anomalies in real time. That automation extends to automatic updating of defense framework layers (network, payload, endpoint, firewalls, and anti-virus) and to diagnostic and forensic analysis for cybersecurity. With those capabilities, it is no wonder venture capitalists and government agencies are interested in AI.
While AI and ML can be important tools for cyber defense, they are also a double-edged sword. They can rapidly identify threat anomalies and enhance defensive capabilities, but they can just as readily be used by threat actors. Adversarial nations and nefarious hackers are already using AI and ML to find and exploit vulnerabilities in threat detection models. They do so through a variety of methods, most often via automated phishing attacks that mimic humans and via self-modifying malware designed to fool or even subvert cyber-defense systems and programs.
Cyber criminals are already using AI and ML tools to attack and explore victims’ networks. Small businesses, organizations, and especially healthcare institutions that cannot afford significant investments in emerging defensive technologies such as AI are the most vulnerable. Extortion by hackers using ransomware and demanding payment in cryptocurrencies may become a more persistent and evolving threat. The growth of the Internet of Things will create many new targets for bad actors to exploit. There is urgency for both industry and government to understand the implications of emerging, morphing cyber-threat tools that include AI and ML, and to fortify against attacks.
Combating these machine-driven hacker threats requires being proactive: constantly updating and testing cybersecurity capabilities. Using AI and ML to recognize and predict anomalies against a database of behavioral patterns of malicious threats is one countermeasure. Employing adaptive data deception technologies to evade or fool potential attackers can also be effective.
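To illustrate the data-deception idea in the simplest possible terms, the sketch below plants decoy credentials (often called honeytokens) that no legitimate process should ever use, then raises an alert the moment one appears in a login attempt. All user names and IP addresses here are illustrative, and this is a bare-bones sketch rather than any particular deception product’s design.

```python
import secrets

def make_honeytoken(prefix="svc-backup"):
    """Generate a decoy username/password pair to seed into a credential store."""
    return (f"{prefix}-{secrets.token_hex(4)}", secrets.token_urlsafe(16))

class DeceptionMonitor:
    def __init__(self):
        self.tokens = set()   # planted decoy usernames
        self.alerts = []      # (username, source_ip) pairs that tripped a decoy

    def plant(self, username):
        self.tokens.add(username)

    def observe_login(self, username, source_ip):
        # Any use of a planted credential is, by construction, an attack:
        # no legitimate user or process ever knows these names.
        if username in self.tokens:
            self.alerts.append((username, source_ip))
            return True  # would trigger incident response in a real system
        return False

monitor = DeceptionMonitor()
decoy_user, _ = make_honeytoken()
monitor.plant(decoy_user)

monitor.observe_login("alice", "10.0.0.5")        # normal user: ignored
monitor.observe_login(decoy_user, "203.0.113.9")  # attacker tripped the decoy
print(monitor.alerts)
```

The appeal of the approach is its near-zero false-positive rate: because the decoy exists only to be stolen, any use of it is high-confidence evidence of compromise.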
When it comes to adapting to new, sophisticated digital environments, AI and ML become key tools, or innovative chess pieces, in a cybersecurity strategy game. Surviving and thriving will likely depend on the accuracy, speed, and quality of the algorithms and supporting technologies. To be competitive in such a game, we need to be vigilant, innovative, and one step ahead. Thankfully, investment trends show things heading in that direction.