News & Insights.

The latest industry innovations and insights.

AI and Machine Learning in cyber defence and attack.

Neil Lonergan

Co-Founder at FlexFibre

With worldwide business spend on Internet and network security forecast by Fortune to exceed the $100 billion mark by 2020, new technologies are being pressed into service by CTOs keen to stay at the forefront of cyber safety. It should therefore be no surprise that AI and Machine Learning are two key areas being considered to help transform the cyber security industry.

And transformation is needed. As of 2017, cyber crime was the fastest-growing form of crime in the US. In our increasingly cashless society, data is becoming a more valuable target than our wallets or handbags. And the pool of targets is always growing: in 2010, 2 billion people were online; in 2018 it is more than 4 billion. Very soon, it is estimated that there will be more ‘things’ connected to the Internet than people, generating data on everything from individuals to entire cities and states. As our lives and spending habits move from the physical to the virtual, so too do the criminals.

With the amount of data in the world growing, security teams are being swamped managing it. Attacks will continue. And sometimes they will succeed.

As a network supplier, FlexFibre sees this concern expressed ever more frequently amongst our clients. The SDN controllers on our leased lines do have security advantages over more remote forms of network design, but AI and Machine Learning appear to be the new paradigms the industry is moving toward.

With open source AI also becoming available to malicious actors, bringing AI into play for the defence is a necessity. There are two areas where AI and Machine Learning can help in cyber defence: software and hardware.

AI’s role in software to combat cyber attacks looks set to reduce both the number of attacks and the damage they can do. Threats will be detected earlier, because AI and Machine Learning systems will have a vast library of patterned behaviour to work with. It is also conceivable that when one attack is carried out, that data is shared across an AI network linking multiple companies together in a layer of security: an attack made against one target will be picked up and then anticipated by many others, making any future attack via the same technique harder to pull off. This will only be truly possible if the AI has the power to respond appropriately, perhaps via a Software Defined Network controller that can reconfigure the network hardware to limit the attack. AI and Machine Learning will also be able to offer in-depth post-attack forensic analysis (something many companies don’t currently have time to do unless the attack is severe), comparing it to other attacks and helping law enforcement agencies gather enough information for prosecution.
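As a rough illustration of this shared-defence idea, the sketch below (all class and function names are hypothetical, not a real product API) shows how one network's detection could publish an attack fingerprint that peer networks then use to block the same technique pre-emptively:

```python
import hashlib

class ThreatExchange:
    """Hypothetical shared signature store linking multiple defended networks."""
    def __init__(self):
        self.signatures = set()

    def publish(self, payload: bytes):
        # Reduce an observed attack payload to a shareable fingerprint.
        self.signatures.add(hashlib.sha256(payload).hexdigest())

    def is_known_attack(self, payload: bytes) -> bool:
        return hashlib.sha256(payload).hexdigest() in self.signatures

class NetworkController:
    """Stand-in for an SDN controller that can drop flows matching known attacks."""
    def __init__(self, exchange: ThreatExchange):
        self.exchange = exchange

    def inspect(self, payload: bytes) -> str:
        if self.exchange.is_known_attack(payload):
            return "blocked"   # in reality: push a drop rule to the switches
        return "allowed"

exchange = ThreatExchange()
site_a = NetworkController(exchange)
site_b = NetworkController(exchange)

attack = b"malicious exploit payload"
print(site_a.inspect(attack))  # first sighting at site A is not yet recognised
exchange.publish(attack)       # site A's detection is shared network-wide
print(site_b.inspect(attack))  # site B now blocks the same technique
```

A real deployment would fingerprint behavioural patterns rather than raw payloads, but the collaboration principle is the same: one detection hardens every participant.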

But the element of AI and Machine Learning that will perhaps aid cyber security most, in software terms, is reducing the human factor. The human tendency toward laziness and poor cyber hygiene is what leaves many networks open to exploitation. How many of us are guilty of not updating our software promptly enough? Or of not taking the time to choose a password genuinely different from our previous ones? The chance of a worker falling victim to a social-engineering attempt to gather confidential information for a later attack can also be reduced by AI monitoring that worker’s habits and identifying vulnerabilities in their behaviour. Has the caller asking to change the address on a credit card rung in from a recognised number? No? A red flag appears. And if chatbots start to handle more sensitive call requests, they won’t be emotionally susceptible to sob stories or a recording of a screaming child in the background as a ‘young mother’ needs to change the mobile phone number on her bank account, but is so ‘stressed’ she has forgotten her password.

The second broad advantage of employing AI and Machine Learning in the fight against cyber attacks concerns the hardware: the routers and chips themselves. A new school of thought on winning the war against hackers is not to keep applying one software patch after another, but to redesign the underlying electronics that run the systems. A recent article in New Scientist magazine by Sally Adee (11th August 2018) stated that 43% of hacks exploit hardware vulnerabilities: from buffer overflows that allow existing data in memory to be overwritten, to tricking a chip into carrying out a ‘speculative execution’ and hence ignoring the usual checks it would perform.

Perhaps the best known of these hardware vulnerabilities affect Intel CPUs, with bugs existing in their designs for the last two decades. According to Google, virtually every Intel processor released since 1995 is vulnerable to what have been termed ‘Meltdown’ and ‘Spectre’ attacks. Furthermore, researchers at the University of Michigan, presenting their findings at the 2016 IEEE Symposium on Security and Privacy (where they won ‘Best Paper’), detailed the creation of a microscopic hardware back door in a CPU that is virtually impossible to detect with any present hardware security analysis.

AI and Machine Learning could be brought into service at the manufacturing and quality checking stage of production before these chips are added to any device. They would be ideally suited to checking the chips at a microscopic level against the original designs of the chip, to highlight and locate any tampering in the actual build stage, which is where such vulnerabilities can be added. The chip might have a secure design, but if there are malicious actors in the build stage, then only a powerful quality assurance process can pick that up. In a sense, AI and Machine Learning could be employed to provide a provenance for the chips, perhaps at a location outside the original factory but before they are installed.

It is also clear that current chip design suffers from security oversights, as the Intel CPU case proves. What is next to impossible for a human designer and tester to identify might be much easier for AI and Machine Learning: chip designs can be tested to destruction far faster with this technology than by traditional means, which should expose any inherent weaknesses that hardware hacks could exploit.
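The idea of machine-driven design testing can be sketched with a toy example: randomised equivalence checking of a chip model against its golden specification. The "silicon bug" here is planted for illustration; real hardware verification works on register-transfer-level designs, not Python functions:

```python
import random

def reference_adder(a: int, b: int) -> int:
    """Golden specification: an 8-bit adder."""
    return (a + b) & 0xFF

def flawed_adder(a: int, b: int) -> int:
    """Hypothetical fabricated chip with a hidden trigger that corrupts results."""
    if a == 0xA5:                     # planted backdoor condition
        return (a + b + 1) & 0xFF
    return (a + b) & 0xFF

def fuzz(model, reference, trials=100_000, seed=0):
    """Hammer the model with random inputs, comparing against the specification."""
    rng = random.Random(seed)
    for _ in range(trials):
        a, b = rng.randrange(256), rng.randrange(256)
        if model(a, b) != reference(a, b):
            return (a, b)             # counterexample exposing the flaw
    return None

print(fuzz(flawed_adder, reference_adder))
```

A hundred thousand trials run in moments here; the promise of AI-assisted testing is applying that kind of relentless, automated probing to designs far too large for human testers to cover.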

But let’s examine the view from the opposition. In anticipating cyber crime, we need to understand how AI and Machine Learning will benefit the attackers too.

In our view, there will be two types of perpetrator here: the cyber criminal seeking to extort money and information from an organisation, and the state actor, the foreign regimes that try to steal scientific research and economic and military secrets.

The former will probably see only limited benefit from AI and Machine Learning. If these technologies are to be effective, they will likely need vast processing power and server capacity behind them to react in real time and continually learn. If AI programs are purchased off the Dark Web by one criminal from another (much as phishing kits are today), the software is unlikely to come with the recent learning updates it would need to penetrate an up-to-date, AI-monitored target network. It would almost certainly lack the server resources needed to launch an effective attack on a well-protected network from various locations; latency would also be more of an issue for the attacker than the defender, giving the defender that extra microsecond advantage.

This scenario changes with state actors. These groups would have the resources needed to carry out an attack on academic institutions and government networks with well maintained AI programs, possibly complemented by human intelligence about the target. Here, it is an old-fashioned arms race.

In conclusion, we at FlexFibre see a near future where AI sits alongside SDN controllers to monitor traffic and anticipate threats. We see it sharing intelligence with other networks about suspect origin points, and changing hardware configurations to mitigate attacks in near real time. We also think huge security benefits can be gained from bringing AI and Machine Learning into the design and testing of next-generation chips.

How do you see AI and Machine Learning changing the face of network security in the next few years?