
Staying Ahead of Machines (1/3)

Why you haven’t been replaced by a robot.

From Skynet to reality

Artificial Intelligence. Think about those two words: what do they bring to mind? If you (like me) are like the vast majority of people, you think immediately of Skynet, of various Hollywood movies and of machines. Dangerous, people-killing machines.

I wonder if this will change as the kids growing up now are introduced to AI and Machine Learning as part of reality, rather than the way I was introduced to it: as part of a dystopian future vision. An idea implanted by Hollywood and, like most ideas implanted that way, one that takes every bit of “artistic license” with scientific fact.

Let’s start with the term AI.  Artificial Intelligence, as science defines it, is not the complete recreation of the human brain in machine form but instead “the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.”[1]

The keyword here is tasks. And so, when we look at Artificial Intelligence and its use in cybersecurity, we have to understand what those tasks could be and – more importantly – what those tasks cannot be. Related to that Hollywood perception, my own opinion is that many companies are guilty of exploiting it for marketing purposes, using phraseology that suggests they are no longer supplying computer software but a virtual team member who will be there to support your security operations and work independently. That is simply not true.

AI, your new co-worker?

And if only it were. Unlike some industries that feel threatened by automation (think jobs such as waiters, shelf fillers and bar staff), the cybersecurity industry surely has one of the biggest skills shortages of any sector. Read any independent study on the subject and a bleak picture is painted, with unfilled security roles numbering anywhere from the hundreds of thousands to the millions (depending on the study and some regional variation).

And when we look at studies such as the (ISC)² Cybersecurity Workforce Study 2019[2] (which, by the way, estimates a shortage of around 291,000 cybersecurity professionals), we see that the skills shortage is concentrated in complex areas such as:

  • Risk assessment, analysis and management
  • Governance, risk management and compliance
  • Security and threat intelligence analysis

These are three areas that inherently involve human input: weighing up sets of data and making decisions based on that data, applying the context of the business they operate within. Handing that over to a machine that largely works on statistical data simply would not work (sorry folks, the AI is just not there yet).

People make “tough calls” every single day in cybersecurity, and they do so with a sense of responsibility; sometimes you just have to “go with your gut”. Machines do not have this option, so what is it that makes humans this way? If we knew, we would be able to create machines that could “think outside the box”.

A matter of consciousness

What inspired my research into this area was not actually cybersecurity itself but listening to a podcast by Joe Rogan. Joe is a stand-up comedian, mixed martial arts commentator and generally a cool dude. He also gets smart people on his podcast so he can explore subjects he is interested in and help the layperson understand them too. One such episode featured Sir Roger Penrose. Sir Roger worked with Stephen Hawking on black hole theories, and one of his major focuses now, as part of The Penrose Institute, is the study of human consciousness. What makes us different from machines? To sum up the institute’s mission[3]:

“We will use mathematicians to look at the nature of complex creative tasks, and devise puzzles that require human creative intelligence. These creative tasks should be non-computable so that we are sure we are looking at types of thinking that could not be performed by a computer. We will publish these non-computable puzzles in newspapers, just as Alan Turing did to find the code breakers of Bletchley Park. Our objective is to find the areas, linked regions and mechanisms that provide humans their creative power by using modern imaging techniques from neuroscience, such as MEG, fMRI and multichannel EEG.”

This included posting, in a number of academic journals, a chess position designed to defeat computers but be solvable by humans. The idea is then to scan the brains of people who solve it, to try to understand where these “eureka” moments come from. And what happens when a computer tries to solve it?

“A chess computer struggles because it looks like an impossible position, even though it is perfectly legal. The three bishops force the computer to perform a massive search of possible positions that will rapidly expand to something that exceeds all the computational power on planet earth.”[4]

This was the bit that blew my mind, but it also brought into sharp focus why we cannot use AI to solve similarly complex problems in our everyday work in cybersecurity. This is a single chess problem. Now imagine all the variables involved in a complex cybersecurity incident.
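To get a feel for how quickly exhaustive search explodes, here is a minimal back-of-the-envelope sketch in Python. The numbers are illustrative assumptions (an average branching factor of roughly 30 legal moves per chess position, and a wildly generous 10^18 positions evaluated per second), not measurements:

```python
# Back-of-the-envelope illustration of combinatorial explosion in game-tree search.
# Both constants below are illustrative assumptions, not measured values.

BRANCHING_FACTOR = 30         # assumed average number of legal moves per position
POSITIONS_PER_SECOND = 1e18   # generously assumed combined speed of all computers

SECONDS_PER_YEAR = 60 * 60 * 24 * 365

for depth in (10, 20, 30, 40):
    positions = BRANCHING_FACTOR ** depth  # naive exhaustive search to this depth
    years = positions / POSITIONS_PER_SECOND / SECONDS_PER_YEAR
    print(f"depth {depth:>2}: ~{positions:.1e} positions, ~{years:.1e} years to search")
```

Even under those absurdly optimistic assumptions, searching forty moves deep would take on the order of 10^33 years; a human solves the Penrose position by recognising a pattern, not by enumerating moves.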

There is no doubt, then, that machines aren’t ready to replace us yet. So when a company comes along with shiny new tech that claims to replace whole teams of us, we need to be able to see through it. So what should we be asking them? And where can we use AI in our everyday operations?

We’ll explore these questions in parts 2 and 3 of this blog series.


[1] Oxford English Dictionary

[2] (ISC)² Cybersecurity Workforce Study 2019

[3] Penrose Institute

[4] ChessBase
