Less Machine Learning and More Human Learning please


Cybersecurity - Comical or Depressing?

The information security industry can be comical at times and quite depressing at others. We hear of breach after breach, and if you’re like me, your phone never stops ringing with vendors promising the next “must-have” product to protect your network. All this against a backdrop of technical jargon: threat vectors, APTs, cryptominers, SQLi attacks, and social engineering.

Whilst value-adding security vendors are essential to have on your team, not all vendors are true value-add partners. The last straw for me is the number of security products now claiming expertise in machine learning and artificial intelligence (let’s throw blockchain and IoT in there too, just because we can). Whilst I welcome the sensible application of these technologies, there is so much we can do to advance information security with... well... human intelligence and human learning. After all, cybersecurity is largely a people issue, so why are we ignoring the human element of learning from failure?

A Security Evolution or Infinite Loop?

Throughout my working life I have believed in continuous improvement through learning. I recall a new product launch back in 2005 where the first version took 18 months to deliver. Over the following year I optimised the process to the point that implementation timeframes fell to 12 weeks! This was only possible by taking a cold, hard look at what didn’t work well, identifying which areas of the deployment were dogged by delays, and working out how to shorten them. Each subsequent deployment improved by leveraging lessons learned from the previous ones. It was a gradual but persistent evolution.

Programmers are familiar with the concept of an infinite loop: a programming error that turns software into a zombie application. It gets stuck, never advances, and consumes intensive resources in the process, starving other legitimate processes and ultimately destabilising the entire system. I suggest that unless we start learning from failures, the information security sector will remain firmly stuck in an infinite loop.
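For non-programmers, here is a minimal Python sketch of the bug (the function and names are hypothetical, purely for illustration): the loop’s exit condition depends on a counter that nothing inside the loop ever updates, so the program spins forever without making progress.

```python
def drain_queue(items):
    # Intends to process every item, but contains a classic infinite-loop bug.
    remaining = len(items)
    processed = 0
    while remaining > 0:
        processed += 1
        # Bug: nothing in the loop body ever changes `remaining`, so the
        # condition `remaining > 0` stays true forever and the loop never
        # exits, consuming CPU while achieving nothing new on each pass.
        # The fix is a single line: `remaining -= 1`.
    return processed
```

One missing line keeps the program trapped, doing the same work over and over; an industry that never feeds incident data back into its defences behaves exactly the same way.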

The 2017 Equifax breach was the result of incomplete patching. The 2015 TalkTalk breach was the result of a lack of input validation. These are not new causes. Security incidents have been caused by these gaps for years, yet organisations still fall victim to what should be inherent practice for protecting networks.
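To make the TalkTalk lesson concrete: the widely reported root cause was SQL injection through unvalidated input. The following Python sketch (using the standard library’s sqlite3 module, with a made-up table and payload) contrasts concatenating untrusted input into a query with binding it as a parameter; the latter is the basic discipline whose absence enables SQLi.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

user_input = "x' OR '1'='1"  # a classic SQL-injection payload

# Vulnerable: the payload is spliced into the SQL text, rewriting the
# WHERE clause so it matches every row in the table.
leaked = conn.execute(
    "SELECT email FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Safer: a parameterised query binds the input as data, never as SQL,
# so the payload is just an odd-looking name that matches nothing.
safe = conn.execute(
    "SELECT email FROM users WHERE name = ?", (user_input,)
).fetchall()

print(leaked)  # [('alice@example.com',)] - the injection succeeded
print(safe)    # [] - the payload was treated as a literal string
```

Parameterised queries have been standard guidance for well over a decade, which is exactly why seeing this root cause recur is so depressing.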

Some Science Please

The last time I checked, computer science was the discipline we work within. So where is the science in cybersecurity? If you recall your school science lessons, they were heavily focused on experiments: adjusting your hypothesis based on the data an experiment produced and repeating until new learning or validation emerged. An unexpected outcome was not seen as a failure; it was a basis for gathering data so you could form a better hypothesis and repeat the process.

In cybersecurity we have no such approach. We have standards and techniques, and good practice applies a risk-based approach, which is usually qualitative. What if we applied a more scientific approach: analysing the incidents that occur, adjusting our defences based on the new data, and then repeating, progressing the evolution?

This approach to learning is highly mature in the aviation industry. As a result, air travel is today statistically safer than other forms of transport, but it wasn’t always that way; at one point (particularly in the 1970s) air travel was considerably riskier than it is today. How did the airline industry achieve this level of safety? By meticulously learning from every incident that occurred in the industry, sharing those lessons publicly and, most importantly, fostering a culture of learning from failures.

In his book, “Black Box Thinking”, Matthew Syed describes the principles of open-loop thinking and how each failure becomes an opportunity to improve safety in the airline industry. We can, and should, do the same in the cybersecurity world. Unless we adopt a similarly scientific approach, the cybersecurity industry will remain stuck in an infinite loop.

Start Here

There are many high-quality reports published each year outlining cybersecurity threats and incidents. The Verizon Data Breach Investigations Report and the ENISA Threat Landscape report are two must-reads. Both are published annually and, although fantastic in content, they don’t necessarily offer a direct learning “call to action” based on the incidents. This is why I have created CyberFailures Reports.

Enter CyberFailures Reports

My goal in publishing CyberFailures Reports is to embrace learning from security incidents. We can all learn from security incidents, even when they occur in other organisations. In fact, it is far better to learn from other organisations’ incidents than to rely solely on those within your own. By assessing every publicly disclosed incident we can accelerate learning across our industry, and you can take these lessons and improve your own organisation’s defences.

In the first release of CyberFailures I focus on training, delivered via the Pluralsight platform, that is directly applicable to incidents that have occurred: take the publicly disclosed incidents, look at the root causes, and ensure your team is trained to reduce the risk of your organisation falling victim to the same ones. Please take a look, and help start a process of sustainable evolution for cybersecurity whilst improving the security of your own organisation.