In this two-part series, elevenM’s Tessa Loftus looks at ways to measure, and improve the measurement of, security awareness programs.
Part One: metrics
As privacy and cyber security become increasingly important to the successful management of any organisation, there is a corresponding increase in the importance of security awareness programs.
But while there are many ways to measure privacy and cyber security programs, measuring the impact of awareness programs is considerably more difficult. Because privacy and cyber security outcomes can be driven by behavioural factors, external factors, or both, it is hard to assess what is working — no data breaches might indicate excellent awareness among your employees, or it may simply mean that you haven’t been targeted by cyber attackers (yet).
So, how do we measure the effectiveness of what we implement? Measurement is necessary for proving the value of any program, as well as for improving it. And while it may be difficult, it’s not impossible.
Metrics (which we will deal with here) are one component of measuring effectiveness. However, they are not the whole picture — other components of measuring effectiveness are less data-based, and focus more on trends, culture, and tracking maturity over time.
When it comes to awareness programs, it’s hard to talk about measurement without talking about ways to improve effectiveness — but the focus here is on how changes in approach may improve your ability to identify and assess impact, improvement and, most importantly, changes in behaviour.
Participation vs impact
One of the most commonly used metrics for measuring a security awareness program, and for demonstrating compliance with regulatory requirements, is participation. Participation primarily measures:
- completion of training modules
- participation in phishing drills
- attendance at workshops or awareness events.
Obviously, participation does not measure impact — a person may ‘complete’ mandatory training by randomly ticking boxes on the multiple-choice quiz until they eventually achieve a pass mark. However, participation is a foundational component of other, less measurable impacts and should not be dismissed — it tells you your reach, and it can be combined with other factors to become a more nuanced metric. Essentially, participation metrics give you a base to build from.
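As a concrete illustration, here is a minimal sketch of how raw participation records might be turned into a reach metric. It is illustrative only: the data shape, field names and departments are all invented for the example.

```python
from collections import defaultdict

# Hypothetical export of training-completion records:
# (employee_id, department, completed). Field names are invented.
records = [
    ("e001", "Finance", True),
    ("e002", "Finance", False),
    ("e003", "Engineering", True),
    ("e004", "Engineering", True),
]

def reach_by_department(records):
    """Completion rate per department: a base participation metric."""
    totals = defaultdict(lambda: [0, 0])  # department -> [completions, headcount]
    for _, dept, completed in records:
        totals[dept][0] += int(completed)
        totals[dept][1] += 1
    return {dept: done / count for dept, (done, count) in totals.items()}

print(reach_by_department(records))  # {'Finance': 0.5, 'Engineering': 1.0}
```

On its own this only tells you who showed up, but it gives you the denominator to combine with the quality signals discussed below.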
Making it more complex
There are ways to improve the data you receive from these traditionally participation-based metrics.
Training modules
Multiple-choice assessments at the end of a training module leave room for lucky guesses, and in fact many organisations deliberately keep the quiz simple, using it as an opportunity to reinforce the message rather than as a way to assess knowledge. This is a valid approach, and speaks to the fact that people often need many repetitions to truly learn something new. However, while it may help with learning, it doesn’t help us to measure impact.
There are a few methods that can be used to improve your ability to assess the effectiveness of training modules, including:
- Get rid of the multiple-choice: asking more open-ended questions with a free text field, where the person has to come up with an answer, means that if they get the question right, they knew the answer.
- Make the questions more complex: asking slightly more complex questions that require reasoning will show which staff knew how to reach the correct answer (like showing your working in a maths test).
- Timed responses: timing how long it takes someone to respond can indicate their level of confidence in the answer (although timing can also add stress that reduces speed and reasoning, so it should be used cautiously).
- Self-rating for confidence: asking respondents to answer the questions, but also to rate their confidence in each answer before they are told whether it is right or wrong. People with low confidence (or high confidence and wrong answers) have not absorbed their education as fully as people with high confidence and correct answers (see the sketch after this list).
- Pre- and post-testing: asking people to complete awareness quizzes before and after a training module gives you a baseline and lets you see how much they’ve learnt from the training.
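To illustrate the last two ideas, here is a minimal sketch of how confidence-weighted scoring and pre/post comparison might be computed. The scoring rule and the normalised-gain formula are illustrative assumptions, not methods this article prescribes, and the data shapes are invented.

```python
# Hypothetical quiz results: each answer records whether it was correct
# and the respondent's self-rated confidence (0.0-1.0), captured before
# the answer was marked.
answers = [
    {"correct": True,  "confidence": 0.9},  # knew it, and knew they knew it
    {"correct": True,  "confidence": 0.3},  # right but unsure: weak absorption
    {"correct": False, "confidence": 0.8},  # confidently wrong: the worst case
]

def absorption_score(answers):
    """Reward confident correct answers, penalise confident wrong ones."""
    signed = [a["confidence"] if a["correct"] else -a["confidence"] for a in answers]
    return sum(signed) / len(signed)

def learning_gain(pre_correct, post_correct, total_questions):
    """Pre/post-testing: improvement as a share of the available headroom."""
    headroom = total_questions - pre_correct
    return (post_correct - pre_correct) / headroom if headroom else 0.0

print(round(absorption_score(answers), 2))  # 0.13
print(round(learning_gain(pre_correct=4, post_correct=8, total_questions=10), 2))  # 0.67
```

The gain calculation divides improvement by the marks a respondent could still have gained, so someone who started near the ceiling isn’t unfairly marked down.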
Obviously, all of these changes add time, money and complexity to the training process. Whether it is necessary to implement them will depend on a range of factors, such as your level of cyber and privacy risk and your pre-existing security culture (and any number of other organisation-specific issues).
Reporting suspicious emails
Phishing drills are one of the few awareness activities that can easily provide metrics. Broadly speaking, a gradual decrease in click rate and a gradual increase in report rate are indicative of an effective awareness program around phishing. However, there are other factors to consider.
Click rate vs report rate
Click rate shows the failure rate, so obviously you want a low click rate. However, click rate can be heavily influenced by a range of things, including work patterns and error rates: teams that receive a high volume of email, or that regularly engage with external organisations, face different conditions from teams that don’t, and these differences will affect click rate and may make direct comparisons between teams less scientifically accurate than they appear.
However, a high report rate is a positive indicator of awareness. A low report rate combined with a low click rate is ambiguous — people are not failing the drill, but they are also not taking active steps to report: maybe they recognised it as a phish, maybe they just didn’t bother to read the email. On the other hand, a high report rate shows good organisational awareness of phishing and the risks it presents.
It also follows that a high and/or increasing rate of people reporting suspicious emails, even if they are not attacks, indicates improvement in organisational awareness of phishing risk.
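As a concrete illustration of reading the two rates together, here is a minimal sketch that computes click and report rates per team from drill results. The teams and numbers are invented for the example.

```python
# Hypothetical phishing-drill results per team:
# emails sent, links clicked, emails reported.
drill = {
    "Finance":     {"sent": 120, "clicked": 18, "reported": 30},
    "Engineering": {"sent": 80,  "clicked": 4,  "reported": 8},
}

def drill_rates(results):
    """Click rate (failure) and report rate (positive awareness) per team."""
    return {
        team: {
            "click_rate": r["clicked"] / r["sent"],
            "report_rate": r["reported"] / r["sent"],
        }
        for team, r in results.items()
    }

for team, rates in drill_rates(drill).items():
    # Read the two numbers together: a low click rate with a low report
    # rate is ambiguous, while a low click rate with a high report rate
    # points to genuine awareness.
    print(f"{team}: click {rates['click_rate']:.0%}, report {rates['report_rate']:.0%}")
```

Remember the caveat above: differences in work patterns mean these team-level rates are better read as trends over successive drills than as a league table.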
The second instalment of this piece will look at some of the harder-to-measure awareness activities, the importance of a security culture, and using trends and multiple data points to get more information out of the data you have.
Photo credit: patricia serna on Unsplash.