Pete’s a hardened security professional. He’s been in this game for over 20 years and has the battle scars of many cyber uplift and remediation programs. He feels the pain of CISOs fighting the good fight.
I’m a former journo. Even though I’m no longer actively reporting, the craft matters to me and I get defensive and outraged in equal measure about the quality of print news.
Pete and I butted heads in recent days over the reporting of the NSW Auditor-General’s (AG) report into Transport for NSW and Sydney Trains.
The AG report, and much of the media coverage, was overtly critical. A “litany of cyber security weaknesses [identified in] a scathing review” was how the SMH described it.
Pete wasn’t overly happy about the coverage, feeling that both the media reporting and the AG report itself were unfair, particularly in the context of how organisations actually go about improving their cyber posture (more on this later).
While I defended the reporting as – for the most part – being a fair and straight write-up of the AG report, I have to concede that on the bigger point Pete was dead right.
There’s a problem in our industry, and in broader society, with how we often talk about and respond to reviews of organisations’ security programs. It’s not that we’re quick to talk about deficiencies that don’t exist or that aren’t serious, but that the way these reviews are presented drives a response and conversation that is counterproductive to good security outcomes.
There are no perfect security programs, or organisations with perfect security. The threat landscape changes daily and security uplifts require complex whole-of-organisation changes that necessarily take a long time. Even in the most mature organisations, “good security” is a work in progress, with gaps continuously identified and addressed. Any detailed review will find holes and inadequacies.
To be clear, this is not an argument to take lightly the adverse findings of a review. Arguably, these findings are a large part of why we do reviews, so that a measured and constructive response to them can lead to improved security.
But too often in our work at elevenM we see instances where the supercharged or out-of-proportion delivery of adverse findings leads to an equally supercharged response (sometimes in the form of its own large remediation program) that sees a sizeable redirection of resources, and ultimately the deferral or interruption of well-considered strategies or critical uplift projects.
We found it particularly uncomfortable that the AG report was based on a red team exercise. A red team exercise – where “authorised attackers” (in the words of the AG report) are given permission to try to break into systems – will always find security flaws. These exercises are typically conducted expressly to provide insights that security teams can learn from. To broadly publish those findings in the tone and manner the AG report has done didn’t strike us as constructive.