Let’s take this seriously

Why would it be offensive when someone tells you they care about the very thing you want them to care about?  When your behaviour harms another because you overlooked something important, isn’t it good to convey that you do in fact care about that thing?

This might seem intuitive in the context of personal relationships, but often falls flat when organisations talk about privacy and cyber security. This week – in Privacy Awareness Week – we remind ourselves that demonstrating a commitment to privacy goes beyond soundbites and snappy one-liners.

“[Insert company name] takes privacy and security seriously” is increasingly one of the more jarring (and ill-advised) things a company can say today, especially in the wake of a breach.

It doesn’t sit well with journalists. You can almost hear their collective sigh every time a media statement containing that phrase is launched from corporate HQ.

Yet companies do put it in there, and usually at the very top.

Earlier this year, TechCrunch journalist Zack Whittaker scoured every data breach notification in California and found a third of companies had some variation of this “common trope”.

Whittaker wasn’t impressed: “The truth is, most companies don’t care about the privacy or security of your data. They care about having to explain to their customers that their data was stolen.”

For years, companies adopted a cloak-and-dagger attitude to any public commentary about privacy and security. “We don’t discuss matters of security” was a handy way for corporate affairs teams to bat away pesky tech and infosec journos, much like they might say “the matter is before the courts” in other awkward contexts.

This approach began to fray as companies realised cyber security and privacy issues weren’t purely technical stories. Breached data impacted real people today. Vulnerable systems could affect people tomorrow. And the community was becoming more vocal and aware.

We began to see companies eager to show they cared. And so … “We take privacy and security very seriously.”

But why should that rankle so much?

Simply because we intuitively detect that something’s not right when a company, or a person in our life, glibly tells us they hold a position that contrasts with the evidence. It’s awkward.

Ask Mark Zuckerberg. Earlier this month, standing under a banner that read “the future is private”, the Facebook CEO proclaimed privacy was at the heart of Facebook’s new strategy. The awkwardness was so intense that Zuckerberg even sought to dissolve it with humour, rather unsuccessfully.

The gap between messages of care and diligence for data protection and what consumers actually experience doesn’t only relate to Facebook. 

A number of breaches are the result of insufficient regard by a company for how customer data is used – such as unauthorised sharing with third parties – or the result of an avoidable mistake – like failing to fix a security flaw in a server where the patch has been available for months. And when companies insist they care while simultaneously trying to evade their responsibilities, tempering a sense of cynicism becomes even harder.

The state of the cyber landscape contributes too. Threats are intensifying, more breaches are happening and there are now mandatory reporting requirements. Pick up a newspaper and odds are there’s a breach story in there. It’s not unreasonable for consumers to think there’s an epidemic of businesses losing sensitive data, yet somehow they’re all identically proclaiming to take data protection very seriously. It doesn’t add up.

At the same time, it should be possible for an organisation to affirm a commitment to data protection, even in the wake of a breach. Because it’s possible for a company to care deeply about privacy and security, to have invested greatly in these areas, and still be breached. Attackers are more skilled and determined, and it’s challenging to protect data that is everywhere thanks to the use of cloud technologies and third parties.

So we can cut organisations a little slack. But the way forward is not reverting to a catchy set of words alone.

As we learned from the 12-month review of the Notifiable Data Breaches scheme published this week by the Office of the Australian Information Commissioner, consumers and regulators want (and deserve) to see actions and responses that reflect empathy, accountability and transparency. They expect to see organisations show a genuine commitment to reducing harm, such as in the assistance they provide victims after a breach. A willingness to continuously update the public about the key details of a breach, and simple advice on what to do about it, also shows a genuine focus on the issue and a willingness to be transparent. And when company leaders are visible and take responsibility, it tells customers they will be accountable for putting things right.

Do these things, and there’s a better chance customers will take your commitment to privacy and security seriously.

Anti-Automation

You may think from the title we’re about to say how we oppose automation or think IT spend should be directed somewhere else. We are not. We love automation and consider it a strategic imperative for most organisations. But there is a problem: the benefits of automation apply to criminals just as much as they do to legitimate organisations.

Why criminals love automation

Success in cybercrime generally rests on two things: having a more advanced capability than those who are defending, and being able to scale your operation. Automation helps with both. Cybercriminals use automated bots (we term these ‘bad bots’) to attack their victims, meaning a small number of human criminals can deliver a large return. For the criminals, fewer people means fewer people to share in the rewards and a lower risk of someone revealing the operation to the authorities or its secrets to rival criminals. Coupled with machine learning, criminals can rapidly adapt how their bots attack a victim based on the experience of attacking other victims. As victims improve their security, the bots learn from other cases how to resume their attacks.

What attacks are typically automated?

Attacks take many forms but two stand out: financial fraud and form filling. For financial fraud, bad bots will exploit organisations’ payment gateways to wash through transactions using stolen credit card details. For major retailers, the transactions will typically be small (often $10.00) to test which card details are valid and working. The criminals then use the successful details to commit larger frauds until the card details no longer work. For form filling, bad bots will exploit websites that have forms for users to provide information. Depending on the site and the attack vector of the bot, form filling attacks can be used for a number of outcomes, such as filling a CRM system with dummy ‘new customer’ data, content scraping, and advanced DDoS attacks that, thanks to automation, can be configured to reverse engineer WAF rules to work out how to get through undetected.
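To make the card-testing pattern concrete, here is a minimal sketch of the kind of heuristic a fraud team might use to flag it: many small transactions spread across many distinct cards from a single source. The thresholds, field names and function are our own illustration, not any particular fraud product’s logic.

```python
from collections import defaultdict

# Illustrative thresholds -- a real system would tune these to its own traffic.
SMALL_AMOUNT = 15.00   # card-testing charges are typically tiny
MAX_CARDS_TESTED = 5   # distinct cards tried from one source before we flag it

def flag_card_testing(transactions):
    """Flag source IPs that fire small transactions across many distinct cards,
    a classic signature of automated card-testing bots."""
    cards_by_source = defaultdict(set)
    for txn in transactions:
        if txn["amount"] <= SMALL_AMOUNT:
            cards_by_source[txn["source_ip"]].add(txn["card_fingerprint"])
    return {ip for ip, cards in cards_by_source.items()
            if len(cards) > MAX_CARDS_TESTED}
```

A bot cycling six stolen cards through $10.00 test charges from one IP would be flagged, while a single large legitimate purchase would not. Real anti-fraud controls combine many more signals, but the shape of the detection is the same.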

Real business impact

The reason we at elevenM feel strongly about this is that we are seeing real business impact from these attacks. Take a simple metric like OPEX costs for web infrastructure: we have seen businesses dealing with such automated traffic have their infrastructure costs increase by 35%. There are clear productivity impacts from managing customer complaints about password lockouts, which can be crippling to high-volume, low-workforce businesses. And then there is fraud, something that impacts not only the business but the market and society as a whole.

How can we defend against them?

Traditional methods of blocking attack traffic, such as IP-based blocking, traffic rate controls, signatures and domain-based reputation, are no longer effective. The bots are learning and adapting too quickly. Instead, anti-automation products work by sitting between the public internet and the organisation’s digital assets. These products have their own algorithms to detect non-human traffic. The algorithms look at a variety of characteristics of the traffic, such as what browsers and devices the traffic is coming from, and can even assess the movement of the device to determine whether it looks human. If the product is not sure, it can issue challenges (such as a reCAPTCHA-style request) to confirm. Once the traffic has been evaluated, human traffic is allowed through and automated traffic is blocked.
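As a rough illustration of that allow / challenge / block decision flow, the sketch below scores a request on a few behavioural signals. The signals, weights and thresholds are ours for illustration only; commercial anti-automation products use far richer device and behavioural telemetry than this.

```python
def score_request(request):
    """Accumulate suspicion signals for a single request (illustrative only)."""
    score = 0
    ua = request.get("user_agent", "")
    if ua == "" or "headless" in ua.lower():
        score += 2   # missing or headless-browser user agent
    if not request.get("mouse_movement"):
        score += 1   # no human-like pointer/device movement telemetry
    if request.get("requests_per_minute", 0) > 120:
        score += 2   # faster than a person plausibly clicks
    return score

def decide(request):
    """Map the suspicion score to an action, mirroring the product behaviour
    described above: allow humans, challenge the unsure, block the bots."""
    score = score_request(request)
    if score >= 4:
        return "block"       # confidently automated
    if score >= 2:
        return "challenge"   # unsure: issue a reCAPTCHA-style challenge
    return "allow"           # looks human
```

The point of the middle tier is important: rather than guessing on ambiguous traffic, the product pushes the cost of proof back onto the client with a challenge only a human can cheaply pass.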

How can we deploy these defences?

elevenM has worked with our clients to deploy anti-automation tools. The market is still new, and as such the tools have a spectrum of effectiveness, as well as architectural impacts that require time and effort to work through. In an environment where time is short, this poses a significant transformation challenge. Having done this before and being familiar with the products out there, we can work with you to identify and deploy anti-automation protection tools along with the supporting processes. The key first step, as always in cyber security, is to look at your attack surface and the vectors that are most vulnerable to automated attacks, subject to a risk and cost assessment of what happens if attacks are successful. From there, we work with you to design and implement a protection approach.

Conclusion

Everyone is rightly focussing on automation and machine learning, but so are the criminals. It is crucial to look at your attack surface and identify where automated attacks are happening. There are now tools available to help significantly reduce the risks associated with automated cybercrime.

If you would like to discuss this further, please contact us using the details below.

Happy birthday Notifiable Data Breaches Scheme. How have you performed?

A year ago today, Australian businesses became subject to a mandatory data breach reporting scheme. Angst and anticipation came with its introduction – angst for the disruption it might have on unprepared businesses and anticipation of the positive impact it would have for privacy.

Twelve months on, consumers are arguably more troubled about the lack of safeguards for privacy, while businesses face the prospect of further regulation and oversight. Without a fundamental shift in how privacy is addressed, the cycle of heightened concern followed by further regulation looks set to continue.

It would be folly to pin all our problems on the Notifiable Data Breaches (NDB) scheme. Some of the headline events that exacerbated community privacy concerns in the past year fell outside its remit. The Facebook / Cambridge Analytica scandal stands out as a striking example.

The NDB scheme has also made its mark. For one, it has heralded a more transparent view of the state of breaches. More than 800 data breaches have been reported in the first year of the scheme.

The data also tells us more about how breaches are happening. Malicious attacks are behind the majority of breaches, though humans play a substantial role. Not only do about a third of breaches involve a human error, such as sending a customer’s personal information to the wrong person, but a large portion of malicious attacks directly involve human factors such as convincing someone to give away their password.

And for the most part, businesses got on with the task of complying. In many organisations, the dialogue has shifted from preventing breaches to being well prepared to manage and respond to them. This is a fundamentally positive outcome – as data collection grows and cyber threats get more pernicious, breaches will become more likely and businesses, as they do with the risk of fire, ought to have plans and drills to respond effectively.

And still, the jury is out on whether consumers feel more protected. Despite the number of data breach notifications in the past year, events suggest it would be difficult to say transparency alone had improved the way businesses handle personal information.

The sufficiency of our legislative regime is an open question. The ACCC is signalling it will play a stronger role in privacy, beginning with recommending a strengthening of protections under the Privacy Act. Last May, the Senate also passed a motion to bring Australia’s privacy regime in line with Europe’s General Data Protection Regulation (GDPR), a much more stringent and far-reaching set of protections.

Australian businesses ought not be surprised. The Senate’s intent aligns to what is occurring internationally. In the US, where Facebook’s repeated breaches have catalysed the public and polity, moves are afoot towards new federal privacy legislation. States like California have already brought in GDPR-like legislation, while Asian countries are similarly strengthening their data protection regimes. With digital protections sharpening as a public concern, a federal election in Australia this year further adds to the possibility of a strengthened approach to privacy by authorities.

Businesses will want to free themselves of chasing the tail of compliance to an ever-moving regulatory landscape. Given the public focus on issues of trust, privacy also emerges as a potential competitive differentiator.

A more proactive and embedded approach to privacy addresses both these outcomes. Privacy by design is an emerging discipline in which privacy practices are embedded from the outset. With privacy in mind early, new business initiatives can be designed to meet privacy requirements before they are locked into a particular course of action.

We also need to look to the horizon, and it’s not as far away as we think. Artificial intelligence (AI) is already pressing deep within many organisations, and raises fundamental questions about whether current day privacy approaches are sufficient. AI represents a paradigm shift that challenges our ability to know in advance why we are collecting data and how we intend to use it.

And so, while new laws introduced in the past 12 months were a major step forward in the collective journey to better privacy, in many ways the conversation is just starting.

The difference between NIST CSF maturity and managing cyber risk

Yesterday marked the fifth anniversary of what we here at elevenM think is the best cyber security framework in the world, the NIST Cybersecurity Framework (CSF). While we could be writing about how helpful the framework has been in mapping current and desired cyber capabilities or prioritising investment, we thought it important to tackle a problem we are seeing more and more with the CSF: its use as an empirical measurement of an organisation’s cyber risk posture.

Use versus intention

Let’s start with a quick fact. The CSF was never designed to provide a quantitative measurement of cyber risk mitigation. Instead, it was designed as a capability guide. A tool to help organisations map out their current cyber capability to a set of capabilities which NIST consider to be best practice.

NIST CSF ’Maturity’

Over the past five years, consultancies and cyber security teams have used the CSF as a way to demonstrate to those not familiar with cyber capabilities that they have the right ones in place. Most have done this by assigning a maturity score to each subcategory of the CSF. Just to be clear, we consider a NIST CSF maturity assessment to be a worthwhile exercise. We have even built a platform to help our clients do just that. What we do not support, however, is the use of maturity ratings as a measurement of cyber risk mitigation.

NIST CSF versus NIST 800-53

This is where the devil truly is in the detail. For those unfamiliar, NIST CSF maturity is measured using a set of maturity statements assessed against the Capability Maturity Model (CMM). (Note that NIST has never produced its own statements, so most organisations and consultancies have developed proprietary ones, elevenM included.) As you can imagine, the assessment performed to determine one maturity level against another is often highly subjective, usually via interview and document review. In addition, these maturity statements do not address the specific cyber threats or risks to the organisation; they are designed to determine whether the organisation has the capability in place.

NIST 800-53 on the other hand is NIST’s cyber security controls library. A set of best practice controls which can be formally assessed for both design and operating effectiveness as part of an assurance program. Not subjective, rather an empirical and evidence-based assessment that can be aligned to the CSF (NIST has provided this mapping) or aligned to a specific organisational threat. Do you see what we are getting at here?
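The contrast can be made concrete with a toy example. Below, a CSF-style assessment records one subjective maturity score per subcategory, while an 800-53-style assessment records evidence-based pass/fail results for design and operating effectiveness of individual controls mapped to those subcategories. The data structures and the specific mapping are our own simplified illustration, not NIST artefacts.

```python
# CSF-style maturity assessment: one subjective CMM score per subcategory,
# typically arrived at via interview and document review.
csf_maturity = {
    "PR.AC-1": 3,  # identities and credentials are managed
    "DE.CM-1": 2,  # the network is monitored
}

# 800-53-style control assessment: evidence-based tests of design and
# operating effectiveness for controls mapped to those subcategories.
control_tests = [
    {"control": "IA-2", "maps_to": "PR.AC-1", "design": "pass", "operating": "pass"},
    {"control": "IA-5", "maps_to": "PR.AC-1", "design": "pass", "operating": "fail"},
    {"control": "SI-4", "maps_to": "DE.CM-1", "design": "pass", "operating": "pass"},
]

def effective_controls(tests, subcategory):
    """Controls for a subcategory that passed both design and operating tests."""
    return [t["control"] for t in tests
            if t["maps_to"] == subcategory
            and t["design"] == "pass" and t["operating"] == "pass"]
```

Note what the toy data shows: PR.AC-1 can carry a healthy maturity score of 3 while a control underpinning it (here IA-5, authenticator management) fails in operation. That gap is exactly why a maturity rating alone is not a measurement of cyber risk mitigation.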

Which is the correct approach?

Like most things, it depends on your objective. If you want to demonstrate to those unfamiliar with cyber operations that you have considered all that you should, or if you want to build a capability, the CSF is the way to go. (Noting that doing a CSF maturity assessment without assessing the underlying controls limits the amount of trust stakeholders can place in the maturity rating.)

If however, you want to demonstrate that you are actively managing the cyber risk of your organisation, we advise our clients to assess the design and operating effectiveness of their cyber security controls. How do you know if you have the right controls to manage the cyber risks your organisation faces? We will get to that soon. Stay tuned.

Our thoughts on the year ahead

At elevenM, we love shooting the breeze about all things work and play. We recently got together as a team to kick off the new year, share what we’d been up to and the thoughts inspiring us as we kick off 2019. Here’s a summary…

Early in the new year, under a beating sun at the Sydney Cricket Ground, our principal Arjun Ramachandran found himself thinking about cyber risk.

“Indian batsman Cheteshwar Pujara was piling on the runs and I realised – ‘I’m watching a masterclass in managing risk’. He’s not the fanciest or most talented batsman going around, but what Pujara has is total command over his own strengths and weaknesses. He knows when to be aggressive and when to let the ball go. In the face of complex external threats, I was struck by how much confidence comes from knowing your own capabilities and posture.”

A geeky thought to have at the cricket? No doubt. But professional parallels emerge when you least expect them. Particularly after a frantic year in which threats intensified, breaches got bigger, and major new privacy regulations came into force.

Is there privacy in the Home?

Far away from the cricket, our principal Melanie Marks was also having what she describes as a “summer quandary”. Like many people, Melanie had her first extended experience of a virtual assistant (Google Home) over the break.

“These AI assistants are a lot of fun to engage with and offer endless trivia, convenience and integrated home entertainment without having to leave the comfort of the couch,” Melanie says. “However, it’s easy to forget they’re there and it’s hard to understand their collection practices, retention policies and deletion procedures (not to mention how they de-identify data, or the third parties they rely upon).”

Melanie has a challenge for Google in 2019: empower your virtual assistant to answer the question: “Hey Google – how long do you keep my data?” as quickly and clearly as it answers “How do you make an Old Fashioned?”.

Another of our principals and privacy stars Sheila Fitzpatrick has also been pondering the growing tension between new technologies and privacy. Sheila expects emerging technologies like AI and machine learning to keep pushing the boundaries of privacy rights in 2019.

“Many of these technologies have the ‘cool’ factor but do not embrace the fundamental right to privacy,” Sheila says. “They believe the more data they have to work with, the more they can expand the capabilities of their products without considering the negative impact on privacy rights.”

The consumer issue of our time

We expect to see the continued elevation of privacy as a public issue in 2019. Watch for Australia’s consumer watchdog, the Australian Competition and Consumer Commission, to get more involved in privacy, Melanie says. The ACCC foreshadowed this in December via its preliminary report into digital platforms.

Business will also latch onto the idea of privacy as a core consumer issue, says our Head of Product Development Alistair Macleod. Some are already using it as a competitive differentiator, Alistair notes, pointing to manufacturers promoting privacy-enhancing features in new products and Apple’s hard-to-miss pro-privacy billboard at the CES conference just this week.

We’ll also see further international expansion of privacy laws in 2019, Sheila says. Particularly in Asia Pacific and Canada, where some requirements (such as around data localisation) will even exceed provisions under GDPR, widely considered a high watermark for privacy when introduced last May.

Cyber security regulations have their turn

But don’t forget cyber security regulation. Our principal Alan Ligertwood expects the introduction of the Australian Prudential Regulation Authority’s new information security standard CPS 234 in July 2019 to have a significant impact.

CPS 234 applies to financial services companies and their suppliers. Alan predicts the standard’s shift to a “trust but verify” approach, in which policy and control frameworks are actually tested, could herald a broader move by regulators towards more substantive oversight of regulatory and policy compliance.

There’s also a federal election in 2019. We’d be naïve not to expect jobs and national security to dominate the campaign, but the policy focus given to critical “new economy” issues like cyber security and privacy in the lead-up to the polls will be worth watching. In recent years, cyber security as a portfolio has been shuffled around and dropped like a hot potato at ministerial level.

Will the Government that forms after the election – of whichever colour – show it more love and attention?

New age digital risks

At the very least, let’s hope cyber security agencies and services keep running. Ever dedicated, over the break Alan paid a visit to the National Institute of Standards and Technology’s website – the US standards body that creates the respected Cybersecurity Framework – only to find it unavailable due to the US government shutdown.

“It didn’t quite ruin my holiday, but it did get me thinking about unintended consequences and third party risk. A squabble over border wall funding has resulted in a global cyber security resource being taken offline indefinitely.”

It points to a bigger issue. Third parties and supply chains, and poor governance over them, will again be a major contributor to security and privacy risk this year, reckons Principal Matt Smith.

“The problem is proving too hard for people to manage correctly. Even companies with budgets which extend to managing supplier risk are often not able to get it right – too many suppliers and not enough money or capacity to perform adequate assurance.”

If the growing use of third parties demands that businesses re-think security, our Senior Project Manager Mike Wood sees the same trend in cloud adoption.

“Cloud is the de-facto way of running technology for most businesses.  Many are still transitioning but have traditional security thinking still in place.  A cloud transition must come with a fully thought through security mindset.”

Mike’s expecting to see even stronger uptake of controls like Cloud Access Security Brokers in 2019.

But is this the silver bullet?

We wonder if growing interest in cyber risk insurance in 2019 could be the catalyst for uplifted controls and governance across the economy. After all, organisations will need to have the right controls and processes in place in order to qualify for insurance in line with underwriting requirements.

But questions linger over the maturity of these underwriting methodologies, Alan notes.

“Organisations themselves find it extremely difficult to quantify and adequately mitigate cyber threats, yet insurance companies sell policies to hedge against such an incident.”

The likely lesson here is for organisations not to treat cyber insurance as a silver bullet. Instead, do the hard yards and prioritise a risk-based approach built on strong executive sponsorship, effective governance, and actively engaging your people in the journey.

It’s all about trust

If there was a common theme in our team’s readings and reflections after the break, it was probably over the intricacies of trust in the digital age.

When the waves stopped breaking on Manly beach, Principal Peter Quigley spent time following the work of Renee DiResta, who has published insightful research into the use of disinformation and malign narratives in social media. There’s growing awareness of how digital platforms are being used to sow distrust in society. In a similar vein, Arjun has been studying the work of Peter Singer, whose research into how social media is being weaponised could have insights for organisations wanting to use social media to enhance trust, particularly in the wake of a breach.

Alistair notes how some technology companies have begun to prioritise digital wellbeing. For example, new features in Android and iOS that help users manage their screen time – and thus minimise harm – reflect the potential for a more trusting, collaborative digital ecosystem.

At the end of the day, much of our work as a team goes towards helping organisations mitigate digital risk in order to increase digital trust – among customers, staff and partners. The challenges are many but exciting, and we look forward to working on them with many of you in 2019.

End of year wrap

The year started with a meltdown. Literally.

New Year’s Eve hangovers had barely cleared when security researchers announced they had discovered security flaws that would impact “virtually every user of a personal computer”. “Happy new year” to you too. Dubbed “Meltdown” and “Spectre”, the flaws in popular computer processors would allow hackers to access sensitive information from memory – certainly no small thing. Chipmakers urgently released updates. Users were urged to patch. Fortunately, the sky didn’t fall in.

If all this was meant to jolt us into taking notice of data security and privacy in 2018 … well, that seemed unnecessary. With formidable new data protection regulations coming into force, many organisations were already stepping into this year with a much sharper focus on digital risk.

The first of these new regulatory regimes took effect in February, when Australia finally introduced mandatory data breach reporting. Under the Notifiable Data Breaches (NDB) scheme, overseen by the Office of the Australian Information Commissioner, applicable organisations must now disclose any breaches of personal information likely to result in serious harm.

In May, the world also welcomed the EU’s General Data Protection Regulation (GDPR). Kind of hard to miss, with an onslaught of updated privacy policies flooding user inboxes from companies keen to show compliance.

The promise of GDPR is to increase consumers’ consent and control over their data and place a greater emphasis on transparency.  Its extra-territorial nature (GDPR applies to any organisation servicing customers based in Europe) meant companies all around the world worked fast to comply, updating privacy policies, implementing privacy by design and creating data breach response plans. A nice reward for these proactive companies was evidence that GDPR is emerging as a template for new privacy regulations around the world. GDPR-compliance gets you ahead of the game.

With these regimes in place, anticipation built around who would be first to test them out. For the local NDB scheme, the honour fell to PageUp. In May, the Australian HR service company detected an unknown attacker had gained access to job applicants’ personal details and usernames and passwords of PageUp employees.

It wasn’t the first breach reported under NDB but was arguably the first big one – not least because of who else it dragged into the fray. It was a veritable who’s who of big Aussie brands – Commonwealth Bank, Australia Post, Coles, Telstra and Jetstar, to name a few. For these PageUp clients, their own data had been caught up in a breach of a service provider, shining a bright light on what could be the security lesson of 2018: manage your supplier risks.

By July we were all bouncing off the walls. Commencement of the My Health Record (MHR) three month opt-out period heralded an almighty nationwide brouhaha. The scheme’s privacy provisions came under heavy fire, most particularly the fact the scheme was opt-out by default, loose provisions around law enforcement access to health records, and a lack of faith in how well-versed those accessing the records were in good privacy and security practices. Things unravelled so much that the Prime Minister had to step in, momentarily taking a break from more important national duties such as fighting those coming for his job.

Amendments to the MHR legislation were eventually passed (addressing some, but not all of these issues), but not before public trust in the project was severely tarnished. MHR stands as a stark lesson for any organisation delivering major projects and transformations – proactively managing the privacy and security risks is critical to success.

If not enough attention was given to data concerns in the design of MHR, security considerations thoroughly dominated the conversation about another national-level digital project – the build out of Australia’s 5G networks. After months of speculation, the Australian government in August banned Chinese telecommunications company Huawei from taking part in the 5G rollout, citing national security concerns. Despite multiple assurances from the company about its independence from the Chinese government and offers of greater oversight, Australia still said ‘no way’ to Huawei.

China responded frostily. Some now fear we’re in the early stages of a tech cold war in which retaliatory bans and invasive security provisions will be levelled at western businesses by China (where local cyber security laws should already be a concern for businesses with operations in China).

Putting aside the geopolitical ramifications, the sobering reminder for any business from the Huawei ban is the heightened concern about supply chain risks. With supply chain attacks on the rise, managing vendor and third-party security risks requires the same energy as attending to risks in your own infrastructure.

Ask Facebook. A lax attitude towards its third-party partners brought the social media giant intense pain in 2018. The Cambridge Analytica scandal proved to be one of the most egregious misuses of data and abuses of user trust in recent memory, with the data of almost 90 million Facebook users harvested by a data mining company to influence elections. The global public reacted furiously. Many users would delete their Facebook accounts in anger. Schadenfreude enthusiasts had much to feast on when Facebook founder and CEO Mark Zuckerberg testified uncomfortably in front of the US Senate.

The social network would find itself under the pump on various privacy and security issues throughout 2018, including the millions of fake accounts on its platform, the high profile departure of security chief Alex Stamos and news of further data breaches.

But when it came to brands battling breaches, Facebook hardly went it alone in 2018. In the first full reporting quarter after the commencement of the NDB scheme, the OAIC received 242 data breach notifications, followed by 245 notifications for the subsequent quarter.

The scale of global data breaches has been eye-watering. Breaches involving Marriott International, Exactis, Aadhaar and Quora all eclipsed 100 million affected customers.

With breaches on the rise, it becomes ever more important that businesses be well prepared to respond. The maxim that organisations will increasingly be judged not on the fact they had a breach, but on how they respond, grew strong legs this year.

But we needn’t succumb to defeatism. Passionate security and privacy communities continue to try to reduce the likelihood or impact of breaches and other cyber incidents. Technologies and solutions useful in mitigating common threats gained traction. For instance, multi-factor authentication had more moments in the sun this year, not least because we became more attuned to the flimsiness of relying on passwords alone (thanks Ye!). Security solutions supporting other key digital trends also continue to gain favour – tools like Cloud Access Security Brokers enjoyed strong momentum this year as businesses looked to manage the risks of moving to the cloud.
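To make the multi-factor authentication point concrete: one common flavour is the time-based one-time password (TOTP, RFC 6238), where a code is derived from a shared secret and the current time, so a stolen password alone isn’t enough. A minimal sketch in Python using only the standard library (illustrative, not production code):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at_time=None, digits=6, step=30):
    """Compute a time-based one-time password per RFC 6238 (HMAC-SHA1).

    secret_b32: the shared secret, base32-encoded (as in authenticator apps).
    """
    key = base64.b32decode(secret_b32.upper())
    # The moving factor is the number of `step`-second intervals since epoch.
    counter = int((time.time() if at_time is None else at_time) // step)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    # Dynamic truncation: the low nibble of the last byte picks an offset.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

The server and the user’s device each compute the same code independently; because it rolls over every 30 seconds, an intercepted code is of little lasting value.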

Even finger-pointing was deployed in the fight against hackers. This year, the Australian government and its allies began to publicly attribute a number of major cyber campaigns to state-sponsored actors. A gentle step towards deterrence, the attributions signalled a more overt and more public pro-security posture from the Government. Regrettably, some of this good work may have been undone late in the year with the passage of an “encryption bill”, seen by many as weakening the security of the overall digital ecosystem and damaging to local technology companies.

In many ways, in 2018 we were given the chance to step into a more mature conversation about digital risk and the challenges of data protection, privacy and cyber security. Sensationalist FUD in earlier years about cyber-attacks or crippling GDPR compliance largely gave way to a more pragmatic acceptance of the likelihood of breaches, high public expectations and the need to be well prepared to respond and protect customers.

At a strategic level, a more mature and business-aligned approach is also evident. Both the Australian and US governments introduced initiatives that emphasise the value of a risk-based approach to cyber security, an approach that is also taking hold in the private sector. The discipline of cyber risk management is helping security executives better understand their security posture and have more engaging conversations with their boards.

All this progress, and we still have the grand promise that AI and blockchain will one day solve all our problems.  Maybe in 2019 ….

Till then, we wish you a happy festive season and a great new year.

From the team at elevenM.

Lessons on managing a data breach crisis (from an amateur conference organiser)

Tim de Sousa

It’s been a big year for elevenM – we’ve grown rapidly, taking on new people, developing new products and tackling new challenges.

One of my biggest challenges was actually an extracurricular one – the Annual Summit of the ANZ chapter of the International Association of Privacy Professionals (iappANZ). As specialist privacy and cyber security professionals, we have a close relationship with iappANZ, not to mention that one of our founders, Melanie Marks, is the current iappANZ President, and I’m on the Board. Which is how I ended up as the co-chair of this year’s Summit.

Law schools don’t really offer courses in event management, so I approached this completely, utterly blind. Ultimately, as a consequence of a great deal of hard work by many people, the Summit was a resounding success. But for me, the actual day was rather stressful and frantic as I tore around the place trying to do everything at once.

Basking in the relief of a completed job, it occurred to me that there were a lot of parallels between running a conference as a rank amateur and managing a data breach – high stakes, many moving parts, a lot of stakeholders, and limited time. I’ve dealt with literally hundreds of data breaches – they hold no fear for me. But this was entirely new territory. So, gin and tonic in hand, I jotted down a few of the more important takeaways.

  1. Bring in the pros, and do it early

I didn’t know anything about managing conferences. But we brought in some expert help – the good people at Essential Solutions. They’ve produced dozens of conferences. They understood all the likely friction points, had connections with suppliers and pre-existing relationships that they could leverage. This was a level of experience and expertise I didn’t have and couldn’t acquire quickly.

Having pros on the team meant they could help identify issues and problems while they were still molehills, and we were able to deal with them before they became mountains. This left me more able to focus on strategy and key decisions.

  2. Don’t be afraid to ask for help

On the day, there were a lot of small details and moving parts that had to be dealt with. Because I was frazzled and anxious, I insisted on managing all of this largely by myself so I could be sure it got done – everything from making sure speakers got miked up, to timekeeping, to moving chairs on stage. This was, in fact, way too much for one person to do. Like data breach management, event management is a team sport.

I actually had numerous people throughout the day – iappANZ Board members – ask me if there was anything they could help with. And I smiled and thanked them and said we had it all under control. I think I did this largely on autopilot – my mind was so occupied with my lengthy to-do list, I didn’t have the mental capacity to delegate. Which brings me to my next point…

  3. Plan ahead and allocate responsibilities

If you can’t think clearly enough in the thick of it to delegate, you need to do it before the crisis arrives. If I had known what would have to be done, asked for volunteers and allocated tasks in the lead up to the Summit, I would have been much better able to spread the workload.

A good data breach response plan can help you do all of this – it can include the contact details for pre-vetted expert support, set out the key steps of your organisation’s data breach response so you don’t have to scramble to work out what to do next in the heat of the moment, and clearly set out roles and responsibilities to avoid uncertainty over who should do what.
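A response plan like this can even live as simple structured data so that nothing depends on memory in the heat of the moment. A rough sketch, where the contacts, steps and roles are entirely hypothetical examples of ours rather than any mandated template:

```python
# Illustrative breach response plan as data; names and numbers are made up.
breach_response_plan = {
    "contacts": {
        "incident_lead": {"name": "Jane Citizen", "phone": "+61 400 000 000"},
        "external_forensics": {"name": "Example Forensics Pty Ltd", "phone": "+61 2 0000 0000"},
        "legal_counsel": {"name": "Example Law", "phone": "+61 2 0000 0001"},
    },
    "steps": [
        "Contain the breach and preserve evidence",
        "Assess scope: what data, whose data, how sensitive",
        "Evaluate likelihood of serious harm to affected individuals",
        "Notify the regulator and affected individuals if required",
        "Review and update controls post-incident",
    ],
    "roles": {
        "incident_lead": "Coordinates response, owns key decisions",
        "communications": "Prepares customer and media statements",
        "it_security": "Containment and forensics liaison",
    },
}

def next_step(plan, completed):
    """Return the first step not yet completed, so the team always knows what's next."""
    for step in plan["steps"]:
        if step not in completed:
            return step
    return None
```

The point is less the code than the discipline: contacts, steps and roles decided in advance, so the response team executes rather than improvises.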

We weren’t able to do a dry run of the conference, but you can run simulated data breaches and other training to ensure that your breach response team understands the plan, and their part in it.

And when you’ve successfully managed the breach and the dust has settled, don’t forget to pour yourself a gin and tonic.

If you need help developing a data breach response process, or advice on managing a breach, you can call us at 1300 003 922 or email us at hello@elevenM.com.au.  

APRA gets $60m in new funding: CPS 234 just got very real

We have previously talked about APRA’s new information security regulation and how global fines will influence the enforcement of this new regulation.

Today we saw a clear statement of intent from the government in the form of $58.7 million of new funding for APRA to focus on the identification of new and emerging risks such as cyber and fintech.

As previously stated, if you are in line of sight for CPS 234 either as a regulated entity or a supplier to one, we advise you to have a clear plan in place on how you will meet your obligations. No one wants to be the Tesco of Australia.

If you would like to talk to someone from elevenM about getting ready for CPS 234, please drop us a note at hello@elevenM.com.au or call us on 1300 003 922.


If you enjoyed this and would like to be notified of future elevenM blog posts, please subscribe below.

Up close and personal with the Singaporean Cybersecurity Act

As part of a recent engagement, we carried out an in-depth review of the new Singaporean Cybersecurity Act.

What do we think?

The Act is a bold approach to ensuring the security of a nation’s most critical infrastructure, which we think will be copied by other countries and may even be a model for large enterprises.

Why bold?

A fundamental challenge is that the level of cybersecurity protecting any piece of infrastructure at any given time is usually heavily dependent on a Chief Information Security Officer’s (CISO) ability to present cyber risk to those controlling the purse strings. The result is varied levels of control and capability across some very important infrastructure.

So what is the answer? Like most things, it depends on who you ask. Singapore has taken the bold approach of regulating the cybersecurity of the technology infrastructure that the country needs to run smoothly.

Our key takeaways

  • The Act introduces a Cyber Commissioner who will “respond to cybersecurity incidents that threaten the national security, defence, economy, foreign relations, public health, public order or public safety, or any essential services, of Singapore, whether such cybersecurity incidents occur in or outside Singapore” – Interesting to see how this works in practice. Many global companies in this framework will be hesitant to provide that level of access to a foreign state.
  • The Act creates Critical Information Infrastructure (CII) in Singapore meaning “the computer or computer system which is necessary for the continuous delivery of an essential service, and the loss or compromise of the computer or computer system will have a debilitating effect on the availability of the essential service in Singapore” – These CIIs span most industries across both the public and private sector. It will be very interesting to see what they determine to be CIIs and how private companies deal with this. Even from an investment perspective, who pays to increase the security posture or the rewrite of the supporting business processes?
  • Each designated CII will have an owner who will be assigned statutory duties specific to the cybersecurity of the CII. – These owners will be held to account by the Commissioner. Failure to fulfil their role will result in personal fines of up to $100,000 or imprisonment for a term not exceeding 2 years. Given most companies already struggle to define the ‘owner’ of a system, will this push the ownership of these business/operational systems to CISOs?
  • The Act introduces a licensing framework for suppliers where “No person is to provide licensable cybersecurity service without licence”. – A very interesting one. Suppliers of cybersecurity services to the CIIs will need a licence issued by the Commissioner. A sign of things to come in the supplier risk space, perhaps?

The Act can be found here:  Singapore Cybersecurity Act 2018


If you enjoyed this and would like to be notified of future elevenM blog posts, please subscribe below.

Introducing our free data breach notification tool

When we previously looked at the trends emerging from the mandatory Notifiable Data Breaches scheme, we observed that organisations seem to be playing it safe and reporting when in doubt, possibly leading to overreporting.

We’re big supporters of mandatory notification, and we agree that when there’s doubt, it’s safer to report. But we also think it’s important that we all get better at understanding and managing data breaches, so that individuals and organisations don’t become overwhelmed by notifications.

That’s why we’ve prepared a free, fast and simple tool to help you consider all of the relevant matters when deciding whether a data breach needs to be notified.

Download here

Keep in mind that this is just a summary of relevant considerations – it’s not legal advice, and it only addresses Australian requirements. If your organisation handles personal information or personal data outside of Australia, you might need to consider the notification obligations in other jurisdictions.
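For a flavour of the considerations only, the broad shape of the Australian “eligible data breach” test can be sketched in a few lines of code. This is our own simplified illustration of the statutory logic, not the tool itself and certainly not legal advice:

```python
def is_notifiable(unauthorised_access_or_disclosure,
                  serious_harm_likely,
                  remediated_before_harm):
    """Rough sketch of the Australian NDB 'eligible data breach' test.

    A breach is broadly notifiable if there was unauthorised access to or
    disclosure of personal information, serious harm to affected individuals
    is likely, and remedial action has not removed that likelihood.
    Simplified illustration only; not legal advice.
    """
    if not unauthorised_access_or_disclosure:
        return False  # No relevant breach occurred
    if remediated_before_harm:
        return False  # Remedial action removed the likelihood of serious harm
    return serious_harm_likely
```

The hard part in practice is the middle input – assessing the likelihood of serious harm requires weighing the kind of information involved, who may have obtained it and what protections were in place, which is exactly what the tool walks you through.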

Also remember that notification is just one aspect of a comprehensive data breach response plan. If your organisation handles personal information, you should consider adopting a holistic plan for identifying, mitigating and managing data breaches and other incidents.

Please let us know if you find this tool useful or if you have any feedback or suggestions.


If you enjoyed this and would like to be notified of future elevenM blog posts, please subscribe below.

The journey toward trust – Part 3: Trust through reputational management

This is the third and final article in a three-part series that explores the notion of trust in today’s digital economy, and how organisations can practically build trust. In part 1 we took a deeper look at the meaning and underlying principles of trust. Part 2 explored best practice approaches to using regulatory compliance to build trust.

In this piece, we look at the role of reputation management in building trust on privacy and security issues. 

Reputation management

The way an organisation manages its reputation is unsurprisingly tightly bound up with trust.

While there are many aspects to reputation management, an effective public response is among the most critical requirements – if not the most critical of all.

In the era of fast-paced digital media, a poorly managed communications response to a cyber or privacy incident can rapidly damage trust. With a vocal and influential community of highly informed security and privacy experts active on social media, corporate responses that don’t meet the mark get pulled apart very quickly.

Accordingly, a bad response can produce severe outcomes, including serious financial impacts, executive scalps, and broader repercussions like government and regulatory inquiries and class actions.

A Google search will quickly uncover examples of organisations that mishandled their public response. Just in recent weeks we learned Uber will pay US $148m in fines over a 2016 breach, largely because of failures in how it went about disclosing the breach.

Typically, examples of poor public responses to breaches include one or more of the following characteristics:

  • The organisation was slow to reveal the incident to customers (ie. not prioritising truth, safety and reliability)
  • The organisation was legalistic or defensive (ie. not prioritising the protection of customers)
  • The organisation pointed the finger at others (ie. not prioritising reliability or accountability)
  • The organisation provided incorrect or inadequate technical details (ie. not prioritising a show of competence)

As the analyses in brackets show, the reason public responses often unravel as they do is that they feature statements that violate the key principles of trust that we outlined in part one of this series.

Achieving a high-quality, trust-building response that reflects and positively communicates principles of trust is not necessarily easy, especially in the intensity of managing an incident.

An organisation’s best chance of getting things right is to build communications plans in advance that embed the right messages and behaviours.

Plans and messages will always need to be adapted to suit specific incidents, of course, but this proactive approach allows organisations to develop a foundation of clear, trust-building messages in a calmer context.

It’s equally critical to run exercises and simulations around these plans, to ensure that key staff are aware of their roles, are aligned to the objectives of a good public crisis response, and that hiccups are addressed before a real crisis occurs.


If you enjoyed this and would like to be notified of future elevenM blog posts, please subscribe below.

The journey toward trust – Part 2: Trust through regulatory compliance

This is the second article in a three-part series that explores the notion of trust in today’s digital economy, and how organisations can practically build trust. In part 1 we took a deeper look at what trust means, and uncovered some guiding principles organisations can work towards when seeking to build trust.

In this piece, we look at best practice approaches to using regulatory compliance to build trust.

Privacy laws and regulatory guidance provide a pretty good framework for doing the right thing when it comes to trusted privacy practices (otherwise known as, the proper collection, use and disclosure of personal information).

We are the first to advocate for a compliance-based framework.  Every entity bound by the Privacy Act 1988 and equivalent laws should be taking proactive steps to establish and maintain internal practices, procedures and systems that ensure compliance with the Australian Privacy Principles.  They should be able to demonstrate appropriate accountabilities, governance and resourcing.

But compliance alone won’t build trust.

For one, the majority of Australian businesses are not bound by the Privacy Act because they fall under its $3m annual turnover threshold. This is one of several reasons why Australian regulation is considered inadequate by EU data protection standards.

Secondly, there is variability in the ways that entities operationalise privacy. The regulator has published guidance and tooling for the public sector to help create some common benchmarks and uplift maturity, recognising that some entities are applying the bare minimum. No such guidance exists for the private sector – yet.

Consumer expectations are also higher than the law. It may once have been acceptable for businesses to use and share data to suit their own purposes while burying their notices in screeds of legalese. However, the furore over Facebook / Cambridge Analytica shows that sentiment has changed (and also raises a whole bucket of governance issues). Similarly, consumers around the world increasingly expect to be protected wherever they are by the high standards set by the GDPR and other stringent frameworks, which include rights such as the right to be forgotten and the right to data portability.

Lastly, current compliance frameworks do not help organisations to determine what is ethical when it comes to using and repurposing personal information. In short, an organisation can comply with the Privacy Act and still fall into an ethical hole with its data uses.

Your organisation should be thinking about its approach to building and protecting trust through privacy frameworks. Start with compliance, then seek to bolster weak spots with an ethical framework: a statement of boundaries to which your organisation should adhere.


In the third and final part of this series, we detail how an organisation’s approach to reputation management for privacy and cyber security issues can build or damage trust.


If you enjoyed this and would like to be notified of future elevenM blog posts, please subscribe below.