Nine steps to a successful privacy and cyber security capability uplift

Most organisations today understand the critical importance of cyber security and privacy protection to their business. Many are commencing major uplift programs, or at least considering how they should get started.

These projects inevitably carry high expectations because of what’s at stake. They’re also inherently complex and impact many parts of the organisation. Converting the effort and funding that go into these projects into success and sustained improvement to business-as-usual practices is rarely straightforward.

Drawing on our collective experience working on significant cyber security and privacy uplift programs across the globe and in a variety of industries, here are what we believe to be the key elements of success.

1. Secure a clear executive mandate

Your uplift program is dealing with critical risks to your organisation. The changes you will seek to drive through these programs will require cooperation across many parts of your organisation, and potentially partners and third parties too. A mandate and sponsorship from your executive is critical.

Think strategically about who else you need on-side, beyond your board and executive committee. Build an influence map and identify potential enablers and detractors, and engage early. Empower your program leadership team and business leadership from affected areas to make timely decisions and deliver their mandate.

2. Adopt a customer and human-centric approach

Uplift programs need to focus on people change as well as changes to processes and technology. Success in this space very often comes down to changing behaviours and ensuring the organisation has sufficient capacity to manage the new technology and process outputs (eg how to deal with incidents).

We therefore suggest that you adopt a customer and human-centric approach. Give serious time, attention and resourcing to areas including communications planning, organisational change management, stakeholder engagement, training and awareness.

3. Know the business value of what you are going to deliver and articulate it

An opaque or misaligned understanding of what a security or privacy program is meant to deliver is often the source of its undoing. It is crucial to ensure scope is clear and aligned to the executive mandate.

Define the value and benefits of your uplift program early, communicate them appropriately and find a way to demonstrate this value over time. Be sure to speak in terms the business understands, not just the new technologies or capabilities you will roll out. For instance, what risks have you mitigated?

You can’t afford to be shy. Ramp up the PR to build recognition about your program and its value among staff, executive and board members. Think about branding.

4. Prioritise the foundational elements

If you’re in an organisation where security and privacy risks have been neglected but you now have a mandate for broad change, it’s easy to fall into the trap of trying to do too much at once.

Think of this as being your opportunity to get the groundwork in place for your future vision. Regardless of whether the foundational elements are technology- or process-related, most people with tenure in your organisation will know which of them need work. From our experience, those same people will also understand the importance of getting them right and in most cases will be willing to help you fix them.

As a friendly warning, don’t be lured down the path of purchasing expensive solutions without having the right groundwork in place. Most, if not all of these solutions rely on such foundations.

5. Deliver your uplift as a program

For the best results, deliver your uplift as a dedicated change program rather than through business-as-usual (BAU) activity.

Your program will of course need to work closely with BAU teams to ensure the sustained success of the program. Have clear and agreed criteria with those teams on the transition to BAU. Monitor BAU teams’ preparation and readiness as part of your program.

6. Introduce an efficient governance and decision-making process

Robust and disciplined governance is critical. Involve key stakeholders, implement clear KPIs and methods of measurement, and create an efficient and responsive decision-making process to drive your program.

Governance can be light touch provided the right people are involved and the executive supports them. Ensure you limit the involvement of “passengers” on steering groups who aren’t able to contribute, and make sure representatives from BAU are included.

7. Have a ruthless focus on your strategic priorities

These programs operate in the context of a fast-moving threat and regulatory landscape. Things change rapidly and there will be unforeseen challenges.

It’s important to be brave and assured in holding to your strategic priorities. Avoid the temptation to succumb to tactical “quick fixes” that solve short-term problems but bring long-term pain.

8. Build a high-performance culture and mindset for those delivering the program

These programs are hard but can be immensely satisfying and career-defining for those involved. Investing in the positivity, pride and engagement of your delivery team will pay immense dividends.

Seek to foster a high-performance culture, enthusiasm, tolerance and collaboration. Create an environment that is accepting of creativity and experimentation.

9. Be cognisant of the skills shortage and plan accordingly

While your project may be well funded, don’t be complacent about the difficulty of accessing skilled people to achieve its goals. Globally, the security and privacy industries continue to suffer severe shortages of skilled professionals. Build these constraints into your forecasts and expectations, and think laterally about the use of partners.



In Privacy Awareness Week, will Australia follow the GDPR?

Last week, the headlines told us that the Senate backs GDPR-style laws in Australia.

But what does this really mean in terms of the government’s commitment to reviewing privacy in Australia?

This does not (necessarily) mean the law will be reviewed

In short, it means very little. The Senate’s support of Senator Jordon Steele-John’s notice of motion – which called on the Government to consider the impact of our current privacy laws on Australians and to look to the GDPR as a potential model for privacy protection – carries no commitment, because the Senate cannot commit the Government to action.

What it does signify is something very big: a shift in the Senate’s willingness to stand behind the Greens’ position that Australian privacy laws must be scrutinised. Just two months ago, Senator Steele-John put forward a very similar notice of motion and it was shut down, as were a couple of other privacy-related motions.

Why did this one pass? (What has changed?)

There are a few likely reasons. Putting aside matters of semantics and the politics of calling on government to subject itself to tighter scrutiny (as was the case with motions no. 749 and no. 786), there is one material reason why this motion passed.

In the last two months, consumers have started to wake up to something we privacy professionals have worried about for a while – that legal compliance is not enough and can, in fact, be damaging if ethical behaviours and transparent practices are perceived to be lacking.

There has been an enormous groundswell in Australia over the last two months, with both Facebook/Cambridge Analytica and the Commonwealth Bank blitzing the press over actions they have taken – or not taken – which, although arguably lawful, have not met public perceptions of fairness and ethics. Put simply, community expectations have surpassed legal standards.

So, Senator Steele-John had his day, and time will tell whether this will serve as a prompt for the Government to call for a review of Australian privacy law in view of the GDPR.

There are plenty of other reasons why GDPR compliance makes sense, but we’ll leave that to a future blog.

Happy Privacy Awareness Week!



Facebook and Cambridge Analytica: Would the GDPR have helped?

It’s a modern-day truism that when you use a “free” online service, you’re still paying – not with your money, but with your personal information. This is simply the reality for many of the services we’ve come to rely on in our daily lives, and for most it’s an acceptable (if sometimes creepy) bargain.

But what if you’re paying for online services not just with your own personal information, but with that of your friends and family? And what if the information you’re handing over is being shared with others who might use it for purposes you didn’t consider when you signed up – maybe for research purposes, maybe to advertise to you, or maybe even to influence the way you vote?

Last week it emerged that an organisation called Cambridge Analytica may have used personal information scraped from Facebook to carry out targeted political advertising. The information was obtained when Facebook users accessed a psychometric profiling app called thisisyourdigitallife – but the data that was collected wasn’t just about app users, it was also about their Facebook friends (more on that below).

It’s what we’re now seeing from consumers that’s interesting.  People are rightfully asking for an explanation. Whilst we seem to have been asleep at the wheel over the last few years, as data empires around the world have pushed the boundaries, the current Facebook debacle is leading us to ask questions about the value of these so-called “free” services, and where the lines should be drawn.  The next few weeks will be telling, in terms of whether this really is the “tipping point” as many media commentators are calling it, or just another blip, soon forgotten.

In any case, with only a few months until the EU General Data Protection Regulation (GDPR) comes into force, this blog post asks: if the GDPR were in force now, would consumers be better protected?

First, some background

There’s plenty of news coverage out there covering the details, so we’ll just provide a quick summary of what happened.

A UK-based firm called Global Science Research (GSR) published thisisyourdigitallife and used the app to gather data about its users. Because GSR claimed this data was to be used for academic purposes, Facebook policies at the time allowed it to also collect limited information about friends of app users. All up, this meant that GSR collected the personal information of more than 50 million people – many more than the 270,000 people who used the app.

GSR then used the personal information to create psychometric profiles of the included individuals, apparently without their informed consent. These profiles were then allegedly passed on to Cambridge Analytica (possibly in breach of Facebook’s rules), which used the data to target, market to – and perhaps manipulate – individuals.

Was this a breach?

There’s been some debate over whether this incident can be fairly labelled a “breach”. Based on what we know, it certainly doesn’t appear that any personal information has been lost or disclosed by means of an accident or a security vulnerability, which is something many consider a necessary element of a “data breach”.

Facebook’s initial response was to hit back at claims it was a “data breach”, saying users willingly handed over their information, and the information of their friends. “Everyone involved gave their consent. People knowingly provided their information, no systems were infiltrated, and no passwords or sensitive pieces of information were stolen or hacked,” it allegedly said.

Facebook has since hired a digital forensics firm to audit Cambridge Analytica and has stated that if the data still exists, it would be a “grave violation of Facebook’s policies and an unacceptable violation of trust and the commitments these groups made.”

In more recent days, Mark Zuckerberg has made something of a concession, apologising for the “major breach of trust”. We love this line from the man who told us that privacy is dead.

GDPR – would it have helped?

We at elevenM are supporters of the GDPR, arguably the most extensive and far-reaching privacy reform of the last 25 years. The GDPR raises the benchmark for businesses and government and brings us closer to one global framework for privacy. But would the GDPR have prevented this situation from occurring? Would the individuals whose data has been caught up by Cambridge Analytica be in a better position if the GDPR applied?

Let’s imagine that the GDPR is in force, that it applies to the acts of all the parties in this case, and that Facebook still allowed apps to access information about friends of users (which it no longer does). Here is the lowdown:

  1. Facebook would have to inform its users in “clear and plain” language that their personal information (aka personal data under GDPR) could (among other things) be shared with third party apps used by their friends.
  2. Because the personal data may have been used to reveal political opinions, users would likely also need to provide consent. The notification and consent would have to be written in “clear and plain” language, and consent would have to be “freely given” via a “clear affirmative act” – implied consent or pre-ticked boxes would not be acceptable.
  3. The same requirements relating to notification and consent would apply to GSR and Cambridge Analytica when they collected and processed the data.
  4. Individuals would also have the right to withdraw their consent at any time, and to request that their personal data be erased (under the new “right to be forgotten”). If GSR or Cambridge Analytica were unable to find another lawful justification for collecting and processing the data (and it’s difficult to imagine what that justification could be), they would be required to comply with those requests.
  5. If Facebook, GSR or Cambridge Analytica were found to be in breach of the above requirements (although again, this is purely hypothetical because the GDPR is not in force at the time of writing), they could each face fines of up to 20 million EUR, or 4% of worldwide annual turnover (revenue), whichever is higher. Those figures represent the maximum penalty and would only be applied in the most extreme cases – but they make clear that the GDPR is no toothless tiger (see the worked example after this list).
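
To make the “whichever is higher” rule concrete, here is a minimal worked sketch in Python. It is illustrative only: the turnover figures are hypothetical examples rather than real company data, and the function simply encodes the two limbs described in point 5 above.

```python
# Illustrative sketch of the GDPR maximum-fine rule described above:
# the higher of 20 million EUR or 4% of worldwide annual turnover.
# The turnover figures used below are hypothetical, not real data.

FLOOR_EUR = 20_000_000   # fixed limb: 20 million EUR
TURNOVER_RATE = 0.04     # turnover limb: 4% of worldwide annual turnover

def max_gdpr_fine_eur(annual_turnover_eur: float) -> float:
    """Theoretical maximum penalty: whichever limb is higher."""
    return max(FLOOR_EUR, TURNOVER_RATE * annual_turnover_eur)

for turnover in (100_000_000, 500_000_000, 40_000_000_000):
    print(f"turnover {turnover:>14,} EUR -> max fine {max_gdpr_fine_eur(turnover):>13,.0f} EUR")
```

The break-even point is 500 million EUR of turnover, where the two limbs are equal; below that the fixed floor applies, and above it the 4% limb takes over – which is why the rule bites hardest on the largest global platforms.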

So, there it is.  We think that GDPR would have made it far more likely that EU residents were made aware of what was happening with their personal data and would have given them effective control over it.

Some lessons

With so many recent data incidents resulting from outsourcing and supply chain arrangements, regulators around the world are focussing increasingly on supplier risk. Just last week here in Australia, we saw the financial services regulator APRA’s new cyber security regulation littered with references to supplier risk. The Cambridge Analytica situation is another reminder that we are only as strong as our weakest link. The reputations of our businesses and the government departments for whom we work will often hinge on the control environments of third parties. Organisations therefore need to clearly assess third party risks and take commensurate steps to assure themselves that the risks and controls are reasonable and appropriate.

As for individuals – regardless of what regulatory action is taken in Australia and abroad, there are simple steps that we all can and should be taking.  This episode should prompt people to think again about the types of personal information they share online, and who they share it with. Reviewing your Facebook apps is a good start – you might be surprised by some of the apps you’ve granted access to, and how many of them you’d totally forgotten about (Candy Crush was so 2015).

What’s next

We expect this issue to receive more attention in the coming weeks and months.

Regulators around the world (including the Australian Privacy Commissioner, the UK Information Commissioner (ICO), the Canadian Privacy Commissioner and the EU Parliament) are looking into these issues now. Just over the weekend we saw images of ICO personnel allegedly raiding the premises of Cambridge Analytica, Law & Order style.

The Australian Competition and Consumer Commission (ACCC) also has been preparing to conduct a “Digital Platforms Inquiry” which, among other things, may consider “the extent to which consumers are aware of the amount of data they provide to digital platforms, the value of the data provided, and how that data is used…”

Meanwhile, we await the consumer backlash. Consumers will likely expect increasingly higher standards from the organisations they share their data with, and will seek out those that are transparent and trustworthy and can demonstrate good governance over privacy and data protection practices. Will your organisation be one of them?

