Introducing our free data breach notification tool

When we previously looked at the trends emerging from the mandatory notifiable data breaches scheme, we observed that organisations seem to be playing it safe and reporting when in doubt, possibly leading to overreporting.

We’re big supporters of mandatory notification, and we agree that when there’s doubt, it’s safer to report. But we also think it’s important that we all get better at understanding and managing data breaches, so that individuals and organisations don’t become overwhelmed by notifications.

That’s why we’ve prepared a free, fast and simple tool to help you consider all of the relevant matters when deciding whether a data breach needs to be notified.

Download here

Keep in mind that this is just a summary of relevant considerations – it’s not legal advice, and it only addresses Australian requirements. If your organisation handles personal information or personal data outside of Australia, you might need to consider the notification obligations in other jurisdictions.

Also remember that notification is just one aspect of a comprehensive data breach response plan. If your organisation handles personal information, you should consider adopting a holistic plan for identifying, mitigating and managing data breaches and other incidents.

Please let us know if you find this tool useful or if you have any feedback or suggestions.


If you enjoyed this and would like to be notified of future elevenM blog posts, please subscribe below.

The journey toward trust – Part 3: Trust through reputational management

This is the third and final article in a three-part series that explores the notion of trust in today’s digital economy, and how organisations can practically build trust. In part one we took a deeper look at the meaning and underlying principles of trust. In part two we explored best practice approaches to using regulatory compliance to build trust.

In this piece, we look at the role of reputation management in building trust on privacy and security issues. 

Reputation management

The way an organisation manages its reputation is, unsurprisingly, tightly bound up with trust.

While there are many aspects to reputation management, an effective public response is one of the most critical requirements, if not the most critical.

In the era of fast-paced digital media, a poorly managed communications response to a cyber or privacy incident can rapidly damage trust. With a vocal and influential community of highly informed security and privacy experts active on social media, corporate responses that don’t meet the mark get pulled apart very quickly.

Accordingly, a bad response compounds the damage, producing serious financial impacts, executive scalps, and broader repercussions like government and regulatory inquiries and class actions.

A Google search will quickly uncover examples of organisations that mishandled their public response. Just in recent weeks we learned Uber will pay US$148m in fines over a 2016 breach, largely because of failures in how it went about disclosing the breach.

Typically, examples of poor public responses to breaches include one or more of the following characteristics:

  • The organisation was slow to reveal the incident to customers (i.e. not prioritising truth, safety and reliability)
  • The organisation was legalistic or defensive (i.e. not prioritising the protection of customers)
  • The organisation pointed the finger at others (i.e. not prioritising reliability or accountability)
  • The organisation provided incorrect or inadequate technical details (i.e. not prioritising a show of competence)

As the analyses in brackets show, the reason public responses often unravel as they do is that they feature statements that violate the key principles of trust we outlined in part one of this series.

Achieving a high-quality, trust-building response that reflects and positively communicates principles of trust is not necessarily easy, especially in the intensity of managing an incident.

An organisation’s best chance of getting things right is to build communications plans in advance that embed the right messages and behaviours.

Plans and messages will always need to be adapted to suit specific incidents, of course, but this proactive approach allows organisations to develop a foundation of clear, trust-building messages in a calmer context.

It’s equally critical to run exercises and simulations around these plans, to ensure key staff are aware of their roles, are aligned to the objectives of a good public crisis response, and have ironed out any hiccups before a real crisis occurs.



The journey toward trust – Part 2: Trust through regulatory compliance

This is the second article in a three-part series that explores the notion of trust in today’s digital economy, and how organisations can practically build trust. In part 1 we took a deeper look at what trust means, and uncovered some guiding principles organisations can work towards when seeking to build trust.

In this piece, we look at best practice approaches to using regulatory compliance to build trust.

Privacy laws and regulatory guidance provide a pretty good framework for doing the right thing when it comes to trusted privacy practices (otherwise known as the proper collection, use and disclosure of personal information).

We are the first to advocate for a compliance-based framework.  Every entity bound by the Privacy Act 1988 and equivalent laws should be taking proactive steps to establish and maintain internal practices, procedures and systems that ensure compliance with the Australian Privacy Principles.  They should be able to demonstrate appropriate accountabilities, governance and resourcing.

But compliance alone won’t build trust.

For one, the majority of Australian businesses are not bound by the Privacy Act because their annual turnover falls below its $3 million threshold. This is one of several reasons why Australian regulation is considered inadequate by EU data protection standards.

Secondly, there is variability in the ways that entities operationalise privacy. The regulator has published guidance and tooling for the public sector to help create some common benchmarks and uplift maturity, recognising that some entities are applying the bare minimum. No such guidance exists for the private sector – yet.

Consumer expectations are also higher than the law. It may once have been acceptable for businesses to use and share data to suit their own purposes whilst burying their notices in screeds of legalese. However, the furore over Facebook / Cambridge Analytica shows that sentiment has changed (and also raises a whole bucket of governance issues). Similarly, increasingly global consumers expect to be protected by the high standards set by the GDPR and other stringent frameworks wherever they are, including rights such as the right to be forgotten and the right to data portability.

Lastly, current compliance frameworks do not help organisations to determine what is ethical when it comes to using and repurposing personal information. In short, an organisation can comply with the Privacy Act and still fall into an ethical hole with its data uses.

Your organisation should be thinking about its approach to building and protecting trust through privacy frameworks. Start with compliance, then seek to bolster weak spots with an ethical framework: a statement of boundaries to which your organisation will adhere.


In the third and final part of this series, we detail how an organisation’s approach to reputation management for privacy and cyber security issues can build or damage trust.



The journey toward trust – Part 1: Understanding trust

Join us for a three-part series that explores the notion of trust in today’s digital economy, and how organisations can practically build trust. We also focus on the role of regulatory compliance and reputation management in building trust, and outline best practice approaches.

Be it users stepping away from the world’s biggest social media platform after repeated privacy scandals, a major airline’s share price plummeting after a large data breach, or Australia’s largest bank committing to a stronger focus on privacy and security in rebuilding its image – events in recent weeks provide a strong reminder of the fragility and critical importance of trust to businesses seeking success in the digital economy.

Bodies as illustrious as the World Economic Forum and OECD have written at length about the pivotal role of trust as a driving factor for success today.

But what does trust actually mean in the context of your organisation? And how do you practically go about building it?

At elevenM, we spend considerable time discussing and researching these questions from the perspectives of our skills and experiences across privacy, cyber security, risk, strategy and communications.

A good starting point for any organisation wanting to make trust a competitive differentiator is to gain a deeper understanding of what trust actually means and, specifically, what it means for that organisation.

Trust is a layered concept, and different things are required in different contexts to build trust.

Some basic tenets of trust become obvious when we look to popular dictionaries. Ideas like safety, reliability, truth, competence and consistency stand out as fundamental principles.

Another way to learn what trust means in a practical sense is to look at why brands are trusted. For instance, the most recent Roy Morgan survey listed supermarket ALDI as the most trusted brand in Australia. Roy Morgan explains this is built on ALDI’s reputation for reliability and meeting customer needs.

Importantly, the dictionary definitions also emphasise an ethical aspect – trust is built by doing good and protecting customers from harm.

Digging a little deeper, we look to the work of trust expert and business lecturer Rachel Botsman, who describes trust as “a confident relationship with the unknown”.  This moves us into the digital space in which organisations operate today, and towards a more nuanced understanding.

We can infer that consumers want new digital experiences, and an important part of building trust is for organisations to innovate and help customers step into the novel and unknown, but with safety and confidence.

So, how do we implement these ideas about trust in a practical sense?

With these definitions in mind, organisations should ask themselves some practical and instructive questions that illuminate whether they are building trust.

  • Do customers feel their data is safe with you?
  • Can customers see that you seek to protect them from harm?
  • Are you accurate and transparent in your representations?
  • Do your behaviours, statements, products and services convey a sense of competence and consistency?
  • Do you meet expectations of your customers (and not just clear the bar set by regulators)?
  • Are you innovative and helping customers towards new experiences?

In part two of this series, we will explore how regulatory compliance can be used to build trust.



What does the record FCA cyber fine mean for Australia?

First, a bit of context: the Financial Conduct Authority (FCA) is the conduct and prudential regulator for financial services in the UK. It is, in part, an equivalent to the Australian Prudential Regulation Authority (APRA).

Record cyber-related fine

This week the FCA handed down a record cyber-related fine to the banking arm of the UK’s largest supermarket chain, Tesco, for failing to protect account holders from a “foreseeable” cyber attack two years ago. The fine totalled £23.4 million but, due to an agreed early-stage discount, was reduced by 30% to £16.4 million.

Cyber attack?

It could be argued that this was not a cyber attack, in that it was not a breach of Tesco Bank’s network or software but rather a new twist on good old card fraud. For clarity, though, the FCA defined the attack which led to this fine as: “a mass algorithmic fraud attack which affected Tesco Bank’s personal current account and debit card customers from 5 to 8 November 2016.”

What cyber rules did Tesco break?

Interestingly, the FCA does not have any cyber-specific regulation. The FCA exercised powers through provisions published in its Handbook. The Handbook sets out Principles, which are general statements of firms’ fundamental obligations. Tesco’s fine was therefore issued against the comfortably generic Principle 2: “A firm must conduct its business with due skill, care and diligence”.

What does this mean for Australian financial services?

APRA, you may recall from our previous blog, has issued a draft information security regulation, CPS 234. This new regulation sets out clear rules on how regulated Australian institutions should be managing their cyber risk.

If we use the Tesco Bank incident as an example, here is how APRA could use CPS 234:

Information security capability: “An APRA-regulated entity must actively maintain its information security capability with respect to changes in vulnerabilities and threats, including those resulting from changes to information assets or its business environment”. – Visa provided Tesco Bank with threat intelligence, as Visa had noted this threat occurring in Brazil and the US. Whilst Tesco Bank actioned this intelligence against its credit cards, it failed to do so against debit cards, which netted the threat actors £2.26 million.

Incident management: “An APRA-regulated entity must have robust mechanisms in place to detect and respond to information security incidents in a timely manner. An APRA-regulated entity must maintain plans to respond to information security incidents that the entity considers could plausibly occur (information security response plans)”.  – The following incident management failings were noted by the FCA:

  • Tesco Bank’s Financial Crime Operations team failed to follow written procedures.
  • The Fraud Strategy Team drafted a rule to block the fraudulent transactions, but coded the rule incorrectly.
  • The Fraud Strategy Team failed to monitor the rule’s operation and did not discover until several hours later that the rule was not working.
  • The responsible managers should have invoked crisis management procedures earlier.

Do we think APRA will be handing out fines this size?

Short answer: yes. Following the Royal Commission into Misconduct in the Banking, Superannuation and Financial Services Industry, there is very little love for the financial services industry in Australia. Our sense is that politicians who want to remain politicians will need to be seen to be tough on financial services, and enforcement authorities like APRA will therefore most likely see an increase in their budgets.

Unfortunately for those of you in cyber and risk teams in financial services, it is a bit of a perfect storm. The regulator has a new set of rules to enforce, the money to conduct investigations, and a precedent from within the Commonwealth.

What about the suppliers?

Something that not many are talking about, but really should be, is the supplier landscape. Like it or not, the banks in Australia are some of the biggest businesses in the country. They use a lot of suppliers to deliver critical services, including cyber security. Under the proposed APRA standard:

Implementation of controls: “Where information assets are managed by a related party or third party, an APRA-regulated entity must evaluate the design and operating effectiveness of that party’s information security controls”.

Banks are now clearly accountable for the effectiveness of the information security controls operated by their suppliers as they relate to a bank’s defences. If you are a supplier (major or otherwise) to the banks, given this new level of oversight from their regulator, we advise you to get your house in order because it is likely that your door will be knocked upon soon.



You get an Aadhaar! You get an Aadhaar! Everybody gets an Aadhaar!

On 26 September 2018, the Supreme Court of India handed down a landmark ruling on the constitutionality of the biggest biometric identity system in the world, India’s Aadhaar system.

The Aadhaar Act was passed in 2016, and the system has since acquired more than a billion registered users. The Aadhaar is a 12-digit number issued to each resident of India, linked to biometrics including all ten fingerprints, a facial photo and iris scans, plus basic demographic data, all held in a central database. Since being implemented, it’s been turned to a variety of uses, including proof of identification, tracking of government employee attendance, ration distribution and fraud reduction, entitlements for subsidies, and distribution of welfare benefits. The Aadhaar has quickly become mandatory for access to essential services such as bank accounts, mobile phone SIMs and passports.

Beyond banks and telcos, other private companies have also been eager to use the Aadhaar, spurring concerns about private sector access to the database.

Beginning in 2012, a series of challenges was levelled at the Aadhaar, including that it violated constitutionally protected privacy rights.

In a mammoth 1,448-page judgement, the Court made several key rulings:

  • The Court ruled that the Aadhaar system does not in itself violate the fundamental right to privacy. However, the Court specifically called out a need for a ‘robust data protection framework’ to ensure privacy rights are protected.
  • That said, the Aadhaar cannot be mandatory for some purposes, including access to mobile phone services and bank accounts, as well as access to some government services, particularly education. Aadhaar-based authentication will still be required for tax administration (this resolves some uncertainty from a previous ruling).
  • The private sector cannot demand that an Aadhaar be provided, and private usage of the Aadhaar database is unconstitutional unless expressly authorised by law.
  • The Court also specified that law enforcement access to Aadhaar data will require judicial approval, and any national security-based requests will require consultation with High Court justices (i.e., the highest court in the relevant Indian state).
  • Indian citizens must be able to file complaints regarding data breaches involving the Aadhaar; prior to this judgment, the ability to file complaints regarding violations of the Aadhaar Act was limited to the government authority administering the Aadhaar system, the Unique ID Authority of India.

The Aadhaar will continue to be required for many essential government services, including welfare benefits and ration distribution – s7 of the Aadhaar Act makes Aadhaar-based authentication a pre-condition for accessing “subsidy, benefits or services” by the government. This has been one of the key concerns of Aadhaar opponents – that access to essential government services shouldn’t be dependent on Aadhaar verification. There have been allegations that people have been denied rations due to ineffective implementation of Aadhaar verification, leading to deaths.

It’s also unclear whether information collected under provisions which have now been ruled as unconstitutional – for example, Aadhaar data collected by Indian banks and telcos – will need to be deleted.

As Australia moves towards linking siloed government databases and creating its own digital identity system, India’s experience with the Aadhaar offers many lessons. A digital identity system offers many potential benefits, but all technology is a double-edged sword. Obviously, Australia will need to ensure that any digital identity system is secure but, beyond that, that the Australian public trusts the system. To obtain that trust, Australian governments will need to ensure the system and the uses of the digital identity are transparent and ethical – that the system will be used in the interests of the Australian public, in accordance with clear ethical frameworks. Those frameworks will need to be flexible enough to enable interfaces with the private sector to reap the full benefits of the system, but robust enough to ensure those uses are in the public interest. Law enforcement access to government databases remains a major concern for Australians, and will need to be addressed. It’s a tightrope, and it will need to be walked very carefully indeed.



Don’t call me, I’ll call you

You’ve just pulled dinner out of the oven, the kids have been wrangled to the table, and you’re just about to sit down.

Suddenly, your miracle of domestic logistics is shattered by the klaxon of your phone ringing. Juggling a hot plate of roast chicken and a small, wriggling child, you grab for the handset… only to be greeted by the forced enthusiasm of a long-suffering call centre worker who desperately wants to tell you about simply fantastic savings on energy prices.

The Do Not Call Register (DNCR) has been in place since 2006. It allows Australians to place their phone number on the Register to indicate that they don’t wish to receive marketing calls or faxes, with fines applying for non-compliance.

The ACMA enables organisations that want to conduct telemarketing campaigns to subscribe to the Register and ‘wash’ their call lists against it. This helps organisations make sure they aren’t calling people who don’t want to hear from them.
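For the technically minded, the ‘wash’ itself is conceptually just a set difference. Here’s a minimal Python sketch of the idea – our own illustration, not the ACMA’s actual washing service, which operates through its subscription platform:

```python
# Illustrative only: "washing" a call list means removing any number
# that appears on the Do Not Call Register before a campaign begins.

def wash_call_list(call_list, register_numbers):
    """Return only the numbers not found on the register."""
    registered = set(register_numbers)  # a set gives fast membership checks
    return [number for number in call_list if number not in registered]

# Two of these three (made-up) prospects have opted out via the register.
register = {"0400000001", "0400000002"}
prospects = ["0400000001", "0400000002", "0400000003"]
print(wash_call_list(prospects, register))  # ['0400000003']
```

Lead My Way’s failure was, in effect, skipping this step entirely.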

Of course, that doesn’t help if you don’t bother to check the Register in the first place, like Lead My Way. Lead My Way received a record civil penalty of $285,600 today for making marketing calls to numbers on the Register. Lead My Way had actually subscribed to the Register, but for some reason hadn’t washed its call list against it. This led to numerous complaints to the ACMA, which commenced an investigation.

Lead My Way was calling people to test their interest in its clients’ products or services, then on-selling that information as ‘leads’ – that is, as prospective customers. This kind of business model can also raise significant Privacy Act compliance issues. Do the people being called understand that their personal information is collected and will be sold? How are they notified of the collection (APP 5)? Have they consented to that use? Is that consent informed and valid? Is the sale of their personal information permissible (APP 6)? Are they able to opt out of receiving further marketing calls, and are those opt outs being respected (APP 7)?

Cutting corners on how you manage and use personal information may save you time and money in the short term. But, as Lead My Way discovered, in the long run it can create massive compliance risk, annoy your end users, and incur the wrath of the regulators. Were the (likely minuscule) savings of ignoring the DNCR Register worth a regulator investigation and the comprehensive trashing of Lead My Way’s brand?

Perhaps we should call them and ask.



Nine steps to a successful privacy and cyber security capability uplift

Most organisations today understand the critical importance of cyber security and privacy protection to their business. Many are commencing major uplift programs, or at least considering how they should get started.

These projects inevitably carry high expectations because of what’s at stake. They’re also inherently complex and impact many parts of the organisation. Converting the effort and funding that goes into these projects into success and sustained improvement to business-as-usual (BAU) practices is rarely straightforward.

Drawing on our collective experience working on significant cyber security and privacy uplift programs across the globe, in a variety of industries, here are what we believe to be the key elements of success.

1. Secure a clear executive mandate

Your uplift program is dealing with critical risks to your organisation. The changes you will seek to drive through these programs will require cooperation across many parts of your organisation, and potentially partners and third parties too. A mandate and sponsorship from your executive is critical.

Think strategically about who else you need on-side, beyond your board and executive committee. Build an influence map and identify potential enablers and detractors, and engage early. Empower your program leadership team and business leadership from affected areas to make timely decisions and deliver their mandate.

2. Adopt a customer and human-centric approach

Uplift programs need to focus on people change as well as changes to processes and technology. Success in this space very often comes down to changing behaviours and ensuring the organisation has sufficient capacity to manage the new technology and process outputs (e.g. how to deal with incidents).

We therefore suggest that you adopt a customer and human-centric approach. Give serious time, attention and resourcing to areas including communications planning, organisational change management, stakeholder engagement, training and awareness.

3. Know the business value of what you are going to deliver and articulate it

An opaque or misaligned understanding of what a security or privacy program is meant to deliver is often the source of its undoing. It is crucial to ensure scope is clear and aligned to the executive mandate.

Define the value and benefits of your uplift program early, communicate them appropriately and find a way to demonstrate this value over time. Be sure to speak in terms the business understands, not just the new technologies or capabilities you will roll out. For instance, what risks have you mitigated?

You can’t afford to be shy. Ramp up the PR to build recognition about your program and its value among staff, executive and board members. Think about branding.

4. Prioritise the foundational elements

If you’re in an organisation where security and privacy risks have been neglected, but now have a mandate for broad change, you can fall into the trap of trying to do too much at once.

Think of this as being your opportunity to get the groundwork in place for your future vision. Regardless of whether the foundational elements are technology or process related, most with tenure in your organisation know which of them need work. From our experience, those same people will also understand the importance of getting them right and in most cases would be willing to help you fix them.

As a friendly warning, don’t be lured down the path of purchasing expensive solutions without having the right groundwork in place. Most, if not all, of these solutions rely on such foundations.

5. Deliver your uplift as a program

For the best results, deliver your uplift as a dedicated change program rather than through BAU.

Your program will of course need to work closely with BAU teams to ensure the sustained success of the program. Have clear and agreed criteria with those teams on the transition to BAU. Monitor BAU teams’ preparation and readiness as part of your program.

6. Introduce an efficient governance and decision making process

Robust and disciplined governance is critical. Involve key stakeholders, implement clear KPIs and methods of measurement, and create an efficient and responsive decision-making process to drive your program.

Governance can be light touch provided the right people are involved and the executive supports them. Ensure you limit the involvement of “passengers” on steering groups who aren’t able to contribute, and make sure representatives from BAU are included.

7. Have a ruthless focus on your strategic priorities

These programs operate in the context of a fast-moving threat and regulatory landscape. Things change rapidly and there will be unforeseen challenges.

It’s important to be brave and assured in holding to your strategic priorities. Avoid temptation to succumb to tactical “quick fixes” that solve short-term problems but bring long-term pain.

8. Build a high-performance culture and mindset for those delivering the program

These programs are hard but can be immensely satisfying and career-defining for those involved. Investing in the positivity, pride and engagement of your delivery team will pay immense dividends.

Seek to foster a high-performance culture, enthusiasm, tolerance and collaboration. Create an environment that is accepting of creativity and experimentation.

9. Be cognisant of the skills shortage and plan accordingly

While your project may be well funded, don’t be complacent about the difficulties of accessing skilled people to achieve its goals. Globally, the security and privacy industries continue to suffer severe shortages of skilled professionals. Build these constraints into your forecasts and expectations, and think laterally about the use of partners.



Musings on the OAIC’s second Notifiable Data Breaches report

On 31 July, the Office of the Australian Information Commissioner (OAIC) released its second Notifiable Data Breaches Quarterly Statistics Report.

This report covers the first full quarter since the Notifiable Data Breaches scheme (NDB scheme) began on 22 February 2018, and the OAIC has clearly put some work into building out the report with detailed stats and breakdowns. Let’s take a look.

Going up, up, up!

This quarter there were 242 notifications overall, noting that multiple notifications relating to the same incident (including the infamous PageUp breach) were counted as a single breach.

The OAIC’s month by month breakdown shows a steady increase in notifications by month, going from 55 notifications in March to 90 notifications in June. Overall, accounting for the partial quarter in the first report, we’ve seen a doubling in the rate of notifications.

However, there are a lot of factors that may be affecting the notification rate. Since February, many companies and agencies have implemented new processes to make sure they comply with the NDB scheme, and this may be driving more notifications. On the other hand, in our experience a lot of companies and agencies are still unsure about their notification obligations and when to notify, so they might be overreporting – notifying breaches that may not meet the ‘likely risk of serious harm’ threshold just to be sure that they are limiting their compliance risk.

At this early stage of the scheme, we think it’s premature to draw any conclusions on rising notification rates. The rate may change significantly as companies and agencies come to grips with their obligations and what does and doesn’t need to be reported.

Teach your staff well

59% of breaches this quarter were identified as being caused by malicious or criminal attacks. The vast majority (68%) of attacks were cyber incidents and, of those, over three quarters related to lost or stolen credentials. This includes attacks based on phishing, malware, and social engineering. Brute force attacks also featured significantly.

We think that the obvious conclusion here is that there’s an opportunity to significantly reduce the attack surface by training your staff to better protect their credentials. For example, teach them how to recognise phishing attempts, run drills, and enforce regular password changes.

There are also some system issues that could be addressed, such as multi-factor authentication, enforcing complex password requirements, and implementing rate limiting on credential submissions to prevent brute force attacks.
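To illustrate the last of those measures, here’s a minimal sketch of login rate limiting in Python. All names and thresholds here are our own, purely for illustration; a production system would persist this state and typically combine per-account with per-IP limits:

```python
import time
from collections import defaultdict

# Illustrative sketch: a fixed-window rate limiter that refuses further
# login attempts for an account after too many recent failures, which
# blunts brute force attacks against stolen or guessed credentials.

MAX_ATTEMPTS = 5        # failed attempts allowed per window (our choice)
WINDOW_SECONDS = 300    # 5-minute window (our choice)

_failures = defaultdict(list)  # username -> timestamps of recent failures

def allow_attempt(username, now=None):
    """Return True if a login attempt for this account may proceed."""
    now = time.time() if now is None else now
    # Drop failures that have aged out of the current window.
    recent = [t for t in _failures[username] if now - t < WINDOW_SECONDS]
    _failures[username] = recent
    return len(recent) < MAX_ATTEMPTS

def record_failure(username, now=None):
    """Call this after each failed login."""
    _failures[username].append(time.time() if now is None else now)
```

After five recorded failures inside the window, `allow_attempt` returns False until the earliest failures age out, slowing an attacker from thousands of guesses per minute to a handful.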

To err is human

Human error accounted for 36% of breaches this quarter. It was the leading cause in the first quarterly report, but again, there are a number of factors that could have caused this shift.

Notably, over half of the breaches caused by human error were scenarios in which personal information was sent to the wrong person – by email, post, messenger pigeon or what have you, but especially email (29 notifications). Again, this suggests a prime opportunity to reduce your risk by training your staff. For example, it appears that at least 7 people this quarter didn’t know (or forgot) how to use the BCC (blind carbon copy) function in their email.

People make mistakes – it’s a known, predictable risk. We should be designing processes and systems to limit it, such as controls that catch addressing errors before a message goes out.
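As an illustration of that kind of guard-rail, a pre-send check could flag a message that puts a bulk list of external recipients in To/Cc (where every address is visible to every recipient) rather than Bcc. Everything here – the function name, the threshold, the internal domain – is a hypothetical sketch:

```python
# Hypothetical pre-send check: warn when a message puts a bulk list of
# external recipients in To/Cc instead of Bcc. The threshold and the
# internal domain are assumptions for illustration.
BULK_THRESHOLD = 10


def check_outgoing(to_cc_addresses, internal_domain="example.com"):
    """Return a list of warnings for a message that is about to be sent."""
    external = [a for a in to_cc_addresses
                if not a.lower().endswith("@" + internal_domain)]
    warnings = []
    if len(external) >= BULK_THRESHOLD:
        warnings.append(
            "%d external addresses in To/Cc - consider moving them to Bcc"
            % len(external))
    return warnings
```

A check like this won’t stop every mistake, but it turns a silent error into a prompt at the moment the sender can still fix it – which is exactly where process design beats training alone.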

Doctors and bankers and super, oh my!

Much ink has been spilt over information governance in the health and finance sectors recently, and those sectors accounted for more notifications than any other this quarter (49 and 36 notifications respectively). These are massive industry sectors – healthcare alone accounts for 13.5% of jobs in Australia – so scale is likely a factor in the high number of notifications. Helpfully, the OAIC has provided industry-level breakdowns for each of them.

In the finance sector (including superannuation providers), human error accounted for 50% of all breaches, and malicious attacks for 47%. Interestingly, in the finance sector almost all the malicious attacks were based on lost or stolen credentials, so we’re back to staff training as a key step to reduce risk.

Bucking the trend, human error accounted for almost two thirds of breaches in the health sector – clearly there’s some work to be done there on processes and staff training. Of the breaches caused by malicious attacks, 45% were thefts of physical documents or devices. This isn’t particularly surprising: the small medical practices that make up a large part of the sector can find it challenging to provide high levels of physical security.

It’s important to note that these notifications only came from private health care providers – public providers are covered under state-based privacy legislation. These statistics also exclude notifications relating to the My Health Records system, which the OAIC reports on separately in its annual report. So they don’t offer a full picture of the Australian health industry as a whole.

All in all, this quarter’s NDB scheme report contains some interesting insights, but as agencies and organisations become more familiar with the scheme (and continue to build their privacy maturity), we may see things shift a bit. Only time will tell.



GDPR is here

If the recent flurry of emails from organisations sending privacy policy updates didn’t tip you off, the new EU General Data Protection Regulation (GDPR) commences today.

Reading every one of those emails (something even those of us in the privacy world struggle with) might give you the impression that there’s a standard approach to GDPR compliance. But the truth is that how your organisation has (hopefully) prepared for GDPR, and how it will continue to improve its privacy practices, is highly variable.

We’ve covered the GDPR at length on this blog, and a collection of links to our various articles is at the bottom of this post – but first, we’d like to set out a few thoughts on what the GDPR’s commencement means in practice.

Remember the principles

If the GDPR applies to your organisation, you’ve presumably taken steps to prepare for the requirements that apply under the new privacy regime. Among these are new requirements relating to data breach notification, as well as new rights and freedoms for individuals whose personal data you may be processing.

One aspect of the GDPR that has received plenty of attention is the new penalties, which can be up to 4% of an organisation’s annual worldwide turnover or €20 million, whichever is greater. Certainly, those numbers have been very effective in scaring plenty of people, and they may prompt you to check once again whether your organisation fully meets the new requirements under the GDPR.
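For illustration only, that cap works out as a simple maximum – it says nothing about what fine a supervisory authority would actually impose in a given case:

```python
def gdpr_max_fine_upper_tier(annual_turnover_eur):
    """Upper-tier cap on GDPR administrative fines (Art. 83(5)):
    the greater of EUR 20 million or 4% of annual worldwide turnover.
    Illustrative only - actual fines are set by supervisory authorities."""
    return max(20_000_000, 0.04 * annual_turnover_eur)
```

So for an organisation turning over €1 billion, the 4% limb dominates and the cap is €40 million; below €500 million in turnover, the €20 million floor applies instead.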

However, the reality isn’t quite so straightforward (or scary). Much of the GDPR is principles-based, meaning that there isn’t always a single way to comply with the law – you need to take account of your organisation’s circumstances and the types of personal data it processes to understand where you stand in relation to GDPR’s requirements.

Although we don’t expect EU supervisory authorities to provide an enforcement ‘grace period’, we’re also of the view that enforcement activities will ramp up gradually. The authorities understand that, for many organisations, GDPR compliance is a journey. Those organisations that can demonstrate they’ve taken every reasonable step to prepare for GDPR, and which have a plan for continuing to improve their privacy compliance and risk programs, will be far better placed than those that have done little or nothing to get ready for the new law.

If your organisation still has work to do to comply with the GDPR, or you want to continue improving your compliance and risk program (and there is always more to do!), there is plenty of assistance available to help you navigate the GDPR and understand how it applies to your organisation.

Our previous coverage of the GDPR

Tim compares Australia’s Privacy Act with the GDPR

Melanie spoke to Bloomberg about driving competitive advantage from GDPR compliance

Head to Head: the GDPR and the Australian Privacy Principles (Part 1 and Part 2)

A Lesson in Data Privacy: You Can’t Cram for GDPR

Facebook and Cambridge Analytica: Would the GDPR have helped?

5 things you need to know about GDPR’s Data Protection Officer requirement



5 things you need to know about GDPR’s Data Protection Officer requirement

This article was originally published in issue #83 of Privacy Unbound under the title ‘5 Questions about DPOs’. Privacy Unbound is the journal of the International Association of Privacy Professionals, Australia-New Zealand (iappANZ).

1. What is a ‘DPO’, anyway? What are they even supposed to do?

In a nutshell, the Data Protection Officer (DPO) is a senior advisor with oversight of how your organisation handles personal data.

Specifically, DPOs should be able to:

  • inform and advise your organisation and staff about their privacy compliance obligations (with respect to the GDPR and other data protection laws)
  • monitor privacy compliance, which includes managing internal data protection activities, advising on data protection impact assessments, training staff and conducting internal audits
  • act as a first point of contact for regulators and for the individuals whose data you are handling (such as users, customers and staff) (Art. 39(1)).

2. But we’re not based in Europe, so do we even need one?

Well, even if you aren’t required to have one, you should have one. If you’re processing, managing or storing personal data about EU residents, you’ll need to comply with the requirements of the GDPR – this is one of those requirements, whether you’re based in the EU or not.

Specifically, the GDPR requires that you appoint a DPO in certain circumstances (Art. 37(1)).

These include if you carry out ‘large scale’ systematic monitoring of individuals (such as online behavioural tracking).

You’ll also need to appoint a DPO if you carry out ‘large scale processing of personal data’, including:

  • ‘special categories of data’ as set out in Article 9 – that is, personal data that reveals racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, as well as genetic data, biometric identifiers, health information, and information about a person’s sex life or sexual orientation.
  • data relating to criminal convictions and offences (per Art. 10).

The Article 29 Working Party has stated[1] that ‘large scale’ processing could include, for example, a hospital processing its patient data, a bank processing transactions, the analysis of online behavioural advertising data, or a telco processing the data required to provide phone or internet services.

Even if you don’t fit into one of these categories, you can still appoint a DPO in the spirit of best practice, and to ensure that your company is leading from the top when it comes to privacy.

In this respect, New Zealand is already ahead of the game. Entities covered by the New Zealand Privacy Act are already required to have a privacy officer, and they largely fulfil the same functions as a DPO.[2] However, they’ll still need to meet the other DPO requirements; see below.

While Australia hasn’t made having a privacy officer an express requirement for the private sector, the Office of the Australian Information Commissioner recommends that companies appoint a senior privacy officer as part of an effective privacy management framework.[3]

Government agencies aren’t off the hook

Being in the public service will not save you. Public authorities that collect the information of EU residents are also required to have a DPO (Art. 37(1)).

It’s worth noting that Australian Government agencies will need to appoint privacy officers and senior privacy ‘champions’ under the Australian Government Agencies Privacy Code,[4] which comes into force on 1 July 2018. Agency Privacy Champions may also be able to serve as the DPO.

As New Zealand Government agencies already have privacy officers, the only question they must answer is whether their privacy officer meets the other DPO requirements; see below.

3. OK, fine. We get it. We need a DPO. Who should we appoint?

The DPO needs to be someone that reports to the ‘highest management level’ of your organisation; that is, Board-level or Senior Executive (Art. 38(3)).

They’ll need to be suitably qualified, including having expert knowledge of the relevant data protection laws and practices (Art. 37(5)).

The DPO also needs to be independent: they can’t be directed to carry out their work as DPO in a certain way, or be penalised or fired for doing it (Art. 38(3)). You’ll also need to ensure they’re appropriately resourced to do the work (Art. 38(2)).

If you’re a large organisation with multiple corporate subsidiaries, you can appoint a single DPO as long as they are easily accessible by each company (Art. 37(3)).

You can appoint one of your current staff as DPO (Art. 37(6)), as long as their other work doesn’t conflict with their DPO responsibilities (Art. 38(6)). In practice, your DPO can’t work on anything they might be required to advise or intervene on – they can’t also have operational responsibility for data handling. You can’t, for example, appoint your Chief Security Officer as your DPO.

4. But that means we can’t appoint any of our current staff. We can’t take on any new hires right now. Can we outsource this?

Yes, you can appoint an external DPO (Art. 37(6)), but whoever you contract will still need to meet all of the above requirements.

Some smaller companies might not have enough work to justify a full-time DPO; an outsourced part-time DPO might be a good option for these organisations.

It might also be hard to find qualified DPOs, at least in the short term; IAPP has estimated that there will be a need for 28,000 DPOs in the EU.[5] A lot of companies in Australia and New Zealand are already having trouble finding qualified privacy staff, so some companies might have to share.

5. This all seems like a lot of trouble. Can we just wing it?

I mean, sure. If you really want to. But under the GDPR, failure to meet the DPO obligations may attract an administrative fine of up to €10 million or 2% of your annual global turnover, whichever is higher (Art. 83(4)). Previous regulatory action in the EU on privacy issues has also attracted substantial media attention. Is it really worth the risk? Especially given that, in the long run, robust privacy practices will help you keep your users and customers safe – an effective DPO may well save you money.

[1] http://ec.europa.eu/information_society/newsroom/image/document/2016-51/wp243_annex_en_40856.pdf

[2] S23, Privacy Act 1993 (NZ); http://www.legislation.govt.nz/act/public/1993/0028/latest/DLM297074.html

[3] https://www.oaic.gov.au/agencies-and-organisations/guides/privacy-management-framework

[4] https://www.oaic.gov.au/privacy-law/privacy-registers/privacy-codes/privacy-australian-government-agencies-governance-app-code-2017

[5] https://iapp.org/news/a/study-at-least-28000-dpos-needed-to-meet-gdpr-requirements/

