APRA gets $60m in new funding: CPS 234 just got very real

We have previously talked about APRA’s new information security regulation, CPS 234, and how global fines will influence its enforcement.

Today we saw a clear statement of intent from the government in the form of $58.7 million of new funding for APRA to focus on the identification of new and emerging risks such as cyber and fintech.

As previously stated, if you are in the line of sight of CPS 234, either as a regulated entity or a supplier to one, we advise you to have a clear plan in place for how you will meet your obligations. No one wants to be the Tesco of Australia.

If you would like to talk to someone from elevenM about getting ready for CPS 234, please drop us a note at hello@elevenM.com.au or call us on 1300 003 922.



Up close and personal with the Singaporean Cybersecurity Act

As part of a recent engagement, we carried out an in-depth review of Singapore’s new Cybersecurity Act.

What do we think?

The Act is a bold approach to ensuring the security of a nation’s most critical infrastructure, which we think will be copied by other countries and may even be a model for large enterprises.

Why bold?

A fundamental challenge is that the level of cybersecurity protecting any piece of infrastructure at any given time is usually heavily dependent on a Chief Information Security Officer’s (CISO) ability to present cyber risk to those controlling the purse strings. The result is varied levels of control and capability across some very important infrastructure.

So what is the answer? Like most things, it depends who you ask. Singapore has taken the bold step of regulating the cybersecurity of the technology infrastructure the country needs to run smoothly.

Our key takeaways

  • The Act introduces a Cyber Commissioner who will “respond to cybersecurity incidents that threaten the national security, defence, economy, foreign relations, public health, public order or public safety, or any essential services, of Singapore, whether such cybersecurity incidents occur in or outside Singapore” – It will be interesting to see how this works in practice. Many global companies covered by this framework will be hesitant to provide that level of access to a foreign state.
  • The Act creates the category of Critical Information Infrastructure (CII) in Singapore, meaning “the computer or computer system which is necessary for the continuous delivery of an essential service, and the loss or compromise of the computer or computer system will have a debilitating effect on the availability of the essential service in Singapore” – These CIIs span most industries across both the public and private sector. It will be very interesting to see what is determined to be a CII and how private companies deal with this. Even from an investment perspective, who pays to increase the security posture or rewrite the supporting business processes?
  • Each designated CII will have an owner who will be assigned statutory duties specific to the cybersecurity of the CII. – These owners will be held to account by the Commissioner. Failure to fulfil their role can result in personal fines of up to $100,000 or imprisonment for a term not exceeding 2 years. Given most companies already struggle to define the ‘owner’ of a system, will this push the ownership of these business/operational systems to CISOs?
  • The Act introduces a licensing framework for suppliers where “No person is to provide licensable cybersecurity service without licence”. – A very interesting one. Suppliers of cybersecurity services to the CIIs will need to hold a licence issued by the Commissioner. A sign of things to come in the supplier risk space, perhaps?

The Act can be found here:  Singapore Cybersecurity Act 2018



Introducing our free data breach notification tool

When we previously looked at the trends emerging from the mandatory notifiable data breaches scheme, we observed that organisations seem to be playing it safe and reporting when in doubt, possibly leading to overreporting.

We’re big supporters of mandatory notification, and we agree that when there’s doubt, it’s safer to report. But we also think it’s important that we all get better at understanding and managing data breaches, so that individuals and organisations don’t become overwhelmed by notifications.

That’s why we’ve prepared a free, fast and simple tool to help you consider all of the relevant matters when deciding whether a data breach needs to be notified.

Download here

Keep in mind that this is just a summary of relevant considerations – it’s not legal advice, and it only addresses Australian requirements. If your organisation handles personal information or personal data outside of Australia, you might need to consider the notification obligations in other jurisdictions.
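
For readers who want a feel for the assessment before downloading the tool, the central question under the NDB scheme is whether an ‘eligible data breach’ has occurred. Below is a very rough sketch of that logic in Python – our own simplification for illustration only, not the tool itself, and certainly not legal advice.

```python
# A rough sketch of the notifiability assessment under Australia's NDB scheme.
# The structure and names below are our own simplification -- use the tool
# (and your advisers) for real decisions.

from dataclasses import dataclass

@dataclass
class BreachAssessment:
    access_disclosure_or_loss: bool      # unauthorised access, disclosure or loss of personal information
    likely_serious_harm: bool            # likely to result in serious harm to any affected individual
    harm_prevented_by_remediation: bool  # remedial action has removed the likely risk of serious harm

def must_notify(a: BreachAssessment) -> bool:
    """True if this looks like an 'eligible data breach' requiring notification
    to the OAIC and affected individuals."""
    if not a.access_disclosure_or_loss:
        return False
    if a.harm_prevented_by_remediation:
        return False
    return a.likely_serious_harm

# Example: an unencrypted laptop full of customer records goes missing
print(must_notify(BreachAssessment(True, True, False)))  # True
```

The hard part in practice is, of course, the ‘likely serious harm’ judgement – which is exactly what the tool is designed to help you think through.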

Also remember that notification is just one aspect of a comprehensive data breach response plan. If your organisation handles personal information, you should consider adopting a holistic plan for identifying, mitigating and managing data breaches and other incidents.

Please let us know if you find this tool useful or if you have any feedback or suggestions.



The journey toward trust – Part 3: Trust through reputational management

This is the third and final article in a three-part series that explores the notion of trust in today’s digital economy, and how organisations can practically build trust. In part 1 we took a deeper look at the meaning and underlying principles of trust. Part two explored best practice approaches to using regulatory compliance to build trust.

In this piece, we look at the role of reputation management in building trust on privacy and security issues. 

Reputation management

The way an organisation manages its reputation is unsurprisingly tightly bound up with trust.

While there are many aspects to reputation management, an effective public response is one of the most critical requirements, if not the most critical.

In the era of fast-paced digital media, a poorly managed communications response to a cyber or privacy incident can rapidly damage trust. With a vocal and influential community of highly informed security and privacy experts active on social media, corporate responses that don’t meet the mark get pulled apart very quickly.

Accordingly, a bad response produces significantly worse outcomes, including serious financial impacts, executive scalps, and broader repercussions like government and regulatory inquiries and class actions.

A Google search will quickly uncover examples of organisations that mishandled their public response. Just in recent weeks we learned Uber will pay US$148 million over a 2016 breach, largely because of failures in how it went about disclosing the breach.

Typically, examples of poor public responses to breaches include one or more of the following characteristics:

  • The organisation was slow to reveal the incident to customers (ie. not prioritising truth, safety and reliability)
  • The organisation was legalistic or defensive (ie. not prioritising the protection of customers)
  • The organisation pointed the finger at others (ie. not prioritising reliability or accountability)
  • The organisation provided incorrect or inadequate technical details (ie. not prioritising a show of competence)

As we can see from the analyses in brackets, the reason public responses often unravel as they do is that they feature statements that violate the key principles of trust we outlined in part one of this series.

Achieving a high-quality, trust-building response that reflects and positively communicates principles of trust is not necessarily easy, especially in the intensity of managing an incident.

An organisation’s best chance of getting things right is to build communications plans in advance that embed the right messages and behaviours.

Plans and messages will always need to be adapted to suit specific incidents, of course, but this proactive approach allows organisations to develop a foundation of clear, trust-building messages in a calmer context.

It’s equally critical to run exercises and simulations around these plans, to ensure key staff are aware of their roles and are aligned to the objectives of a good public crisis response, and that hiccups are addressed before a real crisis occurs.



The journey toward trust – Part 2: Trust through regulatory compliance

This is the second article in a three-part series that explores the notion of trust in today’s digital economy, and how organisations can practically build trust. In part 1 we took a deeper look at what trust means, and uncovered some guiding principles organisations can work towards when seeking to build trust.

In this piece, we look at best practice approaches to using regulatory compliance to build trust.

Privacy laws and regulatory guidance provide a pretty good framework for doing the right thing when it comes to trusted privacy practices (otherwise known as the proper collection, use and disclosure of personal information).

We are the first to advocate for a compliance-based framework.  Every entity bound by the Privacy Act 1988 and equivalent laws should be taking proactive steps to establish and maintain internal practices, procedures and systems that ensure compliance with the Australian Privacy Principles.  They should be able to demonstrate appropriate accountabilities, governance and resourcing.

But compliance alone won’t build trust.

For one, the majority of Australian businesses are not bound by the Privacy Act because they fall under its $3 million annual turnover threshold. This is one of several reasons why Australian regulation is considered inadequate by EU data protection standards.

Secondly, there is variability in the ways that entities operationalise privacy. The regulator has published guidance and tooling for the public sector to help create some common benchmarks and uplift maturity, recognising that some entities are applying only the bare minimum. No such guidance exists for the private sector – yet.

Consumer expectations are also higher than the law. It may once have been acceptable for businesses to use and share data to suit their own purposes whilst burying their notices in screeds of legalese. However, the furore over Facebook and Cambridge Analytica shows that sentiment has changed (and also raises a whole bucket of governance issues). Similarly, consumers are increasingly global and expect to be protected by the high standards set by the GDPR and other stringent frameworks wherever they are, including rights such as the right to be forgotten and the right to data portability.

Lastly, current compliance frameworks do not help organisations to determine what is ethical when it comes to using and repurposing personal information. In short, an organisation can comply with the Privacy Act and still fall into an ethical hole with its data uses.

Your organisation should be thinking about its approach to building and protecting trust through privacy frameworks. Start with compliance, then seek to bolster weak spots with an ethical framework: a statement of boundaries to which your organisation should adhere.


In the third and final part of this series, we detail how an organisation’s approach to reputation management for privacy and cyber security issues can build or damage trust.



The journey toward trust – Part 1: Understanding trust

Join us for a three-part series that explores the notion of trust in today’s digital economy, and how organisations can practically build trust. We also focus on the role of regulatory compliance and reputation management in building trust, and outline best practice approaches.

Be it users stepping away from the world’s biggest social media platform after repeated privacy scandals, a major airline’s share price plummeting after a large data breach, or Australia’s largest bank committing to a stronger focus on privacy and security as it rebuilds its image – events in recent weeks provide a strong reminder of the fragility and critical importance of trust for businesses seeking success in the digital economy.

Bodies as illustrious as the World Economic Forum and OECD have written at length about the pivotal role of trust as a driving factor for success today.

But what does trust actually mean in the context of your organisation? And how do you practically go about building it?

At elevenM, we spend considerable time discussing and researching these questions from the perspectives of our skills and experiences across privacy, cyber security, risk, strategy and communications.

A good starting point for any organisation wanting to make trust a competitive differentiator is to gain a deeper understanding of what trust actually means, and specifically, what it means for that organisation.

Trust is a layered concept, and different things are required in different contexts to build trust.

Some basic tenets of trust become obvious when we look to popular dictionaries. Ideas like safety, reliability, truth, competence and consistency stand out as fundamental principles.

Another way to learn what trust means in a practical sense is to look at why brands are trusted. For instance, the most recent Roy Morgan survey listed supermarket ALDI as the most trusted brand in Australia. Roy Morgan explains this is built on ALDI’s reputation for reliability and meeting customer needs.

Importantly, the dictionary definitions also emphasise an ethical aspect – trust is built by doing good and protecting customers from harm.

Digging a little deeper, we look to the work of trust expert and business lecturer Rachel Botsman, who describes trust as “a confident relationship with the unknown”.  This moves us into the digital space in which organisations operate today, and towards a more nuanced understanding.

We can infer that consumers want new digital experiences, and an important part of building trust is for organisations to innovate and help customers step into the novel and unknown, but with safety and confidence.

So, how do we implement these ideas about trust in a practical sense?

With these definitions in mind, organisations should ask themselves some practical and instructive questions that illuminate whether they are building trust.

  • Do customers feel their data is safe with you?
  • Can customers see that you seek to protect them from harm?
  • Are you accurate and transparent in your representations?
  • Do your behaviours, statements, products and services convey a sense of competence and consistency?
  • Do you meet expectations of your customers (and not just clear the bar set by regulators)?
  • Are you innovative and helping customers towards new experiences?

In part two of this series, we will explore how regulatory compliance can be used to build trust.



What does the record FCA cyber fine mean for Australia?

First, a bit of context: The Financial Conduct Authority (FCA) is the conduct and prudential regulator for financial services in the UK. It is, in part, an equivalent of the Australian Prudential Regulation Authority (APRA).

Record cyber related fine

This week the FCA handed down a record cyber-related fine to Tesco Bank, the banking arm of the UK’s largest supermarket chain, for failing to protect account holders from a “foreseeable” cyber attack two years ago. The fine totalled £23.4 million but, due to an agreed early-stage discount, was reduced by 30% to £16.4 million.

Cyber attack?

It could be argued that this was not a cyber attack, in that it was not a breach of Tesco Bank’s network or software but rather a new twist on good old card fraud. For clarity, though, the FCA defined the attack which led to this fine as: “a mass algorithmic fraud attack which affected Tesco Bank’s personal current account and debit card customers from 5 to 8 November 2016.”

What cyber rules did Tesco break?

Interestingly, the FCA does not have any cyber-specific regulation. It exercised its powers through provisions published in its Handbook, which contains Principles: general statements of firms’ fundamental obligations. Tesco Bank’s fine was therefore issued against the comfortably generic Principle 2: “A firm must conduct its business with due skill, care and diligence”.

What does this mean for Australian financial services?

APRA, you may recall from our previous blog, has issued a draft information security regulation, CPS 234. This new regulation sets out clear rules on how regulated Australian institutions should be managing their cyber risk.

If we use the Tesco Bank incident as an example, here is how APRA could use CPS 234:

Information security capability: “An APRA-regulated entity must actively maintain its information security capability with respect to changes in vulnerabilities and threats, including those resulting from changes to information assets or its business environment”. – Visa provided Tesco Bank with threat intelligence, as Visa had noted this threat occurring in Brazil and the US. Whilst Tesco Bank actioned this intelligence against its credit cards, it failed to do so against its debit cards, which netted the threat actors £2.26 million.

Incident management: “An APRA-regulated entity must have robust mechanisms in place to detect and respond to information security incidents in a timely manner. An APRA-regulated entity must maintain plans to respond to information security incidents that the entity considers could plausibly occur (information security response plans)”.  – The following incident management failings were noted by the FCA:

  • Tesco Bank’s Financial Crime Operations team failed to follow written procedures.
  • The Fraud Strategy Team drafted a rule to block the fraudulent transactions, but coded the rule incorrectly.
  • The Fraud Strategy Team failed to monitor the rule’s operation and did not discover until several hours later that the rule was not working.
  • The responsible managers should have invoked crisis management procedures earlier.

Do we think APRA will be handing out fines this size?

Short answer: yes. Following the Royal Commission into Misconduct in the Banking, Superannuation and Financial Services Industry, there is very little love for the financial services industry in Australia. Our sense is that politicians who want to remain politicians will need to be seen to be tough on financial services, and enforcement authorities like APRA will therefore most likely see an increase in their budgets.

Unfortunately for those of you in cyber and risk teams in financial services, it is a bit of a perfect storm. The regulator has a new set of rules to enforce, the money to conduct investigations and a precedent from within the Commonwealth.

What about the suppliers?

Something that not many are talking about but really should be, is the supplier landscape. Like it or not, the banks in Australia are some of the biggest businesses in the country. They use a lot of suppliers to deliver critical services including cyber security. Under the proposed APRA standard:

Implementation of controls: “Where information assets are managed by a related party or third party, an APRA-regulated entity must evaluate the design and operating effectiveness of that party’s information security controls”.

Banks are now clearly accountable for the effectiveness of the information security controls operated by their suppliers as they relate to a bank’s defences. If you are a supplier (major or otherwise) to the banks, given this new level of oversight from their regulator, we advise you to get your house in order because it is likely that your door will be knocked upon soon.



You get an Aadhaar! You get an Aadhaar! Everybody gets an Aadhaar!

On 26 September 2018, the Supreme Court of India handed down a landmark ruling on the constitutionality of the biggest biometric identity system in the world, India’s Aadhaar system.

The Aadhaar Act was passed in 2016, and the system now has more than a billion registered users. The Aadhaar is a 12-digit number issued to each resident of India, linked to biometrics including all ten fingerprints, a facial photo and iris scans, plus basic demographic data, all held in a central database. Since being implemented, it’s been turned to a variety of uses, from proof of identification, tracking of government employee attendance, ration distribution and fraud reduction, to entitlements for subsidies and distribution of welfare benefits. The Aadhaar has quickly become mandatory for access to essential services such as bank accounts, mobile phone SIMs and passports.

Beyond banks and telcos, other private companies have also been eager to use the Aadhaar, spurring concerns about private sector access to the database.

In 2012, a series of challenges were levelled at the Aadhaar, including that the Aadhaar violated constitutionally protected privacy rights.

In a mammoth 1,448-page judgment, the Court made several key rulings:

  • The Court ruled that the Aadhaar system does not in itself violate the fundamental right to privacy. However, the Court specifically called out a need for a ‘robust data protection framework’ to ensure privacy rights are protected.
  • However, the Aadhaar cannot be mandatory for some purposes, including access to mobile phone services and bank accounts, as well as access to some government services, particularly education. Aadhaar-authentication will still be required for tax administration (this resolves some uncertainty from a previous ruling).
  • The private sector cannot demand that an Aadhaar be provided, and private usage of the Aadhaar database is unconstitutional unless expressly authorised by law.
  • The Court also specified that law enforcement access to Aadhaar data will require judicial approval, and any national security-based requests will require consultation with High Court justices (i.e., the highest court in the relevant Indian state).
  • Indian citizens must be able to file complaints regarding data breaches involving the Aadhaar; prior to this judgment, the ability to file complaints regarding violations of the Aadhaar Act was limited to the government authority administering the Aadhaar system, the Unique ID Authority of India.

The Aadhaar will continue to be required for many essential government services, including welfare benefits and ration distribution – s7 of the Aadhaar Act makes Aadhaar-based authentication a pre-condition for accessing “subsidy, benefits or services” provided by the government. This has been one of the key concerns of Aadhaar opponents – that access to essential government services shouldn’t be dependent on Aadhaar verification. There have been allegations that people have been denied rations due to ineffective implementation of Aadhaar verification, leading to deaths.

It’s also unclear whether information collected under provisions which have now been ruled as unconstitutional – for example, Aadhaar data collected by Indian banks and telcos – will need to be deleted.

As Australia moves towards linking siloed government databases and creating its own digital identity system, India’s experience with the Aadhaar offers many lessons. A digital identity system offers many potential benefits, but all technology is a double-edged sword. Obviously, Australia will need to ensure that any digital identity system is secure but, beyond that, that the Australian public trusts the system. To obtain that trust, Australian governments will need to ensure the system and the uses of the digital identity are transparent and ethical – that the system will be used in the interests of the Australian public, in accordance with clear ethical frameworks. Those frameworks will need to be flexible enough to enable interfaces with the private sector to reap the full benefits of the system, but robust enough to ensure those uses are in the public interest. Law enforcement access to government databases remains a major concern for Australians, and will need to be addressed. It’s a tightrope, and it will need to be walked very carefully indeed.



The week in review – Oct 1, 2018

Helping your business stay abreast and make sense of the critical stories in digital risk, cyber security and privacy.

Week of: Sep 24-Oct 1

The round-up:

Multiple articles this week emphasise the rising financial cost of data breaches, particularly as a result of substantial regulatory fines. Major organisations such as Facebook, Equifax and Uber are all reportedly facing sizable penalties as a result of recent data breaches. The financial hit coincides with heightened discussion and reporting of privacy issues and complaints, likely the result of public awareness having increased in the wake of new legislation introduced over the past 12 months.

Key articles:

Uber will pay $148 million in connection with a 2016 data breach and cover-up

Summary: Uber paid hackers $100,000 to delete data that they had illegally accessed on 57 million customers and drivers, as well as to keep the breach quiet. Now they’re paying $148 million as a result of a legal settlement related to the breach.

Key risk takeaway: The large pay-out draws sharp focus to the poor quality of Uber’s initial response, which was to keep the breach quiet. Businesses are increasingly being judged not merely for having had a breach, but on how well they respond. To mitigate the risk of a poorly handled response, organisations should have well-rehearsed breach response plans and practice approaching public disclosure of security and privacy incidents with transparency and accountability.

Tags: #breach #response #comms

Facebook Faces Potential $1.63 Billion Fine in Europe Over Data Breach

Summary: A European Union privacy watchdog could fine Facebook as much as $1.63 billion for a data breach in which hackers compromised the accounts of over 50 million users.

Key risk takeaway: A series of recent incidents – including the Cambridge Analytica scandal – continues to undermine Facebook’s public standing on privacy and security issues. The sizeable potential fine here demonstrates the significant financial penalties outlined under recently introduced privacy regimes, such as GDPR and the Notifiable Data Breach scheme. Given growing public expectations around privacy, regulators will likely be keen to visibly enforce these new regulations. Organisations should accordingly prioritise achieving a thorough understanding of their obligations under these regulatory regimes.

Tags: #breach #privacy #regulations #GDPR

Equifax fined maximum penalty under 1998 UK data protection law

Summary: Credit monitoring giant Equifax has been hit with the maximum penalty from the UK’s data protection agency for its actions related to the company’s massive data breach.

Key risk takeaway: While the credit monitoring company received the maximum penalty from the UK’s data protection agency relating to a 2017 breach, this was under the pre-GDPR regime. Businesses face significantly higher fines (as much as 4 per cent of global turnover) under GDPR, so would be well advised to understand their obligations.

Tags: #breach #privacy #regulations #GDPR

Facebook using 2FA phone number to target you with ads

Summary: Facebook confirmed it uses the phone numbers provided by users specifically for security purposes to also target them with ads.

Key risk takeaway: While Facebook defended this practice as using information provided by users to “offer a better, more personalised experience”, and pointed out that the practice was outlined in its data use policies, the tech giant has faced criticism for having acted unethically. This illustrates the importance not only of transparency around data use and collection, but of ensuring it is carried out in line with customer expectations.

Tags: #privacy #security

Tech giants open up on privacy questions

Summary: Tech giants Apple, Amazon, Google and Twitter were in front of the US Senate Commerce Committee to outline their approaches to user privacy, and to persuade lawmakers on their preferred approach to regulation and legislation.

Key takeaway: Governments and regulators are looking to visibly respond to the growing public expectation around data protection, as evidenced by the introduction of new legislative regimes and conducting of hearings such as the one outlined in this article. Even in the US (which is often deemed less stringent on privacy) consideration is now being given to mirroring the EU’s GDPR, with California already having done so via its new digital privacy laws. The tech giants are reportedly advocating instead for federal privacy legislation which would supersede state legislation such as California’s, and which they would likely seek to influence. In light of this global patchwork of regulations, organisations should seek to understand the extent of their obligations under different regimes, particularly where they offer services to international customers.

Tags: #privacy

New IoT botnet infects wide range of devices

Summary: Researchers have unearthed new malware attacking a large number of Internet-of-Things (IoT) devices.

Key takeaway: There continue to be growing signs of cyber attackers targeting internet-connected devices, particularly routers. Organisations should conduct a thorough inventory of their assets to understand the devices in their environment, and any vulnerabilities. Device security “hygiene” – such as patching devices and changing default passwords – is critical.

Tags: #security #IOT

Australia’s surveillance laws could damage internet security globally, overseas critics say

Summary: Australia’s Assistance and Access Bill – which the Government argues is necessary to bolster national security and law enforcement – is attracting concern from around the world over its potentially weakening effect on online security.

Key takeaway: Under the legislation, communications companies could be required to assist the Government to access encrypted communications. A broader takeaway from the criticism of the legislation by privacy advocates (as well as concerns raised by technology companies) is the underlying community expectation that organisations will protect customer data.

Tags: #privacy #government #security

France records big jump in privacy complaints since GDPR

Summary: Another European data protection agency reports a sharp rise in the numbers of complaints since the EU introduced GDPR.

Key takeaway: Increased noise around new privacy regulations is translating into increased consumer awareness, and subsequently, complaints. Given these trends, organisations must become more proactive on data protection matters.

Tags: #privacy

Don’t call me, I’ll call you

You’ve just pulled dinner out of the oven, the kids have been wrangled to the table, and you’re just about to sit down.

Suddenly, your miracle of domestic logistics is shattered by the klaxon of your phone ringing. Juggling a hot plate of roast chicken and a small, wriggling child, you grab for the handset… only to be greeted by the forced enthusiasm of a long-suffering call centre worker who desperately wants to tell you about simply fantastic savings on energy prices.

The Do Not Call Register (DNCR) has been in place since 2006. It allows Australians to place their phone number on a register indicating that they don’t wish to receive marketing calls or faxes, with fines applying for non-compliance.

The ACMA enables organisations that want to conduct telemarketing campaigns to subscribe to the Register and ‘wash’ their call lists against it. This helps organisations make sure they aren’t calling people who don’t want to hear from them.
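
Conceptually, a wash is just a set difference: drop from your campaign list every number that appears on the Register. Here’s a minimal sketch of the idea – the file names and CSV layout are our own illustrative assumptions, not the ACMA’s actual washing interface.

```python
# Minimal sketch of "washing" a call list against a do-not-call list.
# File names and CSV layout are illustrative assumptions only -- in practice
# lists are washed through the ACMA-approved washing service.

import csv

def load_numbers(path: str) -> set[str]:
    """Read one phone number per row and normalise to digits only."""
    with open(path, newline="") as f:
        return {
            "".join(ch for ch in row[0] if ch.isdigit())
            for row in csv.reader(f) if row and row[0].strip()
        }

def wash(call_list: set[str], register: set[str]) -> set[str]:
    """Return only the numbers not on the register, i.e. safe to call."""
    return call_list - register

if __name__ == "__main__":
    campaign = load_numbers("campaign_numbers.csv")   # hypothetical campaign list
    register = load_numbers("dncr_extract.csv")       # hypothetical register extract
    cleared = wash(campaign, register)
    print(f"{len(campaign) - len(cleared)} numbers removed by the wash")
```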

Of course, that doesn’t help if you don’t bother to check the Register in the first place, like Lead My Way. Lead My Way received a record civil penalty of $285,600 today for making marketing calls to numbers on the Register. Lead My Way had actually subscribed to the Register but, for some reason, hadn’t washed its call list against it. This led to numerous complaints to the ACMA, which commenced an investigation.

Lead My Way was calling people to test their interest in its clients’ products or services, then on-selling that information as ‘leads’ – that is, as prospective customers. This kind of business model can also raise significant Privacy Act compliance issues. Do the people being called understand that their personal information is collected and will be sold? How are they notified of the collection (APP 5)? Have they consented to that use? Is that consent informed and valid? Is the sale of their personal information permissible (APP 6)? Are they able to opt out of receiving further marketing calls, and are those opt-outs being respected (APP 7)?

Cutting corners on how you manage and use personal information may save you time and money in the short term. But, as Lead My Way discovered, in the long run it can create massive compliance risk, annoy your end users, and incur the wrath of the regulators. Were the (likely minuscule) savings of ignoring the Register worth a regulator investigation and the comprehensive trashing of Lead My Way’s brand?

Perhaps we should call them and ask.



Nine steps to a successful privacy and cyber security capability uplift

Most organisations today understand the critical importance of cyber security and privacy protection to their business. Many are commencing major uplift programs, or at least considering how they should get started.

These projects inevitably carry high expectations because of what’s at stake. They’re also inherently complex and impact many parts of the organisation. Converting the effort and funding that goes into these projects into success and sustained improvement to business-as-usual practices is rarely straightforward.

Drawing on our collective experience working on significant cyber security and privacy uplift programs across the globe, in a variety of industries, here are what we believe to be the key elements of success.

1. Secure a clear executive mandate

Your uplift program is dealing with critical risks to your organisation. The changes you will seek to drive through these programs will require cooperation across many parts of your organisation, and potentially partners and third parties too. A mandate and sponsorship from your executive is critical.

Think strategically about who else you need on-side, beyond your board and executive committee. Build an influence map and identify potential enablers and detractors, and engage early. Empower your program leadership team and business leadership from affected areas to make timely decisions and deliver their mandate.

2. Adopt a customer and human-centric approach

Uplift programs need to focus on people change as well as changes to processes and technology. Success in this space very often comes down to changing behaviours and ensuring the organisation has sufficient capacity to manage the new technology and process outputs (eg how to deal with incidents).

We therefore suggest that you adopt a customer and human-centric approach. Give serious time, attention and resourcing to areas including communications planning, organisational change management, stakeholder engagement, training and awareness.

3. Know the business value of what you are going to deliver and articulate it

An opaque or misaligned understanding of what a security or privacy program is meant to deliver is often the source of its undoing. It is crucial to ensure scope is clear and aligned to the executive mandate.

Define the value and benefits of your uplift program early, communicate them appropriately and find a way to demonstrate this value over time. Be sure to speak in terms the business understands, not just the new technologies or capabilities you will roll out. For instance, what risks have you mitigated?

You can’t afford to be shy. Ramp up the PR to build recognition about your program and its value among staff, executive and board members. Think about branding.

4. Prioritise the foundational elements

If you’re in an organisation where security and privacy risks have been neglected, but now have a mandate for broad change, you can fall into the trap of trying to do too much at once.

Think of this as being your opportunity to get the groundwork in place for your future vision. Regardless of whether the foundational elements are technology or process related, most with tenure in your organisation know which of them need work. From our experience, those same people will also understand the importance of getting them right and in most cases would be willing to help you fix them.

As a friendly warning, don’t be lured down the path of purchasing expensive solutions without having the right groundwork in place. Most, if not all of these solutions rely on such foundations.

5. Deliver your uplift as a program

For the best results, deliver your uplift as a dedicated change program rather than through BAU.

Your program will of course need to work closely with BAU teams to ensure the sustained success of the program. Have clear and agreed criteria with those teams on the transition to BAU. Monitor BAU teams’ preparation and readiness as part of your program.

6. Introduce an efficient governance and decision making process

Robust and disciplined governance is critical. Involve key stakeholders, implement clear KPIs and methods of measurement, and create an efficient and responsive decision-making process to drive your program.

Governance can be light touch provided the right people are involved and the executive supports them. Ensure you limit the involvement of “passengers” on steering groups who aren’t able to contribute, and make sure representatives from BAU are included.

7. Have a ruthless focus on your strategic priorities

These programs operate in the context of a fast-moving threat and regulatory landscape. Things change rapidly and there will be unforeseen challenges.

It’s important to be brave and assured in holding to your strategic priorities. Avoid temptation to succumb to tactical “quick fixes” that solve short-term problems but bring long-term pain.

8. Build a high-performance culture and mindset for those delivering the program

These programs are hard but can be immensely satisfying and career-defining for those involved. Investing in the positivity, pride and engagement of your delivery team will pay immense dividends.

Seek to foster a high-performance culture, enthusiasm, tolerance and collaboration. Create an environment that is accepting of creativity and experimentation.

9. Be cognisant of the skills shortage and plan accordingly

While your project may be well funded, don’t be complacent about the difficulties of accessing skilled people to achieve the goals of your project. Globally, the security and privacy industries continue to suffer severe shortages of skilled professionals. Build these constraints into your forecasts and expectations, and think laterally about the use of partners.



Musings on the OAIC’s second Notifiable Data Breaches report

On 31 July, the Office of the Australian Information Commissioner (OAIC) released its second Notifiable Data Breaches Quarterly Statistics Report.

This report covers the first full quarter since the Notifiable Data Breaches scheme (NDB scheme) began on 22 February 2018, and the OAIC has clearly put some work into building out the report with detailed stats and breakdowns. Let’s take a look.

Going up, up, up!

This quarter there were 242 notifications overall, noting that multiple notifications relating to the same incident (including the infamous PageUp breach) were counted as a single breach.

The OAIC’s month by month breakdown shows a steady increase in notifications by month, going from 55 notifications in March to 90 notifications in June. Overall, accounting for the partial quarter in the first report, we’ve seen a doubling in the rate of notifications.

However, there are a lot of factors that may be affecting the notification rate. Since February, many companies and agencies have implemented new processes to make sure they comply with the NDB scheme, and this may be driving more notifications. On the other hand, in our experience a lot of companies and agencies are still unsure about their notification obligations and when to notify, so they might be overreporting – notifying breaches that may not meet the ‘likely risk of serious harm’ threshold just to be sure that they are limiting their compliance risk.

At this early stage of the scheme, we think it’s premature to draw any conclusions on rising notification rates. The rate may change significantly as companies and agencies come to grips with their obligations and what does and doesn’t need to be reported.

Teach your children staff well

59% of breaches this quarter were identified as being caused by malicious or criminal attacks. The majority (68%) of those attacks were cyber incidents and, of those, over three quarters related to lost or stolen credentials. This includes attacks based on phishing, malware, and social engineering. Brute force attacks also featured significantly.

We think that the obvious conclusion here is that there’s an opportunity to significantly reduce the attack surface by training your staff to better protect their credentials. For example, teach them how to recognise phishing attempts, run drills, and enforce regular password changes.

There are also some system-level controls that could help, such as multi-factor authentication, enforced complex password requirements, and rate limiting on credential submissions to prevent brute force attacks.
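
To make the last of those controls concrete, here’s a minimal sketch of rate limiting failed login attempts per account. The thresholds and the in-memory store are illustrative assumptions; a real implementation would usually live in the authentication service or a gateway, with persistent shared state.

```python
# Minimal sketch of rate limiting credential submissions to blunt brute force
# attacks. Thresholds and the in-memory store are illustrative assumptions.

import time
from collections import defaultdict, deque

WINDOW_SECONDS = 300   # look at the last 5 minutes of attempts
MAX_FAILURES = 5       # reject further attempts after 5 failures in the window

_failures = defaultdict(deque)  # username -> timestamps of recent failures

def _prune(attempts: deque, now: float) -> None:
    """Drop failures that have aged out of the window."""
    while attempts and attempts[0] < now - WINDOW_SECONDS:
        attempts.popleft()

def record_failure(username: str) -> None:
    """Call this whenever a login attempt fails."""
    now = time.time()
    attempts = _failures[username]
    _prune(attempts, now)
    attempts.append(now)

def is_locked_out(username: str) -> bool:
    """Check this before processing a login attempt."""
    now = time.time()
    attempts = _failures[username]
    _prune(attempts, now)
    return len(attempts) >= MAX_FAILURES
```

Throttling by source IP as well as by account also helps against ‘credential stuffing’, where attackers spread their attempts across many accounts.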

To err is human

Human error accounted for 36% of breaches this quarter. It was the leading cause in the first quarterly report, but again, there are a number of factors that could have caused this shift.

Notably, over half of the breaches caused by human error were scenarios in which personal information was sent to the wrong person – by email, mail, post, messenger pigeon or what have you, but especially email (29 notifications). Again, this suggests a prime opportunity to reduce your risk by training your staff. For example, it appears that at least 7 people this quarter didn’t know (or forgot) how to use the BCC/Blind Carbon Copy function in their email.

People make mistakes. And we know this, so it’s a known risk. We should be designing processes and systems to limit that risk, such as systems to prevent mistakes in addressing.
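
As one small example of designing the mistake out rather than relying on memory, a bulk mail-out script can be written so that recipient addresses only ever travel in the SMTP envelope (the effect of BCC) and never appear in a visible To or Cc header. A deliberately simple sketch, with the mail server and addresses as placeholder assumptions:

```python
# Minimal sketch: send a bulk notice with recipients supplied only to the SMTP
# envelope, so a mis-click can never expose the whole mailing list.
# The server name and addresses are placeholder assumptions.

import smtplib
from email.message import EmailMessage

def send_bulk_notice(sender: str, recipients: list[str], subject: str, body: str) -> None:
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = sender        # the only visible recipient is ourselves
    msg["Subject"] = subject
    msg.set_content(body)
    # Recipients go to the envelope only -- there is no Bcc header to leak
    # and no way to accidentally put them in To or Cc.
    with smtplib.SMTP("mail.example.internal") as smtp:
        smtp.send_message(msg, from_addr=sender, to_addrs=recipients)

# Usage (hypothetical):
# send_bulk_notice("privacy@company.example", ["a@x.example", "b@y.example"],
#                  "Notice", "Details of the incident...")
```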

Doctors and bankers and super, oh my!

Much ink has been spilt over information governance in the health and finance sectors recently, and those sectors accounted for more notifications than any other this quarter (49 and 36 notifications respectively). These are pretty massive industry sectors – healthcare alone accounts for 13.5% of jobs in Australia – so scale is likely affecting the high number of notifications. Anyway, the OAIC has helpfully provided industry level breakdowns for each of them.

In the finance sector (including superannuation providers), human error accounted for 50% of all breaches, and malicious attacks for 47%. Interestingly, in the finance sector almost all the malicious attacks were based on lost or stolen credentials, so we’re back to staff training as a key step to reduce risk.

Bucking the trend, human error accounted for almost two thirds of breaches in the health sector – clearly there’s some work to be done in that sector in terms of processes and staff training. Of the breaches caused by the malicious attacks, 45% were theft of physical documents or devices. This isn’t particularly surprising, as it can be challenging for small medical practices that make up a large part of the sector to provide high levels of physical security. It’s important to note that these notifications only came from private health care providers – public providers are covered under state-based privacy legislation. Also, these statistics don’t cover notifications relating to the My Health Records system – the OAIC reports on those numbers separately in its annual report. So these stats don’t offer a full picture of the Australian health industry as a whole.

All in all, this quarter’s NDB scheme report contains some interesting insights, but as agencies and organisations become more familiar with the scheme (and continue to build their privacy maturity), we may see things shift a bit. Only time will tell.

