Our thoughts on the year ahead

At elevenM, we love shooting the breeze about all things work and play. We recently got together as a team to kick off the new year, share what we’d been up to and the thoughts inspiring us as we kick off 2019. Here’s a summary…

Early in the new year, under a beating sun at the Sydney Cricket Ground, our principal Arjun Ramachandran found himself thinking about cyber risk.

“Indian batsman Cheteshwar Pujara was piling on the runs and I realised – ‘I’m watching a masterclass in managing risk’. He’s not the fanciest or most talented batsman going around, but what Pujara has is total command over his own strengths and weaknesses. He knows when to be aggressive and when to let the ball go. In the face of complex external threats, I was struck by how much confidence comes from knowing your own capabilities and posture.”

A geeky thought to have at the cricket? No doubt. But professional parallels emerge when you least expect them. Particularly after a frantic year in which threats intensified, breaches got bigger, and major new privacy regulations came into force.

Is there privacy in the Home?

Far away from the cricket, our principal Melanie Marks was also having what she describes as a “summer quandary”. Like many people, Melanie had her first extended experience of a virtual assistant (Google Home) over the break.

“These AI assistants are a lot of fun to engage with and offer endless trivia, convenience and integrated home entertainment without having to leave the comfort of the couch,” Melanie says. “However, it’s easy to forget they’re there and it’s hard to understand their collection practices, retention policies and deletion procedures (not to mention how they de-identify data, or the third parties they rely upon).”

Melanie has a challenge for Google in 2019: empower your virtual assistant to answer the question: “Hey Google – how long do you keep my data?” as quickly and clearly as it answers “How do you make an Old Fashioned?”.

Another of our principals and privacy stars Sheila Fitzpatrick has also been pondering the growing tension between new technologies and privacy. Sheila expects emerging technologies like AI and machine learning to keep pushing the boundaries of privacy rights in 2019.

“Many of these technologies have the ‘cool’ factor but do not embrace the fundamental right to privacy,” Sheila says. “They believe the more data they have to work with, the more they can expand the capabilities of their products without considering the negative impact on privacy rights.”

The consumer issue of our time

We expect to see the continued elevation of privacy as a public issue in 2019. Watch for Australia’s consumer watchdog, the Australian Competition and Consumer Commission, to get more involved in privacy, Melanie says. The ACCC foreshadowed this involvement in December via its preliminary report into digital platforms.

Business will also latch onto the idea of privacy as a core consumer issue, says our Head of Product Development Alistair Macleod. Some are already using it as a competitive differentiator, Alistair notes, pointing to manufacturers promoting privacy-enhancing features in new products and Apple’s hard-to-miss pro-privacy billboard at the CES conference just this week.

We’ll also see further international expansion of privacy laws in 2019, Sheila says, particularly in Asia Pacific and Canada, where some requirements (such as those around data localisation) will even exceed provisions under the GDPR, widely considered a high watermark for privacy when introduced last May.

Cyber security regulations have their turn

But don’t forget cyber security regulation. Our principal Alan Ligertwood expects the introduction of the Australian Prudential Regulation Authority’s new information security standard CPS 234 in July 2019 to have a significant impact.

CPS 234 applies to financial services companies and their suppliers. Alan predicts the standard’s shift to a “trust but verify” approach, in which policy and control frameworks are actually tested, could herald a broader move by regulators towards more substantive oversight of regulatory and policy compliance.

There’s also a federal election in 2019. We’d be naïve not to expect jobs and national security to dominate the campaign, but the policy focus given to critical “new economy” issues like cyber security and privacy in the lead-up to the polls will be worth watching. In recent years cyber security as a portfolio has been shuffled around and dropped like a hot potato at ministerial level.

Will the Government that forms after the election – of whichever colour – show it more love and attention?

New age digital risks

At the very least, let’s hope cyber security agencies and services keep running. Ever dedicated, over the break Alan paid a visit to the National Institute of Standards and Technology’s website – the US standards body that creates the respected Cybersecurity Framework – only to find it unavailable due to the US government shutdown.

“It didn’t quite ruin my holiday, but it did get me thinking about unintended consequences and third party risk. A squabble over border wall funding has resulted in a global cyber security resource being taken offline indefinitely.”

It points to a bigger issue. Third parties and supply chains, and poor governance over them, will again be a major contributor to security and privacy risk this year, reckons Principal Matt Smith.

“The problem is proving too hard for people to manage correctly. Even companies with budgets which extend to managing supplier risk are often not able to get it right – too many suppliers and not enough money or capacity to perform adequate assurance.”

If the growing use of third parties demands that businesses re-think security, our Senior Project Manager Mike Wood sees the same trend in cloud adoption.

“Cloud is the de facto way of running technology for most businesses. Many are still transitioning but retain traditional security thinking. A cloud transition must come with a fully thought-through security mindset.”

Mike’s expecting to see even stronger uptake of controls like Cloud Access Security Brokers in 2019.

But is this the silver bullet?

We wonder if growing interest in cyber risk insurance in 2019 could be the catalyst for uplifted controls and governance across the economy. After all, organisations will need to have the right controls and processes in place in order to qualify for insurance in line with underwriting requirements.

But questions linger over the maturity of these underwriting methodologies, Alan notes.

“Organisations themselves find it extremely difficult to quantify and adequately mitigate cyber threats, yet insurance companies sell policies to hedge against such an incident.”

The likely lesson here is for organisations not to treat cyber insurance as a silver bullet. Instead, do the hard yards and prioritise a risk-based approach built on strong executive sponsorship, effective governance, and actively engaging your people in the journey.

It’s all about trust

If there was a common theme in our team’s readings and reflections after the break, it was the intricacies of trust in the digital age.

When the waves stopped breaking on Manly beach, Principal Peter Quigley spent time following the work of Renee DiResta, who has published insightful research into the use of disinformation and malign narratives in social media. There’s growing awareness of how digital platforms are being used to sow distrust in society. In a similar vein, Arjun has been studying the work of Peter Singer, whose research into how social media is being weaponised could have insights for organisations wanting to use social media to enhance trust, particularly in the wake of a breach.

Alistair notes how some technology companies have begun to prioritise digital wellbeing. For example, new features in Android and iOS that help users manage their screen time – and thus minimise harm – reflect the potential for a more trusting, collaborative digital ecosystem.

At the end of the day, much of our work as a team goes towards helping organisations mitigate digital risk in order to increase digital trust – among customers, staff and partners. The challenges are plentiful but exciting, and we look forward to working on them with many of you in 2019.

End of year wrap

The year started with a meltdown. Literally.

New Year’s Eve hangovers had barely cleared when security researchers announced they had discovered security flaws that would impact “virtually every user of a personal computer”. “Happy new year” to you too. Dubbed “Meltdown” and “Spectre”, the flaws in popular computer processors would allow hackers to access sensitive information from memory – certainly no small thing. Chipmakers urgently released updates. Users were urged to patch. Fortunately, the sky didn’t fall in.

If all this was meant to jolt us into taking notice of data security and privacy in 2018 … well, that seemed unnecessary. With formidable new data protection regulations coming into force, many organisations were already stepping into this year with a much sharper focus on digital risk.

The first of these new regulatory regimes took effect in February, when Australia finally introduced mandatory data breach reporting. Under the Notifiable Data Breaches (NDB) scheme, overseen by the Office of the Australian Information Commissioner, applicable organisations must now disclose any breaches of personal information likely to result in serious harm.

In May, the world also welcomed the EU’s General Data Protection Regulation (GDPR). Kind of hard to miss, with an onslaught of updated privacy policies flooding user inboxes from companies keen to show compliance.

The promise of the GDPR is to give consumers greater consent and control over their data, and to place a greater emphasis on transparency. Its extra-territorial nature (the GDPR applies to any organisation servicing customers based in Europe) meant companies all around the world worked fast to comply – updating privacy policies, implementing privacy by design and creating data breach response plans. A nice reward for these proactive companies was evidence that the GDPR is emerging as a template for new privacy regulations around the world: GDPR compliance gets you ahead of the game.

With these regimes in place, anticipation built around who would be first to test them out. For the local NDB scheme, the honour fell to PageUp. In May, the Australian HR service company detected an unknown attacker had gained access to job applicants’ personal details and usernames and passwords of PageUp employees.

It wasn’t the first breach reported under NDB but was arguably the first big one – not least because of who else it dragged into the fray. It was a veritable who’s who of big Aussie brands – Commonwealth Bank, Australia Post, Coles, Telstra and Jetstar, to name a few. For these PageUp clients, their own data had been caught up in a breach of a service provider, shining a bright light on what could be the security lesson of 2018: manage your supplier risks.

By July we were all bouncing off the walls. Commencement of the My Health Record (MHR) three-month opt-out period heralded an almighty nationwide brouhaha. The scheme’s privacy provisions came under heavy fire – most particularly the fact that the scheme was opt-out by default, loose provisions around law enforcement access to health records, and a lack of faith in how well-versed those accessing the records were in good privacy and security practices. Things unravelled so much that the Prime Minister had to step in, momentarily taking a break from more important national duties such as fighting those coming for his job.

Amendments to the MHR legislation were eventually passed (addressing some, but not all of these issues), but not before public trust in the project was severely tarnished. MHR stands as a stark lesson for any organisation delivering major projects and transformations – proactively managing the privacy and security risks is critical to success.

If not enough attention was given to data concerns in the design of MHR, security considerations thoroughly dominated the conversation about another national-level digital project – the build out of Australia’s 5G networks. After months of speculation, the Australian government in August banned Chinese telecommunications company Huawei from taking part in the 5G rollout, citing national security concerns. Despite multiple assurances from the company about its independence from the Chinese government and offers of greater oversight, Australia still said ‘no way’ to Huawei.

China responded frostily. Some now fear we’re in the early stages of a tech cold war in which retaliatory bans and invasive security provisions will be levelled at western businesses by China (where local cyber security laws should already be a concern for businesses with operations in China).

Putting aside the geopolitical ramifications, the sobering reminder for any business from the Huawei ban is the heightened concern about supply chain risks. With supply chain attacks on the rise, managing vendor and third-party security risks requires the same energy as attending to risks in your own infrastructure.

Ask Facebook. A lax attitude towards its third-party partners brought the social media giant intense pain in 2018. The Cambridge Analytica scandal proved to be one of the most egregious misuses of data and abuses of user trust in recent memory, with the data of almost 90 million Facebook users harvested by a data mining company to influence elections. The global public reacted furiously. Many users would delete their Facebook accounts in anger. Schadenfreude enthusiasts had much to feast on when Facebook founder and CEO Mark Zuckerberg testified uncomfortably in front of the US Senate.

The social network would find itself under the pump on various privacy and security issues throughout 2018, including the millions of fake accounts on its platform, the high profile departure of security chief Alex Stamos and news of further data breaches.

But when it came to brands battling breaches, Facebook hardly went it alone in 2018. In the first full reporting quarter after the commencement of the NDB scheme, the OAIC received 242 data breach notifications, followed by 245 notifications for the subsequent quarter.

The scale of global data breaches has been eye-watering. Breaches involving Marriott International, Exactis, Aadhaar and Quora all eclipsed 100 million affected customers.

With breaches on the rise, it becomes ever more important that businesses be well prepared to respond. The maxim that organisations will increasingly be judged not on the fact they had a breach, but on how they respond, grew strong legs this year.

But we needn’t succumb to defeatism. Passionate security and privacy communities continue to try to reduce the likelihood or impact of breaches and other cyber incidents. Technologies and solutions useful in mitigating common threats gained traction. For instance, multi-factor authentication had more moments in the sun this year, not least because we became more attuned to the flimsiness of relying on passwords alone (thanks Ye!). Security solutions supporting other key digital trends also continue to gain favour – tools like Cloud Access Security Brokers enjoyed strong momentum this year as businesses look to manage the risks of moving towards cloud.

Even finger-pointing was deployed in the fight against hackers. This year, the Australian government and its allies began to publicly attribute a number of major cyber campaigns to state-sponsored actors. A gentle step towards deterrence, the attributions signalled a more overt and more public pro-security posture from the Government. Regrettably, some of this good work may have been undone late in the year with the passage of an “encryption bill”, seen by many as weakening the security of the overall digital ecosystem and damaging to local technology companies.

In many ways, in 2018 we were given the chance to step into a more mature conversation about digital risk and the challenges of data protection, privacy and cyber security. Sensationalist FUD in earlier years about cyber-attacks or crippling GDPR compliance largely gave way to a more pragmatic acceptance of the likelihood of breaches, high public expectations and the need to be well prepared to respond and protect customers.

At a strategic level, a more mature and business-aligned approach is also evident. Both the Australian and US governments introduced initiatives that emphasise the value of a risk-based approach to cyber security, which is also taking hold in the private sector. The discipline of cyber risk management is helping security executives better understand their security posture and have more engaging conversations with their boards.

All this progress, and we still have the grand promise that AI and blockchain will one day solve all our problems.  Maybe in 2019 ….

Till then, we wish you a happy festive season and a great new year.

From the team at elevenM.

APRA gets $60m in new funding: CPS 234 just got very real

We have previously talked about APRA’s new information security regulation and how global fines will influence the enforcement of this new regulation.

Today we saw a clear statement of intent from the government in the form of $58.7 million of new funding for APRA to focus on the identification of new and emerging risks such as cyber and fintech.

As previously stated, if you are in line of sight for CPS 234 either as a regulated entity or a supplier to one, we advise you to have a clear plan in place on how you will meet your obligations. No one wants to be the Tesco of Australia.

If you would like to talk to someone from elevenM about getting ready for CPS 234, please drop us a note at hello@elevenM.com.au or call us on 1300 003 922.


If you enjoyed this and would like to be notified of future elevenM blog posts, please subscribe below.

The journey toward trust – Part 1: Understanding trust

Join us for a three-part series that explores the notion of trust in today’s digital economy, and how organisations can practically build it. We also focus on the role of regulatory compliance and reputation management in building trust, and outline best practice approaches.

Be it users stepping away from the world’s biggest social media platform after repeated privacy scandals, a major airline’s share price plummeting after a large data breach, or Australia’s largest bank committing to a stronger focus on privacy and security as it rebuilds its image – events in recent weeks provide a strong reminder of the fragility and critical importance of trust to businesses seeking success in the digital economy.

Bodies as illustrious as the World Economic Forum and OECD have written at length about the pivotal role of trust as a driving factor for success today.

But what does trust actually mean in the context of your organisation? And how do you practically go about building it?

At elevenM, we spend considerable time discussing and researching these questions from the perspectives of our skills and experiences across privacy, cyber security, risk, strategy and communications.

A good starting point for any organisation wanting to make trust a competitive differentiator is to gain a deeper understanding of what trust actually means – and specifically, what it means for that organisation.

Trust is a layered concept, and different things are required in different contexts to build trust.

Some basic tenets of trust become obvious when we look to popular dictionaries. Ideas like safety, reliability, truth, competence and consistency stand out as fundamental principles.

Another way to learn what trust means in a practical sense is to look at why brands are trusted. For instance, the most recent Roy Morgan survey listed supermarket ALDI as the most trusted brand in Australia. Roy Morgan explains this is built on ALDI’s reputation for reliability and meeting customer needs.

Importantly, the dictionary definitions also emphasise an ethical aspect – trust is built by doing good and protecting customers from harm.

Digging a little deeper, we look to the work of trust expert and business lecturer Rachel Botsman, who describes trust as “a confident relationship with the unknown”.  This moves us into the digital space in which organisations operate today, and towards a more nuanced understanding.

We can infer that consumers want new digital experiences, and an important part of building trust is for organisations to innovate and help customers step into the novel and unknown, but with safety and confidence.

So, how do we implement these ideas about trust in a practical sense?

With these definitions in mind, organisations should ask themselves some practical and instructive questions that illuminate whether they are building trust.

  • Do customers feel their data is safe with you?
  • Can customers see that you seek to protect them from harm?
  • Are you accurate and transparent in your representations?
  • Do your behaviours, statements, products and services convey a sense of competence and consistency?
  • Do you meet expectations of your customers (and not just clear the bar set by regulators)?
  • Are you innovative and helping customers towards new experiences?

In part two of this series, we will explore how regulatory compliance can be used to build trust.


If you enjoyed this and would like to be notified of future elevenM blog posts, please subscribe below.

What does the record FCA cyber fine mean for Australia?

First, a bit of context: the Financial Conduct Authority (FCA) is the conduct and prudential regulator for financial services in the UK. It is, in part, an equivalent to the Australian Prudential Regulation Authority (APRA).

Record cyber related fine

This week the FCA handed down a record cyber related fine to the banking arm of the UK’s largest supermarket chain Tesco for failing to protect account holders from a “foreseeable” cyber attack two years ago. The fine totalled £23.4 million but due to an agreed early stage discount, the fine was reduced by 30% to £16.4 million.
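For the curious, the early-settlement arithmetic is straightforward. A minimal sketch, using the publicly reported figures above (the rounding to one decimal place is our assumption):

```python
# FCA fine for Tesco Bank, before the agreed early-stage settlement discount.
original_fine_gbp = 23.4e6

# The early-stage settlement agreement reduced the fine by 30% in this case.
discount_rate = 0.30

final_fine_gbp = original_fine_gbp * (1 - discount_rate)
print(f"Final fine: £{final_fine_gbp / 1e6:.1f} million")  # → £16.4 million
```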

Cyber attack?

It could be argued that this was not a cyber attack in that it was not a breach of Tesco Bank’s network or software but rather a new twist on good old card fraud. But for clarity, the FCA defined the attack which led to this fine as: “a mass algorithmic fraud attack which affected Tesco Bank’s personal current account and debit card customers from 5 to 8 November 2016.”

What cyber rules did Tesco break?

Interestingly, the FCA does not have any cyber-specific regulation. The FCA exercised its powers through provisions published in its Handbook. The Handbook has Principles, which are general statements of firms’ fundamental obligations. Tesco Bank’s fine was therefore issued against the comfortably generic Principle 2: “A firm must conduct its business with due skill, care and diligence”.

What does this mean for Australian financial services?

APRA, you may recall from our previous blog, has issued a draft information security regulation, CPS 234. This new regulation sets out clear rules on how regulated Australian institutions should manage their cyber risk.

If we use the Tesco Bank incident as an example, here is how APRA could use CPS 234:

Information security capability: “An APRA-regulated entity must actively maintain its information security capability with respect to changes in vulnerabilities and threats, including those resulting from changes to information assets or its business environment”. –  Visa provided Tesco Bank with threat intelligence as Visa had noted this threat occurring in Brazil and the US.  Whilst Tesco Bank actioned this intelligence against its credit cards, it failed to do so against debit cards which netted the threat actors £2.26 million.

Incident management: “An APRA-regulated entity must have robust mechanisms in place to detect and respond to information security incidents in a timely manner. An APRA-regulated entity must maintain plans to respond to information security incidents that the entity considers could plausibly occur (information security response plans)”.  – The following incident management failings were noted by the FCA:

  • Tesco Bank’s Financial Crime Operations team failed to follow written procedures;
  • The Fraud Strategy Team drafted a rule to block the fraudulent transactions, but coded the rule incorrectly;
  • The Fraud Strategy Team failed to monitor the rule’s operation and did not discover until several hours later that the rule was not working; and
  • The responsible managers should have invoked crisis management procedures earlier.

Do we think APRA will be handing out fines this size?

Short answer: yes. Following the Royal Commission into Misconduct in the Banking, Superannuation and Financial Services Industry, there is very little love for the financial services industry in Australia. Our sense is that politicians who want to remain politicians will need to be seen to be tough on financial services, and enforcement authorities like APRA will therefore most likely see an increase in their budgets.

Unfortunately for those of you in cyber and risk teams in financial services, it is a bit of a perfect storm. The regulator has a new set of rules to enforce, the money to conduct investigations and a precedent from within the Commonwealth.

What about the suppliers?

Something that not many are talking about but really should be, is the supplier landscape. Like it or not, the banks in Australia are some of the biggest businesses in the country. They use a lot of suppliers to deliver critical services including cyber security. Under the proposed APRA standard:

Implementation of controls: “Where information assets are managed by a related party or third party, an APRA-regulated entity must evaluate the design and operating effectiveness of that party’s information security controls”.

Banks are now clearly accountable for the effectiveness of the information security controls operated by their suppliers as they relate to a bank’s defences. If you are a supplier (major or otherwise) to the banks, given this new level of oversight from their regulator, we advise you to get your house in order because it is likely that your door will be knocked upon soon.


If you enjoyed this and would like to be notified of future elevenM blog posts, please subscribe below.

Facebook and Cambridge Analytica: Would the GDPR have helped?

It’s a modern-day truism that when you use a “free” online service, you’re still paying – not with your money, but with your personal information. This is simply the reality for many of the services we’ve come to rely on in our daily lives, and for most it’s an acceptable (if sometimes creepy) bargain.

But what if you’re paying for online services not just with your own personal information, but with that of your friends and family? And what if the information you’re handing over is being shared with others who might use it for purposes you didn’t consider when you signed up – maybe for research purposes, maybe to advertise to you, or maybe even to influence the way you vote?

Last week it emerged that an organisation called Cambridge Analytica may have used personal information scraped from Facebook to carry out targeted political advertising. The information was obtained when Facebook users accessed a psychometric profiling app called thisisyourdigitallife – but the data that was collected wasn’t just about app users, it was also about their Facebook friends (more on that below).

It’s what we’re now seeing from consumers that’s interesting.  People are rightfully asking for an explanation. Whilst we seem to have been asleep at the wheel over the last few years, as data empires around the world have pushed the boundaries, the current Facebook debacle is leading us to ask questions about the value of these so-called “free” services, and where the lines should be drawn.  The next few weeks will be telling, in terms of whether this really is the “tipping point” as many media commentators are calling it, or just another blip, soon forgotten.

In any case, with only a few months until the EU General Data Protection Regulation (GDPR) comes into force, this blog post asks: if the GDPR were operational now, would consumers be better protected?

First, some background

There’s plenty of news coverage out there covering the details, so we’ll just provide a quick summary of what happened.

A UK-based firm called Global Science Research (GSR) published thisisyourdigitallife and used the app to gather data about its users. Because GSR claimed this data was to be used for academic purposes, Facebook policies at the time allowed it to also collect limited information about friends of app users. All up, this meant that GSR collected the personal information of more than 50 million people – many more than the 270,000 people who used the app.

GSR then used the personal information to create psychometric profiles of the included individuals, apparently without their informed consent. These profiles were then allegedly passed on to Cambridge Analytica (possibly in breach of Facebook’s rules), which used the data to target, market to – and perhaps manipulate – individuals.

Was this a breach?

There’s been some debate over whether this incident can be fairly labelled a “breach”. Based on what we know, it certainly doesn’t appear that any personal information has been lost or disclosed by means of an accident or a security vulnerability, which is something many consider a necessary element of a “data breach”.

Facebook’s initial response was to hit back at claims it was a “data breach”, saying users willingly handed over their information, and the information of their friends. “Everyone involved gave their consent. People knowingly provided their information, no systems were infiltrated, and no passwords or sensitive pieces of information were stolen or hacked” it allegedly said.

Facebook has since hired a digital forensics firm to audit Cambridge Analytica and has stated that if the data still exists, it would be a “grave violation of Facebook’s policies and an unacceptable violation of trust and the commitments these groups made.”

In more recent days, Mark Zuckerberg has made something of a concession, apologising for the “major breach of trust”. We love this line from the man who told us that privacy is dead.

GDPR – would it have helped?

We at elevenM are supporters of the GDPR, arguably the most extensive and far-reaching privacy reform of the last 25 years. The GDPR raises the benchmark for businesses and government and brings us closer to one global framework for privacy. But would the GDPR have prevented this situation from occurring? Would the individuals whose data has been caught up by Cambridge Analytica be in a better position if the GDPR applied?

Let’s imagine that GDPR is in force and it applies to the acts of all the parties in this case, and that Facebook still allowed apps to access information about friends of users (which it no longer does). Here is the lowdown:

  1. Facebook would have to inform its users in “clear and plain” language that their personal information (aka personal data under GDPR) could (among other things) be shared with third party apps used by their friends.
  2. Because the personal data may have been used to reveal political opinions, users would likely also need to provide consent. The notification and consent would have to be written in “clear and plain” language, and consent would have to be “freely given” via a “clear affirmative act” – implied consent or pre-ticked boxes would not be acceptable.
  3. The same requirements relating to notification and consent would apply to GSR and Cambridge Analytica when they collected and processed the data.
  4. Individuals would also have the right to withdraw their consent at any time, and to request that their personal data be erased (under the new “right to be forgotten”). If GSR or Cambridge Analytica were unable to find another lawful justification for collecting and processing the data (and it’s difficult to imagine what that justification could be), they would be required to comply with those requests.
  5. If Facebook, GSR or Cambridge Analytica were found to be in breach of the above requirements (although again, this is purely hypothetical because GDPR is not in force at the time of writing), they could each face fines up to 20 million EUR, or 4% of worldwide annual turnover (revenue), whichever is higher. Those figures represent the maximum penalty and would only be applied in the most extreme cases – but they make clear that GDPR is no toothless tiger.

So, there it is.  We think that GDPR would have made it far more likely that EU residents were made aware of what was happening with their personal data and would have given them effective control over it.

Some lessons

With so many recent data incidents resulting from outsourcing and supply chain arrangements, regulators around the world are focussing increasingly on supplier risk. Just last week here in Australia, we saw the financial services regulator APRA’s new cyber security regulation littered with references to supplier risk. The Cambridge Analytica situation is another reminder that we are only as strong as our weakest link. The reputations of our businesses and the government departments for whom we work will often hinge on the control environments of third parties. Therefore, organisations need to clearly assess third party risks and take commensurate steps to assure themselves that the risks and controls are reasonable and appropriate.

As for individuals – regardless of what regulatory action is taken in Australia and abroad, there are simple steps that we all can and should be taking.  This episode should prompt people to think again about the types of personal information they share online, and who they share it with. Reviewing your Facebook apps is a good start – you might be surprised by some of the apps you’ve granted access to, and how many of them you’d totally forgotten about (Candy Crush was so 2015).

What’s next

We expect this issue to receive more attention in the coming weeks and months.

Regulators around the world (including the Australian Privacy Commissioner, the UK Information Commissioner (ICO), the Canadian Privacy Commissioner and the EU Parliament) are looking into these issues now. Just over the weekend we saw images of ICO personnel allegedly raiding the premises of Cambridge Analytica, Law & Order style.

The Australian Competition and Consumer Commission (ACCC) also has been preparing to conduct a “Digital Platforms Inquiry” which, among other things, may consider “the extent to which consumers are aware of the amount of data they provide to digital platforms, the value of the data provided, and how that data is used…”

Meanwhile, we await the consumer backlash.  Consumers will likely expect increasingly higher standards from the organisations they share their data with and will seek out those organisations that are transparent and trustworthy, and which can demonstrate good governance over privacy and data protection practices.   Will you be one of them?


If you enjoyed this and would like to be notified of future elevenM blog posts, please subscribe below.