14 September 2023

Navigating the road ahead: privacy and security risks of self-driving cars

Jayden Hunter
Graduate Consultant
Daniel Duncan
Graduate Consultant

In this two-part series, elevenM’s Jayden Hunter and Daniel Duncan look at privacy, security and liability in the application of artificial intelligence. In this first part, they unpack some of the many privacy and security risks that self-driving cars bring, and raise some of the questions that need to be answered before self-driving becomes the new normal.

Every year it seems like a new self-driving car hits the market. Powered by the latest and greatest of machine learning, sensor fusion, localisation, and pathfinding algorithms, these cars are poised to become the standard of the automotive industry. And yet, hidden behind each unassuming Tesla, Mercedes-Benz, or Volkswagen exterior, is a potential privacy and cyber security disaster on wheels.

Privacy risk

Touted as the greatest revolution in automotive safety since Nash introduced the seatbelt in 1949, self-driving cars harness an array of cameras, sensors, and positioning systems that makes them up to eight times safer than human drivers on the road. But unlike the seatbelt, full self-driving cars have access to a cocktail of personal information, posing an overlooked threat to owners and passengers of the future.

To navigate, a self-driving car needs to know exactly where it is, so it uses localisation to pinpoint its position with centimetre precision. Unlike on your smartphone, you can't disable location services on a self-driving car, which means the car builds an accurate map of your travels, making recurring locations such as your home, workplace, or children's school conveniently identifiable. In the hands of malicious actors, this record of travel patterns, routines, and potentially sensitive destinations such as religious institutions can be weaponised.
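To see how little effort this takes, here is a minimal sketch of how a location history could be mined for routine stops. The GPS trace, coordinates, and grid-cell approach are all illustrative assumptions, not any manufacturer's actual system: simply bucketing logged positions into coarse grid cells and counting repeat visits is enough to surface likely home and work locations.

```python
from collections import Counter

# Hypothetical GPS trace: (latitude, longitude) samples logged by a vehicle.
# Values are illustrative only.
trace = [
    (-33.8688, 151.2093),  # weekday mornings
    (-33.8689, 151.2094),
    (-33.7510, 151.2320),  # weekday evenings
    (-33.7511, 151.2321),
    (-33.7510, 151.2319),
    (-33.8688, 151.2092),
]

def frequent_places(points, precision=3):
    """Bucket points into ~100 m grid cells and count visits per cell."""
    cells = Counter(
        (round(lat, precision), round(lon, precision)) for lat, lon in points
    )
    # Cells visited more than once are likely routine stops: home, work, school.
    return [cell for cell, count in cells.most_common() if count > 1]

print(frequent_places(trace))  # two recurring locations emerge immediately
```

Even this toy heuristic recovers two recurring locations from six data points; a real vehicle logs positions continuously, with timestamps, making the inference far richer.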

Self-driving cars also carry a plethora of cameras and microphones, constantly recording and uploading footage to manufacturers' servers. How did Tesla, which describes its cameras as "designed from the ground up to protect your privacy", approach this between 2019 and 2022? Its employees viewed and shared videos and images recorded by customers' cars, potentially in breach of the EU's GDPR.

Security risk

Like all technology, the security of a car depends on its weakest link. For self-driving cars, this is especially critical, as a software bug could have life-or-death consequences. While traditional cars rely on physical components for security, self-driving cars face the additional challenge of digital vulnerabilities due to their networked decision-making, cameras, and steering systems.

Earlier this year, Tesla hosted a hackathon, inviting participants to test its self-driving software. Perhaps unsurprisingly, a number of vulnerabilities surfaced quite quickly. If a self-driving car can be remotely hacked and controlled by a malicious user, the attacker gains not only a trove of personal information but potentially physical control of the vehicle. Such an exploit could lead to dangerous situations, with a hacked car suddenly swerving, braking, or colliding with other vehicles.

The impact could extend beyond a single car if multiple self-driving cars share the same vulnerability. And, as we know from the increasing prevalence of vulnerabilities being exploited, it’s not an ‘if’ but a ‘when’ when it comes to large-scale vulnerabilities. There’s no easy way to combat this cyber-security risk – designing flawless code is practically impossible. As a result, manufacturers must prioritise a rapid response to such threats, focusing on quick bug patches to minimise potential damage.

This raises another potential risk – that of companies seeking to protect their reputation and/or profits over taking action when flaws are raised. Unfortunately, the history of companies seeking to silence ‘grey hats’ and security researchers is all too real.

The future

Ultimately, we must weigh the benefits of self-driving cars against the inherent risks that they pose to people both on and off the road, from “drivers” to passengers to pedestrians. These risks are not the future’s problems, but the problems of today. We only have to look at the bumpy rollout of self-driving cars in San Francisco, which has resulted in fleets being slashed in response to collisions and traffic jams, to see these problems starting to unfold.

Moving forward, we need to consider:

  • The technology behind self-driving cars is not yet fully mature. While their crash rates are far lower than the average human driver's, these cars still require manual override in unusual situations. How will this change, and what safeguards will be needed along the way?
  • Past incidents involving companies mishandling user data and privacy raise doubts about safety and confidentiality. Are we, as consumers, comfortable assuming that auto manufacturers will always have our best interests at heart? If not, are we comfortable trusting regulatory oversight to protect us in this developing field?
  • The significant cyber-security risks associated with self-driving cars have serious implications for the safety of passengers and other road users. How will we mitigate this risk, and will it have to be a coordinated approach in order to achieve real results?

Manufacturers, governments, and most importantly, the general public all have a role to play in determining the level of risk that we accept on our roads. In our view, innovation must be balanced against a clear-eyed analysis of the physical, cyber and privacy risks of self-driving cars, with "safety first" being the overriding principle.

As we peer into the future, uncertainty looms over the destination of self-driving cars. We will need to recognise the current risks associated with human drivers, weigh them against the potential new risks associated with self-driving cars, and encourage public debate and transparent decision-making. We must also consider developing proper laws governing the development and use of self-driving cars, and create plans for mitigating the risks they introduce.

Stay tuned for the second half of this blog series, where we will explore some of these questions we’ve posed, and ask a few more about the liability of AI in high-risk contexts.

Contact us

If you’re interested in learning more about privacy and security risk, contact us at hello@elevenM.com.au or on 1300 003 922.