elevenM’s Cassie Findlay brings a first-hand account of how privacy considerations are playing a role in shaping COVID-19 outcomes in parts of the US.
It’s no secret that the city of San Francisco and the surrounding counties that make up the Bay Area are home to some of the starkest inequities in the world.
Having just returned home after four and a half years living and working there, I can confirm that the evidence of staggering wealth side by side with extreme poverty and homelessness is everywhere, and it is shocking. Encampments dot the city streets, and the people living in them lack basic sanitation and medical services. Solutions are often temporary, deployed only in response to residents’ complaints.
Bringing a pandemic response into this mix was never going to be easy. The local and state governments’ response to the COVID crisis has, by overall US standards, not been too bad, but it has not served the region’s most vulnerable people nearly as well.
A case in point is the decision late last year by the cities of Oakland and San Francisco to axe a testing program offered by the Google affiliate Verily. Introduced in March, the platform screened people for symptoms, booked appointments and reported test results. Unfortunately, from a privacy perspective, the program’s design added friction to the uptake of critical services in a pandemic.
In a letter to the California Secretary of Health, the City of Oakland’s Racial Disparities Task Force raised concerns about the collection of personal data on the platform amidst a crisis of trust amongst Black and Latinx communities in how their personal information might be used or shared by governments and corporations. Participants were required to sign an authorisation form stating that their information could be shared with multiple third parties involved in the testing program, including unnamed contractors and state and federal health authorities.
As explained by the Electronic Frontier Foundation’s Lee Tien to local public radio station KQED: “While the form tells you that Verily may share data with ‘entities that assist with the testing program,’ it doesn’t say who those entities are. If one of those unnamed and unknown entities violates your privacy by misusing your data, you have no way to know and no way to hold them accountable.”
Given the need for better and more accessible testing for people experiencing homelessness, and the known severity of COVID’s impact on Black and Latinx communities, obstacles like this to testing uptake are concerning. Other testing services in Oakland and San Francisco have fortunately adopted approaches based on more direct engagement and trust-building in these communities, rather than defaulting to an app-based solution with the trust and privacy concerns that entails.
This case shows just how much trust issues around the use of personal information can affect critical services to vulnerable communities, and it has valuable lessons for those of us working on the delivery of public services with technology.
My key takeaways are:
- Consumers understand and take seriously the trade-offs involved in exchanging personal information for services, discounts and other benefits.
- We are moving beyond approaches to data collection that treat consumers as a homogenous group in terms of their willingness to share, but we can safely assume that unknown secondary purposes for their data will always be regarded with suspicion.
- Success will increasingly depend on having a more nuanced picture of your ‘customers’, including their trust in your organisation or sector, whether it is a commercial enterprise or a public health service.
- Building a data governance strategy that can track and maintain a picture of your business, the actors within it (including end users and customers), and evolving requirements, including less tangible ones like societal attitudes, is a strong foundation for privacy policy and practice that respects diversity and can evolve as the landscape changes around you.
Photo credit: Gerson Repreza on Unsplash