Canada’s COVID Alert app is a case of tech-driven bad policy design
by Blayne Haggart, Associate Professor of Political Science, Brock University
One expert says the app was released without anyone having created a framework for evaluating its success
The July 31 release of Canada’s COVID Alert app was greeted with almost universal praise. Privacy experts applauded its strong privacy protections, echoing the official app website’s extensive detailing of how “your privacy is protected,” including a link to an entire other page that explains “how COVID Alert protects your privacy,” which in turn links to Health Canada’s privacy assessment of the app.
The focus on privacy was so overwhelming that you could be forgiven for thinking the app’s entire purpose was to protect people’s privacy rather than to save lives.
Despite this near-universal praise, when you focus on the actual purpose of the app, rather than on its elegant design, red flags start popping up everywhere.
The app's design and rollout both suggest that its medical effectiveness was a secondary consideration to its technical design.
The website contains no information about how effective the app might be in reducing COVID-19 transmission. It does say that “COVID Alert is just one part of the public health effort to limit the spread of COVID-19,” but provides no details about what that means.
A poorly designed policy
More troubling, the app was released before anyone had created a framework for evaluating its success. Health Canada is only now deciding how to evaluate it, meaning the app launched without a clear idea of what it was supposed to accomplish, and now anything it does can be treated as a success.
The Logic, meanwhile, reports that Liberal Digital Government Minister Joyce Murray says that the government doesn’t have “a particular threshold below which it considers the app to be ineffective.”
If COVID Alert had been presented as a regular government mitigation policy, it would have been given a much rougher ride. Consider:
Rather than targeting the groups most vulnerable to COVID transmission, it focuses on those least likely to spread the disease, namely higher-income Canadians who can afford expensive smartphones and data plans. Meanwhile, it’s low-income communities that are most at risk of COVID-19 transmission.
The program follows best practices in its setup (privacy), but it was designed without any criteria for judging its success or failure in helping to flatten the curve, and without embedding it in an appropriate regulatory framework to ensure that apps like COVID Alert aren't misused by businesses. And the government is spending $10 million to advertise it.
Meanwhile, the World Health Organization reports that “the effectiveness of digital proximity tracking to assist contact tracing remains unknown,” with many variables affecting its potential utility, none of which have been publicly accounted for by the government.
Would you fund an unproven policy that likely wouldn’t reach the people most at risk of getting the disease, and with no way to know if it is working?
Focusing on the tool, not the problem
Instead, COVID Alert's presentation and reception benefited from what tech critic Evgeny Morozov calls technological solutionism: the all-too-prevalent tendency to assume that technology can solve all our problems.
When we make this leap, as the government and all too many others did here, we begin to focus on the technology’s design — on how effectively it protects privacy — rather than its effect on the problem it was supposedly designed to address.
Instead of starting from the more open question, “What is the best way to fight the pandemic?” technological solutionism asks the leading question, “How can we use apps to do this?”
Technological solutionism is a terrible way to set policy. It leads policy-makers to ignore other, potentially more effective alternatives. It downplays problems caused by the app’s design. In this case, the fact that people most at risk likely won’t be able to access the tool is treated as a minor bug, rather than as a policy-impairing flaw: if it can’t actually reach the people spreading the disease, it’s practically useless.
Technology is policy
Make no mistake: apps like COVID Alert are government programs, not neutral tools. They deliver services and benefits in ways that can create winners and losers. Their creation involved allocating scarce resources and choosing policy pathways that necessarily meant discarding or delaying other options.
In this case, it shifts yet more responsibility onto individuals to battle the pandemic, rather than onto a dedicated bureaucracy. Rather than, say, closing potential COVID-19 hot spots like bars and providing income support to owners and workers, it relies on an incomplete technological fix to deal with failures to socially distance.
Because they are government programs, apps and technology generally should not be exempt from well-established policy-evaluation frameworks. All the privacy guarantees in the world are meaningless if the app doesn’t actually help to reduce the spread of COVID-19.
As digital technologies become increasingly pervasive and private-sector tech firms attempt to insinuate themselves ever deeper into government policy-making (as Apple and Google, the progenitors of this app, are trying to do here), it’s essential that governments avoid technological solutionism.
Otherwise, they will end up outsourcing basic policy-making responsibilities to private tech companies, just as was narrowly avoided when Sidewalk Labs abandoned Waterfront Toronto’s Quayside project in May.
And all of us need to stop thinking about social problems in terms of how tech can help address them. Because when all you have is an app, everything looks like data.
This article is republished from The Conversation under a Creative Commons license. Read the original article.