Talking about Trust
Last week, I gave a talk as part of a privacy class at the Rose-Hulman Institute of Technology, taught by my friend Sid Stamm. Data portability, and thus DTI’s work as an institution, has its roots in privacy, although it comes up more today in competition discussions. (And, as we have pointed out, it has benefits for other contexts including online safety.) It thus felt appropriate to speak to computer science students studying privacy, both about our work building end-to-end data transfer tools, which shows how data portability as a privacy right is realized in practice, and about our trust efforts, which address the privacy and security challenges that arise in the course of implementing data transfers.
I’ve worked on issues that could be lumped under a “trust” heading at many different points throughout my career. Sometimes it feels like the more we as a tech community work on trust, the less of it we feel; perhaps that’s a little cynicism showing in me, but perhaps not. Certainly, I and many other people I speak to feel a pressing need for more collective, strategic work on trust. And at the same time, there are many different “trust registries” and similar efforts to establish trust signals, processes for generating them, and databases for storing and sharing them – enough so that I’ve encountered multiple network-building efforts working to ensure compatibility among such signals of trust.
This seeming conflict – expanding work yet expanding need – in fact makes sense, because the landscape for “data”, and the breadth of ways and places in which digital data is relevant, is stupefyingly large and ever growing. And around each new corner in this many-dimensional, many-sided behemoth of a data ecosystem lie not just new value opportunities, but also new opportunities for malfeasance.
And the short-lived dreams of “trustless” technology, powered by blockchains and math, rested on flawed assumptions. Trust is built not on technology alone, but on people and institutions, with technology serving purely as an implementing function. Trust is not something that can ever be entirely automated.
A pervasive illusion holds that computers can be perfect, always correct, because a mathematical formula always produces the same answer. Right or wrong, 1 or 0, valid or invalid contract – binary assumptions about outputs can dominate thinking. But as soon as technology intersects with people, overly simplistic assumptions break down at every turn. Thus in practice, trust and safety systems in companies cannot eliminate all threats, and every step of training an artificial intelligence system on human data introduces human nuances, complexities, and uncertainties.
Meanwhile, there is huge positive value to be gained in allowing users to transfer their personal data directly between services, despite the inherent risk. So we must work to mitigate that risk as best we can, through strategic investments in trust processes and structures.
Here’s how I talked about DTI’s work in this direction to the Rose-Hulman students:
- First, the problem statement: Imagine a hypothetical brand new photo management service, Sid’s Photo Sharing Garage. Imagine Sid offers you (the end user) a sweet deal: upload all your photos to him, and he’ll give you 10,000 Robux. He promises he won’t do anything untoward with your data – just click this button and say you want to transfer all of your photos to him. If you do that and Sid does something shady, you suffer, and the company that transferred your data to Sid faces consequences as well – maybe legal harm, but certainly reputational harm. (Sid does too, but only if the authorities can catch him!) Direct transfers reduce friction for users, but they reduce friction for harm as well.
- The European Union’s Digital Markets Act requires ‘designated gatekeepers’ to make data portability interfaces available to all EU-based third parties. The increased risk of harm creates a policy reason to make sure those third parties are responsible, e.g. not lying to users about what they’ll do with the data. But the DMA offers no specific guidance on how to calibrate such a check.
- Data protection law isn’t self-enforcing. So while it’s reasonable to say that EU companies should be expected to be in compliance, and that a gatekeeper is not an arbiter of compliance under the law, not all risk can be eliminated. Also, data portability is being built, and must be built, beyond just Europe and other regions with data protection laws.
- The solution for now has been bespoke, pairwise verification. Before companies B and C can receive data from company A’s portability interface, A has to verify each of them separately. If B and C also want data from D, D has to verify them separately again. And if B and C want to exchange data between themselves, they have to verify each other. The number of verifications grows with every pair of participating companies.
- At DTI we envision a different future, one where the significant overlap in verification questions can be addressed – where B and C can register once and be seen as trustworthy by A and D, and each other, reducing or even eliminating additional verification procedures. But to get there, we needed to understand the risks in server-to-server data portability, and to articulate mitigation measures for those risks.
- So we built a threat model to describe what can go wrong, and a trust model grounded in those threats to articulate what kinds of questions could be asked to develop trust. And now we’re working on a trust registry to implement the model, creating a system where an organization can be verified once according to our standard and have that status be used to gain (or at least streamline) access to data transfer mechanisms with many organizations. (A simplified sketch of the registry idea follows this list.)
- We’re piloting the registry now, with one platform partner and a few companies we’re evaluating. We’ve defined initial “trust levels”, and the principal output of the registry is a statement by us that a company qualifies for a specific trust level. In the context of this pilot, when a company hits our “level 2” (i.e., we have verified them as reaching a certain level of trustworthiness), that will unlock access to our pilot partner’s data portability APIs. Over time, we anticipate bringing more platforms and companies into this system.
- Trust is not particularly durable. A significant change to a company’s policies or practices can put it out of sync with a trust level it formerly met. (One of the great questions I got from a Rose-Hulman student was along the lines of: what happens when things change?) We are initially proposing to address this through a combination of annual review and complaint handling. There is also the idea of asking a company to self-report changes that pass some particular threshold (recognizing that that can be burdensome and inexact), or of running monitoring tools to, for example, notify us if a company’s published privacy policy changes (which risks false positives and inefficiency; see the second sketch after this list).
- For some of the toughest questions, we don’t yet have answers – particularly around thresholds of sufficiency in, e.g., privacy and security policies. We plan to build up some of those answers over time through case-by-case evaluations during the pilot project.
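To make the pairwise-versus-registry contrast concrete, here’s a minimal sketch in Python of how a registry-backed check could work. Everything in it – the TrustRegistry class, the numeric trust levels, the REQUIRED_LEVEL threshold – is an illustrative assumption on my part, not DTI’s actual design or implementation.

```python
# A minimal, hypothetical sketch: replacing pairwise verification with a
# shared registry. Names and levels are illustrative, not DTI's design.

REQUIRED_LEVEL = 2  # e.g., the pilot's "level 2" threshold for API access


class TrustRegistry:
    """A shared registry: each organization is verified once, centrally."""

    def __init__(self) -> None:
        self._levels: dict[str, int] = {}

    def record_verification(self, org: str, level: int) -> None:
        # In reality this would follow a human evaluation against a
        # threat-grounded trust model; here it is just a recorded result.
        self._levels[org] = level

    def attested_level(self, org: str) -> int:
        return self._levels.get(org, 0)


def grant_api_access(registry: TrustRegistry, requester: str) -> bool:
    """A data provider consults the registry instead of re-verifying."""
    return registry.attested_level(requester) >= REQUIRED_LEVEL


# Pairwise verification scales as O(n^2): every provider vets every
# requester. A registry scales as O(n): each organization is vetted once.
registry = TrustRegistry()
registry.record_verification("company-b", level=2)

print(grant_api_access(registry, "company-b"))  # True: meets level 2
print(grant_api_access(registry, "company-c"))  # False: not yet verified
```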
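And here’s an equally hypothetical sketch of the kind of monitoring tool mentioned above, which flags changes to a company’s published privacy policy by fingerprinting the page. The URL and storage are placeholders, and, as noted, a real tool would have to contend with false positives, since pages change for many reasons unrelated to policy substance.

```python
# A hypothetical privacy-policy change monitor: fingerprint the published
# policy page and flag differences for human review. The URL below is a
# placeholder, and real pages would need normalization (stripping dates,
# ads, etc.) to reduce the false positives mentioned above.

import hashlib
import urllib.request


def policy_fingerprint(url: str) -> str:
    """Fetch the page at `url` and return a SHA-256 hash of its bytes."""
    with urllib.request.urlopen(url) as response:
        return hashlib.sha256(response.read()).hexdigest()


def has_changed(url: str, last_seen: str) -> tuple[bool, str]:
    """Compare the current fingerprint to the last one we stored."""
    current = policy_fingerprint(url)
    return current != last_seen, current


# Example (hypothetical URL): run on a schedule, persist the fingerprint,
# and notify a human reviewer when it changes.
# changed, fp = has_changed("https://example.com/privacy", stored_fp)
```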
Our trust work reflects a major investment from DTI as an organization, and one that has been well-received by stakeholders in our orbit. We see a lot of value from a successful trust registry for data portability. And we’ll continue to share updates as the work progresses. We welcome all feedback on our plans. Stay tuned!