Talking about Trust

Last week, I gave a talk as part of a privacy class at the Rose-Hulman Institute of Technology, taught by my friend Sid Stamm. Data portability, and thus DTI’s work as an institution, has its roots in privacy, although it comes up more today in competition discussions. (And, as we have pointed out, it has benefits for other contexts including online safety.) It thus felt appropriate to speak to computer science students studying privacy, both about our work building end-to-end data transfer tools, which shows how data portability as a privacy right is realized in practice, and about our trust efforts, which address the privacy and security challenges that arise in the course of implementing data transfers.

I’ve worked on issues that could be lumped under a “trust” heading at many different times throughout my career. Sometimes it feels like the more we as a tech community work on trust, the less of it we feel; perhaps that’s a little cynicism showing, but perhaps not. Certainly, I and many other people I speak to feel a pressing need for more collective, strategic work on trust. And at the same time, there are many different “trust registries” and similar efforts to establish trust signals: processes for generating them, and databases for storing and sharing them – enough so that I’ve encountered multiple network-building efforts working to ensure compatibility among such signals.

This seeming conflict, expanding work yet expanding need, in fact makes sense, because the landscape for “data”, and the breadth of ways and places in which digital data is relevant, is stupefyingly large and ever growing. And around each new corner of this many-dimensional behemoth of a data ecosystem lie not just new opportunities for value, but also new opportunities for malfeasance.

And the short-lived dreams of “trustless” technology, powered by blockchains and math, rested on flawed assumptions. Trust is built not on technology alone, but on people and institutions, with technology serving purely as an implementing function. Trust is not something that can ever be automated entirely.

A pervasive illusion is that computers can be perfect, always correct, because a mathematical formula always produces the same answer. Right or wrong, 1 or 0, valid or invalid contract: binary assumptions about outputs can dominate thinking. But as soon as technology intersects with people, such overly simplistic assumptions break down at every turn. Thus, in practice, trust and safety systems in companies cannot eliminate all threats, and every step of training an artificial intelligence system on human data introduces human nuance, complexity, and uncertainty.

Meanwhile, there is huge positive value to be gained in allowing users to transfer their personal data directly between services, despite the inherent risk. So we must work to mitigate that risk as best we can, through strategic investments in trust processes and structures.

Here’s how I talked about DTI’s work in this direction to the Rose-Hulman students:

Our trust work reflects a major investment from DTI as an organization, and one that has been well-received by stakeholders in our orbit. We see a lot of value from a successful trust registry for data portability. And we’ll continue to share updates as the work progresses. We welcome all feedback on our plans. Stay tuned!


