Metrics for success in data portability work
How do you measure success?
The tech world is fond of acronyms like KPIs – key performance indicators – and OKRs – objectives and key results. And of course there are always SMART goals, or, for the neophytes, goals that are “Specific, Measurable, Achievable, Relevant, and Time-Bound.”
Typically these metrics gravitate toward quantitative, measurable targets, and the threshold for success is similarly numeric. With KPIs, 100% attainment is often the goal; Google’s use of OKRs, by contrast, famously comes with a 60% target threshold, and if a team ever reaches 100%, it’s considered a miss – a sign the goals weren’t ambitious enough.
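To make the arithmetic concrete, here is a minimal sketch of how OKR scoring conventions like the one above work. It assumes the common practice of grading each key result on a 0.0–1.0 scale and averaging them into an objective score; the function names and the exact interpretation bands are illustrative, not any official methodology.

```python
def score_objective(key_results: list[float]) -> float:
    """Average the 0.0-1.0 grades of an objective's key results."""
    if not key_results:
        raise ValueError("an objective needs at least one key result")
    return sum(key_results) / len(key_results)

def grade_objective(score: float) -> str:
    """Interpret a score against the ~60% target threshold described above."""
    if score >= 1.0:
        # Hitting 100% is read as a miss: the goals weren't ambitious enough.
        return "sandbagged: goals were not ambitious enough"
    if score >= 0.6:
        return "on target"
    return "below target: revisit strategy or scope"
```

For example, an objective with key results graded 0.7, 0.5, and 0.8 averages to roughly 0.67 and lands "on target", while a clean sweep of 1.0s would, under this convention, signal the objectives were set too low.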
But sometimes success isn’t reducible to just a number. Or to put it another way, sometimes the number doesn’t tell the best story of the success.
That deeper lens requires considering all three of an organization’s outputs, capacities, and outcomes – and that’s the framework in which the Data Transfer Initiative is developing its metrics of impact. This piece explains that framework and offers some insight into how we are beginning to measure our success.
The outcomes and outputs of data portability
The Data Transfer Initiative is a social welfare organization (a “501(c)(4)” under the U.S. tax code, for those who track such things). As a result, all of our work must serve DTI’s mission: “Empower people by building a vibrant ecosystem for simple and secure data transfers.”
This mission statement breaks down into three outcomes that drive strategy development. First, we want to help produce more, better data transfer products. Second, we want to improve relevant policy outcomes for data portability. And last but not least, we want to ensure DTI, as an organization, is healthy and sustainable.
Outcomes are funny sometimes. They’re rarely in anyone’s direct control. Sometimes you can do nothing and still end up with a desirable and valuable outcome. At other times, you can do everything in your power to try to make something happen, and it can remain beyond your reach. I always think about a colleague I had at Mozilla years ago, who very clearly (and wisely, particularly in retrospect…) stated that she would under no circumstances accept as a stated goal “pass U.S. federal privacy law.”
Outputs, on the other hand, are comparatively easy to measure and to deliver. “Write two blog posts in the month of October,” for example, would be a nice tractable target for DTI. (And we’re already halfway through with this piece!) And it’s fairly easy to articulate outputs at DTI: we do original policy analysis and writing on data portability; we host and participate in community events and substantive discussions; we contribute to and shepherd the open source repository powering Data Transfer Project technologies; and we engage with partners and the developer community.
But what’s the through line connecting these outputs to the outcomes we want to deliver, the milestones that show we are making progress in line with our strategy and our mission? And how, other than the big-ticket moments where we can see how things are going with our outcomes, can we take a step back and evaluate whether the tactics we’re pursuing are the right ones?
If outputs are directly measurable and outcomes are meaningful, a layer in between helps bridge the two. One useful frame for that middle layer is capacities. In that sense, our outputs build towards something perhaps more qualitative than quantitative, but against which strategies can be developed, and from which a clear influence over potential outcomes can be seen.
Building better mousetraps, I mean metrics
There’s often a tradeoff between measurability and meaning when gauging success. Further complicating that is the risk that the metrics cart will pull the strategy horse – that as soon as we complete the exercise of translating strategy into metrics, we’ll build our tactics in service of those metrics, and thus potentially miss out on opportunities for more impact and better outcomes through sheer myopia.
Capacities help with this. They represent resources that can be built up by tactics and measurable outputs, and at the same time, contribute to the likelihood of good outcomes occurring. And articulating the interstitial and visible goals gives us a framework by which to look for new tactics we could pursue, and prioritize among the many options to maximize our return on investment.
Metrics for product building are perhaps a bit more tractable to develop as compared to policy work. But DTI’s work is a bit different from centralized software engineering processes. As our new social workstream makes clear, we do our product work in a collective way. With end-to-end data transfers, built on models designed for use by many different services, we can rarely – if ever – sit down and just code and ship a product. Our work is inherently collective on many levels, and thus depends on different thinking for strategies and metrics.
So what are some of these capacities we’re building? Here are a few on our current working list:
- Improved public knowledge of data portability;
- Active and engaged allies and partners;
- Positive DTI brand and reputation;
- New explorations for data transfer products and infrastructure; and
- A healthy community of contributors and collaborators.
These may not be right yet. It’s an iterative process. But they tell the story of some of the structural goals we’re building towards, and some of the pieces that will help advance the outcomes we’re looking for, including more data transfer products and effective policy outcomes. And the perspective they provide, looked at together, helps resist over-rotating on any one particular metric, because the goal isn’t just to take meetings or to write code – it’s to take meetings that help our community be healthy and engaged on these issues, and to write code that breaks new ground and adds sustainable value.
Looking to the ecosystem
These are DTI’s metrics. A separate, but equally interesting question is: What would metrics look like for the data portability ecosystem as a whole? Lisa’s recent post on the goals of portability calls out some “bad metrics” for portability overall – such as counts of user movements, or shift in market share. Measuring user empowerment is hard. The three-part outcomes, capacities, and outputs framework doesn’t quite fit, as it’s designed for organizations and strategies rather than ecosystem health.
But I’ll leave that exercise for a future day.