How data portability can help improve online safety

Did you notice anything surprising in DTI’s vision paper shared earlier this month? Maybe you noticed this goal we offer for tech policy: “Promote safety through choice - Transparency and accountability in platforms help govern how platforms moderate what users say online; and portability unlocks a future where users can gain more agency, and thus more safety, over what content they see.” The relationship between portability and safety hasn’t been well established; in this piece, I will dig in a bit more.

The greatest problems in internet policy today - the hardest, most intractable, and most harm-inducing for the world - fall under the broad concept of “online safety.” Attacks against people online (and offline, but coordinated or otherwise facilitated online), mis- and disinformation undermining truth and democracy, and other manipulation and malfeasance plague online services at all scales.

The emergence and professionalization of the trust and safety field, together with the coordinated efforts of technology companies, have done a great deal to mitigate harm online. And governments around the world are investing in legal instruments like the European Union’s Digital Services Act and the United Kingdom’s Online Safety Act to standardize those practices and improve overall transparency and accountability in the tech sector.

But online safety is not the kind of problem that can ever just be “solved,” job done, safety established, check, nothing more to do here. The more tools in the toolkit to support online safety, the better.

Meaningful user choice of services, powered by effective data portability, is one of those tools. Where a user perceives a specific platform as harmful, data portability empowers that user to choose to leave and – at least temporarily – avoid future exposure to harms delivered through that platform without losing access to existing data.

I’ve been making this case – of the power of user choice to help make the internet ecosystem better – for a few years now, in one form or another. I tend to get roughly three very different categories of pushback. So let’s dig into them.

People, if given a choice, will choose poorly.

This is the Idiocracy hypothesis, and I get it. But there’s no end to the downward spiral if we assume humanity as a whole is fundamentally irredeemable. Digital paternalism - removing options, or limiting them to only the healthy choices - will breed resentment and circumvention, just as it does in environments of broader information and communication repression. As a society we must invest in, and ultimately trust, methods other than limiting choice to encourage responsibility, just as we do with food and lifestyle.

Not all harms can be mitigated through control over what a user sees.

Some (major) online harms cannot be addressed by controlling what a user sees. Doxxing, for example – sharing the private, real-world names and addresses of online personae for the purpose of enabling offline harassment – doesn’t stop if the victim chooses not to see it. For these harms, portability isn’t a useful tool, at least not directly. Platform moderation is necessary for many types of online harms (and is encouraged or required by regulation). Over the long term, portability and choice can help align market pressure with regulatory pressure to encourage platform responsibility, but that’s an indirect contribution.

Meaningful switching requires alternatives that may not exist.

Platforms operate at significant scale. Portability doesn’t benefit online safety unless there are alternatives sufficiently close in functionality and effect to make switching viable, and scale can be an impediment to this. But markets exist to create scale, and there is ample regulatory interest in creating fair and contestable markets. There will be users who seek out safe services, enough to encourage investors and entrepreneurs to build them. Just as privacy has increasingly become a feature on which services compete, so too has safety.

Data portability is rising in prominence within the contexts of privacy and competition. But it offers the potential for even broader benefits than these, including for online safety.

To be clear: Data portability is far less powerful a tool for online safety than content moderation. The law of the instrument says, “If the only tool you have is a hammer, it is tempting to treat everything as if it were a nail.” Data portability is a hammer, and online safety is not a nail. But it sure is a spiky mess with plenty of sharp nail-like protrusions. And if we hammer a few of them back with the tools we can bring to bear, it will help.


