<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://dtinit.org/feed.xml" rel="self" type="application/atom+xml" /><link href="https://dtinit.org/" rel="alternate" type="text/html" /><updated>2026-04-22T04:29:02+00:00</updated><id>https://dtinit.org/feed.xml</id><title type="html">Data Transfer Initiative</title><subtitle>Home page for the Data Transfer Initiative, a nonprofit organization dedicated to promoting data transfers</subtitle><entry><title type="html">Web browsers - a data portability patchwork</title><link href="https://dtinit.org/blog/2026/04/14/web-browsers-data-portability-patchwork" rel="alternate" type="text/html" title="Web browsers - a data portability patchwork" /><published>2026-04-14T00:00:00+00:00</published><updated>2026-04-14T00:00:00+00:00</updated><id>https://dtinit.org/blog/2026/04/14/web-browsers-data-portability-patchwork</id><content type="html" xml:base="https://dtinit.org/blog/2026/04/14/web-browsers-data-portability-patchwork"><![CDATA[<p>I often hear it claimed that there is no demand for data portability due to a lack of consumer awareness. That data portability isn’t that important for switching after all. And that it can only work with sector-wide common standards and governance frameworks in place.</p>

<p>Although not perfect, browser data portability is a compelling case study for busting all of these myths.</p>

<p><strong><em>Browsers are digital wallets</em></strong></p>

<p>Browser users – basically everyone – are not a homogeneous group.</p>

<p>For many users, the choice and setup of the browser is a deeply personal and personalised experience. Their tabs, bookmarks, favourites, reading lists, extensions and search histories are core to their experience of browsing and consuming online. But for others, the default browser will always do, straight out of the box.</p>

<p>Privacy protection is another issue that can elicit a wide range of reactions, including anger and fear, general mistrust, apathy, and more positive value recognition.</p>

<p>Wherever a user sits on these overlapping spectrums, ongoing operational access to sensitive data by the browser can be critical to users’ experience online. For example, even the least engaged browser user may be relieved when their browser recalls their password for their Amazon shopping account. And a privacy-conscious user may still be grateful to have their payment card details offered up at the checkout.</p>

<p>This is because browsers are not just a window to the web, with personalisation as an add-on. Browsers now also double up as digital wallets, storing critical information that makes our online browsing and shopping experiences more efficient and convenient.</p>

<p>So when people choose to switch browsers – yep, you aren’t reading this on Netscape or Internet Explorer, are you? – it is important they have the option to take their useful data with them. Engaged users don’t want to start from scratch re-setting all of their bookmarks and personal touches. Nor does the average user want to search for their physical wallet every time they need to make a purchase, or reset complex passwords for each website they return to.</p>

<p><strong><em>A patchwork of solutions</em></strong></p>

<p>Browser users don’t need to be aware of the concept of data portability, or be excited by their personal data rights. They just make a few extra clicks as they install and onboard with a new browser. This is how data portability works best: when it blends into the background, serving a higher purpose.</p>

<p>And browser switching is certainly not niche. On desktop devices, the market has tipped to a new winner several times over the last quarter of a century, meaning most people have changed the browser they use on their laptop or desktop computer at some stage. Switching rates have historically been lower for mobile browsers for a host of reasons, but even if just 16% have switched (<a href="https://assets.publishing.service.gov.uk/media/67d1abd1a005e6f9841a1d94/Final_decision_report1.pdf">according to a CMA survey of UK mobile browser users</a>) then we could be talking about a billion people worldwide. And the introduction of choice screens, such as those imposed by the EU’s Digital Markets Act (DMA), is driving up consumer engagement, with browsers such as <a href="https://press.opera.com/2025/11/13/opera-ios-growth-europe/#:~:text=The%20company's%20daily%20active%20iOS,browser%20competition%20in%20the%20EU.">Opera</a>, <a href="https://brave.com/blog/100m-mau/#:~:text=As%20of%20September%2030th%2C%20the,100%20million%20\(and%20counting!\)">Brave</a> and <a href="https://forum.vivaldi.net/topic/115872/vivaldi-reaches-4-million-users-worldwide">Vivaldi</a> all reporting growth in user numbers since DMA implementation in 2024.</p>

<p>Although browser switching works relatively well, the browser data portability landscape could affectionately be described as a scruffy patchwork: it just about does the job; new patches keep getting added; but some gaps remain and each patch is different from the one before. But that variation doesn’t necessarily seem to matter a great deal. For a given user looking to switch – perhaps once every few years, if that – it just needs to work for them, there, in that context. The fact that a different user switching between two other browsers on a different platform may be getting a completely different experience doesn’t really matter at all.</p>

<p><strong><em>Desktop vs mobile</em></strong></p>

<p>Browser data portability has worked pretty well on desktop for some time. The majority of browsers support the direct passive transfer of browser data without the user needing to handle any files themselves. This means that each browser on desktop is generally able to include a data import feature in their installation setup wizard. This all tends to work pretty seamlessly.</p>

<p>To illustrate, if someone wants to install Firefox on their laptop, they can import their data from Chrome as part of the setup process. After a few clicks (see screenshots below), Firefox is essentially able to reach into their local files and extract the data it needs.</p>

<figure>
<img src="/images/blog/image1-4-13.png" alt="Screenshots of transferring data from Chrome to Firefox" width="650" style="display:block; margin-left:auto; margin-right:auto;" />
</figure>

<p>On mobile devices, the desktop method for direct transfers isn’t possible due to browser sandboxing. But the options for moving data between browsers are increasing and improving on mobile, with very little fanfare.</p>

<p>For example, on the iPhone, users can export a zip file of their Safari data to their files, through a relatively quick and seamless UX accessed via the device settings. But as I have written before, data portability takes two. And we are now starting to see alternative browsers on iOS develop the necessary import functionality, including <a href="https://www.macobserver.com/news/chrome-on-iphone-adds-guided-safari-import/">Chrome</a> in January 2026, and <a href="https://vivaldi.com/blog/vivaldi-on-mobile-7-9/">Vivaldi</a> last month.</p>

<p>The user journeys for the Safari exports and the alternative browser guided imports are pretty slick if you know where to look, and the transfer includes (subject to user choice) the full spectrum of useful data including bookmarks, history, passwords and credit card details.</p>

<figure>
<img src="/images/blog/image2-4-13.png" alt="Screenshots of exporting data from Safari" width="650" style="display:block; margin-left:auto; margin-right:auto;" />
</figure>

<p>There are further developments to suggest direct transfers will be supported on mobile in time. Google’s Data Portability API – yet to be fully explored by rival browsers to my knowledge – enables one-off and ongoing direct transfers of a user’s Chrome data to third-party services, <a href="https://developers.google.com/data-portability/schema-reference/chrome">including history, bookmarks, reading lists and more</a>. Apple has also recently announced <a href="https://developer.apple.com/documentation/browserkit/transferring-browsing-data-to-another-browser">a tool that will facilitate direct transfers</a> of similar data scopes from Safari.</p>
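<p>As a rough illustration of how a rival browser might ask the Data Portability API for Chrome data, the sketch below only assembles the request; nothing goes over the wire. The endpoint and scope naming follow Google’s public documentation as I understand it, but treat both as assumptions to verify against the docs linked above rather than settled detail.</p>

```python
# Hypothetical sketch: assembling a Data Portability API request for Chrome
# data. The endpoint and scope patterns follow Google's published docs as I
# understand them; verify against the documentation before relying on either.

# Chrome resource groups exposed by the API (per the schema reference above).
CHROME_RESOURCES = ["chrome.bookmarks", "chrome.history", "chrome.reading_list"]

def build_initiate_request(resources):
    """Return the URL, JSON body, and OAuth scopes for an archive request."""
    return {
        "url": "https://dataportability.googleapis.com/v1/portabilityArchive:initiate",
        "json": {"resources": resources},
        "scopes": ["https://www.googleapis.com/auth/dataportability." + r
                   for r in resources],
    }

req = build_initiate_request(CHROME_RESOURCES)
```

<p>The interesting design point is that each data group has its own consent scope, so an importing browser can ask for bookmarks and history without ever touching more sensitive groups.</p>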

<p><strong><em>Security vs usefulness</em></strong></p>

<p>Passwords and credit card details are among the most sensitive and dangerous data to expose to security threats. They also happen to be among the most practically useful information to share with your web browser. So there is an inherent tradeoff between the usefulness and security of browser data portability.</p>

<p>The market seems to have naturally converged towards a tiered approach that supports the direct browser-to-browser transfer of less sensitive data such as bookmarks and histories, while requiring some additional user action to move more sensitive data such as passwords and credit card information. Whether it is the user needing to export data to files before manually importing, or interacting with a separate password management service, these added steps create friction for the user and affect the user experience. But this friction is also a positive trigger for users to think carefully about their choices and be certain that they know and trust the destination.</p>
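<p>To make the tiered model concrete, here is a minimal sketch in Python. The categories, tier names and rules are entirely hypothetical – no browser publishes a policy table like this – but they capture the shape of the approach described above.</p>

```python
# Illustrative sketch of the tiered approach described above. The categories,
# tiers and rules are hypothetical, not taken from any actual browser.
from enum import Enum

class Tier(Enum):
    DIRECT = "direct browser-to-browser transfer"
    USER_MEDIATED = "explicit export/import step required"

# Less sensitive data moves directly; credentials and payment details
# require the user to handle the export themselves.
TRANSFER_POLICY = {
    "bookmarks": Tier.DIRECT,
    "history": Tier.DIRECT,
    "reading_list": Tier.DIRECT,
    "passwords": Tier.USER_MEDIATED,
    "payment_cards": Tier.USER_MEDIATED,
}

def transfer_method(category):
    # Default unknown categories to the more protective tier.
    return TRANSFER_POLICY.get(category, Tier.USER_MEDIATED)
```

<p>Note the default: anything a vendor has not explicitly classified falls into the more protective, user-mediated tier.</p>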

<p>A tiered approach appears logical to me, and is consistent with the approach we have taken to our Data Trust Registry, which has different requirements depending on data sensitivity.</p>

<p><strong><em>A patchwork that works</em></strong></p>

<p>Browser vendors have done a decent job at enabling their users to take their data with them, pulling together this patchwork of transfer solutions without sector-wide requirements or an overarching coordination body. It may not be perfect or standardised, and there will continue to be some tensions between security and user experience to work through, but that doesn’t seem to matter too much.</p>

<p>The fact is, data portability really does support browser switching in its purest form, and it works pretty well.</p>

<p>I’m looking forward to continuing down this rabbit hole I recently stumbled into, to see what role DTI might play in helping the industry tackle any future challenges as they arise.</p>]]></content><author><name>Tom Fish</name></author><category term="policy" /><summary type="html"><![CDATA[Although portability for browser data may entail a patchwork of solutions, it is ultimately a compelling case study. In this email, we dig in.]]></summary></entry><entry><title type="html">Sense and Sensitivity</title><link href="https://dtinit.org/blog/2026/03/24/sense-and-sensitivity" rel="alternate" type="text/html" title="Sense and Sensitivity" /><published>2026-03-24T00:00:00+00:00</published><updated>2026-03-24T00:00:00+00:00</updated><id>https://dtinit.org/blog/2026/03/24/sense-and-sensitivity</id><content type="html" xml:base="https://dtinit.org/blog/2026/03/24/sense-and-sensitivity"><![CDATA[<p>In Jane Austen’s Sense and Sensibility, a secret engagement is a major plot point. The protagonist Elinor Dashwood is told the secret in chapter 22, and has much pain keeping it until chapter 37 when she can finally tell her sister Marianne:</p>

<blockquote><p>“For four months, Marianne, I have had all this hanging on my mind, without being at liberty to speak of it to a single creature; knowing that it would make you and my mother most unhappy whenever it were explained to you, yet unable to prepare you for it in the least. It was told me,—it was in a manner forced on me by the very person herself, whose prior engagement ruined all my prospects…”</p></blockquote>

<p>Elinor’s torment only makes sense if the reader absorbs the sensitivity of a secret engagement. Many readers bounce off Regency novels due to having little engagement with sensibilities like these.</p>

<p>I use this example not just for the pun in the title of this piece (ok, 80% for the pun) but to illustrate that all personal data can be sensitive. We can’t decide that an engagement, just because it’s usually announced happily, is a non-sensitive piece of data. The history of startups and tech platforms shows that we’re terrible at recognizing this. Many companies have blithely revealed information about women to their stalkers. My first pregnancy, though a secret to everybody I knew due to miscarriage, was not a secret to advertisers online because my search terms were shared. Companies cannot make these decisions for people.</p>

<p>In some areas, we’ve thoroughly accepted the privacy principle of “protect all data.” It’s nearly required for personal data in transit, as we’ve built the expectation for TLS everywhere. “<strong>There is no such thing as non-sensitive web traffic</strong>”, says <a href="https://https.cio.gov/everything/">https://https.cio.gov/everything/</a>, and thus all Web traffic should be encrypted.</p>

<p>—</p>

<p>When we make the case for data portability, it is tempting to forget all this. When we ask companies to make data portability a user right, and they respond with very short lists of allowed destinations due to security barriers, it’s tempting to ask them to dismantle those barriers entirely.</p>

<p>Companies have legal and reputational liability when they let users share personal data via platform features. Companies are well aware of the many opportunities for fraud. For example, users convinced they’re sharing data with a reliable company may be fooled by an impersonation attack. And so companies do what companies do, and build complex protective systems of service identification, data protections, security reviews, OAuth scopes and sensitivity levels.</p>

<p>Those systems are:</p>

<ul>
  <li>Very expensive for both the platforms and the companies applying for access,</li>
  <li>Significant barriers to users actually porting their data,</li>
  <li>Often wrong about sensitivity levels,</li>
  <li>Inconsistent within a single platform, and</li>
  <li>VERY inconsistent across the industry.</li>
</ul>

<p>Sensitivity levels are a <a href="https://learn.microsoft.com/en-us/compliance/assurance/assurance-data-classification-and-labels">data classification</a> framework - broad groupings of kinds of data, linked to protection levels. It’s understandable that companies would try this approach for personal data, as companies already use this kind of framework for corporate data. A company can decide that its customer prospect list is “restricted” and its employee foosball chat is merely “private”. A company can decide that vendor X is secure enough to work with restricted data and vendor Y is secure enough to serve employee chat rooms. But does the same model work when companies apply it to personal data?</p>

<p>First, it’s much harder for companies to apply sensitivity levels to all personal data. A major recording artist’s music listening activity is more sensitive than my photos of my kids. My photos are more sensitive than my friend Jamie’s blood-glucose data (because he’s already donated it as a public research dataset). Any reasonable sensitivity classification of listen history, photos and medical data would reverse this ordering. Second, it always seems safer to put data in a more secure category. If some photo albums contain images of passports and drivers’ licenses, then all albums are considered the most sensitive. If some email folders contain password reset links, then all email folders are the most sensitive.</p>
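<p>A toy model makes the misordering concrete. With invented names and sensitivity numbers, a fixed category-to-level table ranks these three items in exactly the reverse of their real, context-dependent sensitivity:</p>

```python
# Toy illustration: a fixed data-classification table orders three items one
# way, while the owners' actual contexts order them the exact opposite way.
# All names and sensitivity numbers are invented.
CATEGORY_LEVEL = {"listen_history": 1, "photos": 2, "medical": 3}  # 3 = most protected

items = [
    {"kind": "listen_history", "context": 3},  # a famous artist's listening activity
    {"kind": "photos",         "context": 2},  # ordinary family photos
    {"kind": "medical",        "context": 1},  # already-public research dataset
]

by_category = sorted(items, key=lambda i: CATEGORY_LEVEL[i["kind"]])
by_context = sorted(items, key=lambda i: i["context"])

def kinds(xs):
    return [i["kind"] for i in xs]

# The classification framework protects exactly the wrong things most.
assert kinds(by_category) == list(reversed(kinds(by_context)))
```

<p>No adjustment of the table fixes this, because the table only sees categories; the sensitivity lives in the context of the person the data is about.</p>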

<p>As a result, users are burdened by heavy-handed protections. When a company forces users to protect their data too carefully, it takes away choice <em>and</em> incentivizes workarounds.</p>

<p>The workarounds are part of why a platform can have inconsistent security protections. Many photo sites, for example, have short lists of trusted partners who can access personal data APIs and do data transfers. On the big platforms, those partners are formally and consistently vetted to make sure they are trustworthy. However, because people do want to share their photos (to photo book printers, to family, to alternate services), the photo album interfaces typically have something like this:</p>

<figure>
<img src="/images/blog/sharingux.png" alt="A black window for data sharing with little information or optionality" width="650" style="display:block; margin-left:auto; margin-right:auto;" />
</figure>

<p>Now I <em>can</em> share my photos with an unvetted startup by creating an insecure link. Even if my judgement is sound and the startup is trustworthy, others can still use that link if it leaks.</p>

<p>The situation is even worse with email data access. The only common workaround for email is for users to share their email passwords with third-party software. If I realize that’s terribly risky, I am left with almost no safe choices for third-party email management services.</p>

<p>Protecting users by taking away choice, while also allowing insecure alternatives, is the worst compromise. It’s time to help users make choices with their own data and <strong>also</strong> to protect them.</p>

<p>The UX for helping users safely make their own choices is yet to be designed. What would it look like? The user could be shown ratings and offered highly trusted destinations before making a riskier choice. The user could be allowed to decide if their own data is public, private but not very sensitive, or of the highest sensitivity. Companies hosting personal data could offer to filter content objects to help the user choose which content goes where.</p>

<figure>
<img src="/images/blog/uxmockup.png" alt="A sketch of a more informed sharing experience with information on the requesting party presented in the transaction" width="650" style="display:block; margin-left:auto; margin-right:auto;" />
<figcaption>Mockup of UX for safety and choice in data access</figcaption>
</figure>

<p>Defaults are still important, but after being offered sensible defaults, users could apply their deeper knowledge and trade-off weights.</p>

<p>Let’s improve this whole situation. Over the last 20 years we’ve developed extensive systems and UX allowing users to make their own purchasing decisions online (verified brands, number of buyers, ratings and reviews). Modern online retail shows that this could be an empowering and smooth experience. We can make a lot of progress empowering users with their own data too.</p>]]></content><author><name>Lisa Dusseault</name></author><category term="trust" /><summary type="html"><![CDATA[Sensitivity levels are a data classification framework; but personal data of all forms can be sensitive, or not, depending on personal context.]]></summary></entry><entry><title type="html">A turning point for AI portability</title><link href="https://dtinit.org/blog/2026/03/10/turning-point-AI-portability" rel="alternate" type="text/html" title="A turning point for AI portability" /><published>2026-03-10T00:00:00+00:00</published><updated>2026-03-10T00:00:00+00:00</updated><id>https://dtinit.org/blog/2026/03/10/turning-point-AI-portability</id><content type="html" xml:base="https://dtinit.org/blog/2026/03/10/turning-point-AI-portability"><![CDATA[<p>Two years ago, I <a href="https://dtinit.org/blog/2024/01/02/portability-predictions">made a prediction</a> that in data portability, supply would exceed demand. The GDPR helped create a universal expectation that people should be able to – at the very least – download their data. (To be clear, Article 20 also requires companies to directly transfer data as well, though that hasn’t manifested as much in practice.) Companies around the world have adopted various forms of data download functions, whether it’s a few clicks within a website and an archive is emailed to the user, or a form that must be filled out.</p>

<p>Personal data is immensely valuable. And much of that value has not yet been unlocked. It’s great to be able to get a copy of your data for your own archives and reference, and for switching services. But the real magic happens downstream, with vertical innovation: building tools and services that create new value from that data, including by integrating it across its many origins.</p>

<p>The tide is turning. Awareness of digital footprints has been growing for years, and now everyday people are learning more about what that means and how it can help them do things with their data. The fundamental creativity of technology builders is awakening. In parallel, people are generating more and more personal data, and consequently more potential downstream value. A huge factor amplifying these effects is AI.</p>

<p>Regular readers of this outlet know that DTI has been pushing for the importance of personal data portability in the context of AI for <a href="https://dtinit.org/blog/2023/11/21/future-AI-portable">quite</a> <a href="https://dtinit.org/blog/2024/06/04/digging-in-personal-AI">some</a> <a href="https://dtinit.org/blog/2025/02/11/future-of-AI-portability">time</a>. These have been predictions of what’s to come and guidance on how to shape the future, drawn largely from my work and experience in the global tech sector, and my understanding of the possibilities of technology and how it can be both used and controlled. Three dynamics are emerging that validate this outlook:</p>

<ol>
  <li><strong>Developers are building tools</strong> to get value out of personal data – independent developers and builders, not just large corporations – both with and through AI. Check out the reception for <a href="https://www.linkedin.com/pulse/worlds-first-ai-portability-hackathon-chris-riley-urq8c/">the world’s first AI portability hackathon</a>, which DTI recently helped organize.</li>
  <li>For better or worse, people are freely adopting tools like OpenClaw and giving them access to all of the personal data they possess, including their local files and access credentials to remote services. This is a wildly insecure path, but it is being widely pursued nevertheless, because <strong>the value is there</strong>.</li>
  <li>People are making choices about which AI service to use not based on performance but values, including flash reactions to news developments. And when they decide to switch, <a href="https://claude.com/import-memory">service providers</a> and <a href="https://www.instagram.com/reels/DUthzqTj7tx/">internet</a> <a href="https://www.threads.com/@mike.allton/post/DVTYKFOlfgV?xmt=AQF0bsUvkTb5Uv24HfVZtRjA9ckdimArxaonCJqpvtljeeyHGAsrwLGVIdtsoMpiFwAAfpE&amp;slof=1">commenters</a> are walking them through the best currently available pathways to <strong>transfer their data over</strong>.</li>
</ol>

<p>My colleague Tom <a href="https://dtinit.org/blog/2026/01/13/portability-predictions-2026">offered a prediction</a> this year as well: “Data portability use cases will be proven as commercially viable.” At the hackathon in late February, there was at least one angel investor present to look for opportunities. I think Tom’s right, and alongside that, there will be rapid acceleration on the growth curve of demand for and adoption of data portability and personal data use, in and with AI.</p>

<p>Why is AI accelerating portability? First, it helps people prototype technologies based on little more than a concept, reducing technical knowledge and experience barriers. Opinions vary on whether “vibe coding” and similar AI-assisted development can substitute for production-quality or long-term maintainable software. However, it’s hard to deny that it makes it easier to test out ideas and hypotheses.</p>

<p>Second, it unlocks new recommendation and suggestion power based on user tastes. While this is perhaps fairly basic functionality, it’s incredibly valuable to help someone identify new music they might want to listen to, restaurants they might want to try, or products they might want to purchase – both to the individual and to the enterprise. If, as posed by Eric Seufert, <a href="https://mobiledevmemo.com/everything-is-an-ad-network/">“everything is an ad network”</a>, then everything must also be a potential data portability use case.</p>

<p>Finally, AI interactions are themselves a new source of interesting and valuable data. People talk with their chatbots about lots of things. While some of this data can be extremely personal and sensitive, lots of it also can be extremely valuable, as we know from the ways in which it is used in fine-tuning and improvements to the AI service itself. These same learnings and personalizations are of use in many other contexts as well.</p>

<p>But, how much is this last part true in practice? What form is portability of personal AI data taking today? Are the current tools and methods making the right data available? Will there be trust mechanisms in place, or will users be encouraged (or misled) to transfer potentially sensitive chat histories to new services without safeguards?</p>

<p>DTI has <a href="https://dtinit.org/blog/2025/08/26/path-forward-AI-portability">articulated our principles</a> for how it should work in practice. TL;DR: <strong>We aren’t there yet.</strong> Portability demand is growing. Can the supply keep up?</p>

<p>It’s great to see experimentation with memory transfers, as Anthropic is doing. I appreciate, too, that you can still export your raw personal data from Claude – as you can from ChatGPT and other AI services. I hope, but cannot be certain, this will continue. And the direct transfer of such data, as articulated in GDPR Article 20, typically remains a work in progress, with few exceptions. In the age of possibility brought about by modern AI, I struggle to imagine that <a href="https://dtinit.org/blog/2026/02/10/not-rocket-science">technical feasibility</a> could be a plausible barrier.</p>

<p>Trust is missing here as well. Our <a href="https://dt-reg.org/">trust registry project</a>, nearing the end of its <a href="https://dtinit.org/blog/2025/10/07/announcing-data-trust-registry">pilot phase</a>, vets third-party recipients of direct transfers of personal data to help protect people – checking that their data will not be stored insecurely or abused, and that relevant consent mechanisms meaningfully reflect what the company will do with the data.</p>

<p>Contrast DTI’s trust work with the realities of OpenClaw, which Simon Willison has described as the technical development most likely to result in a “<a href="https://simonwillison.net/2026/Jan/30/moltbook/">Challenger disaster</a>.” People are wantonly opening their local drives and connecting their access credentials to AI agents they not only do not actively control, but in many cases do not understand.</p>

<p>I have confidence in DTI’s partners and affiliates, who together lead on data portability in all its implementations. Joining us in our work means supporting our mission: “Empower people by building a vibrant ecosystem for simple and secure data transfers.” These companies make personal data available through many methodologies, including downloads, Data Transfer Project-powered direct transfers, and APIs. With our affiliate Inflection, we <a href="https://dtinit.org/blog/2024/08/26/inflection-AI-portability">shipped a data model</a> for conversation histories designed to maximize effective reuse. And our affiliates Fabric and koodos are building new tools and open ecosystems around personal context portability in AI, including <a href="http://context-use.com/">this brand new context-use tool</a> from Fabric. Context-use is a local, open source tool that converts user archives like full ChatGPT conversations and Instagram stories into personal context for agents like OpenClaw. In this way, agents are able to use full personal context safely without accessing primary user accounts.</p>
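<p>To give a sense of what a portable conversation history can look like, here is a minimal sketch. The field names are purely illustrative – this is not the data model DTI and Inflection published – but a plain, service-neutral record along these lines is the kind of thing an importing service or local agent could consume.</p>

```python
# Minimal illustrative sketch of a portable conversation-history record.
# Field names are hypothetical, not the published DTI/Inflection data model.
import json
from dataclasses import dataclass, field, asdict

@dataclass
class Message:
    role: str       # "user" or "assistant"
    text: str
    timestamp: str  # ISO 8601

@dataclass
class Conversation:
    conversation_id: str
    source_service: str
    messages: list = field(default_factory=list)

    def to_json(self):
        # A plain, service-neutral serialization an importer could consume.
        return json.dumps(asdict(self), indent=2)

convo = Conversation("c-001", "example-ai-service")
convo.messages.append(Message("user", "What is data portability?", "2026-03-10T12:00:00Z"))
```

<p>The value of even a simple shared shape like this is that an exporter and an importer never need to know about each other’s internal storage – only about the record format in between.</p>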

<p>But I am worried about a reversion to historical patterns of trapping users in online services by their own data. Where there is money to be made, there is incentive to capture as much of it as possible. The question I asked in November 2023 has not been fully answered: “whether the future of generative AI will lock users into new technology silos, or empower them by ensuring portability.”</p>

<p>I’m also worried about privacy and security problems that could arise from an ecosystem of data movement that develops without collaboration and considerations of trust. In other portability contexts, great care is taken in scoping the data made available and in user understanding of the transfer and its safety. Without substantial investment in and coordination of portability, more problems – avoidable problems – will occur.</p>

<p>I’m not the only one thinking about the risks of consolidation and security in data flows. Regulation is on the horizon. In the EU’s recent DMA review process, <a href="https://www.openmarketsinstitute.org/publications/open-markets-submits-review-of-the-digital-markets-act-considerations-on-cloud-and-ai">Open Markets Institute</a> and other commentators explicitly called on the European Commission to designate virtual assistants and chatbots under the DMA. Megan Kirkwood at Tech Policy Press wrote <a href="https://www.techpolicy.press/will-the-eu-designate-ai-under-the-digital-markets-act/">an overview of the issue</a>.</p>

<p>In the United States, at the state level at least, there is ample regulatory appetite. In 2025 alone, <a href="https://www.uschamber.com/technology/the-hidden-cost-of-50-state-ai-laws-a-data-driven-breakdown">more than 1100 AI-related bills</a> were introduced in U.S. states. The <a href="https://ash.harvard.edu/resources/utah-digital-choice-act-reshaping-social-media/">Digital Choice Act</a> in Utah, although it is not without controversy and challenge in implementation, includes substantial data portability obligations for social media services; and similar laws have been proposed in several other states. It’s not hard to imagine these two forces coming together.</p>

<p>DTI doesn’t take a position on regulatory matters, and we recognize that these are complex issues and regulation inherently involves tradeoffs. But we also recognize that regulation in some form is inevitable, regardless of one’s views on the merits.</p>

<p>Now is the time to get a head start on building portability infrastructure in AI the right way – together. We can, and should, collaborate on shared tools and methodologies to export and import personal data in AI, including both conversation histories as well as higher-level memories and contexts. It won’t take radical new engineering. Just the space and collective will to coordinate. And we at DTI exist to facilitate precisely this.</p>

<p>We invite you to <a href="https://dtinit.org/contact-us">join us</a> on this journey.</p>]]></content><author><name>Chris Riley</name></author><category term="AI" /><summary type="html"><![CDATA[The world of personal data in AI is changing as developer interest grows and portability falls short. Will collaboration or regulation come first?]]></summary></entry><entry><title type="html">Putting a price on portability</title><link href="https://dtinit.org/blog/2026/02/24/putting-a-price-on-portability" rel="alternate" type="text/html" title="Putting a price on portability" /><published>2026-02-24T00:00:00+00:00</published><updated>2026-02-24T00:00:00+00:00</updated><id>https://dtinit.org/blog/2026/02/24/putting-a-price-on-portability</id><content type="html" xml:base="https://dtinit.org/blog/2026/02/24/putting-a-price-on-portability"><![CDATA[<p>I literally went the extra mile to produce this newsletter, going for a tapas lunch with <a href="https://www.oxera.com/people/tanja-salem/">Tanja Salem</a>, a highly regarded regulatory economist now at Oxera (formerly Director of Economics at BT Group). Over small plates, we covered a big topic: <em>whether, and in what circumstances, data holders should be permitted to recover the costs of supporting data transfers.</em></p>

<p>(Disclaimer for the skim readers: DTI is absolutely not proposing the introduction of fees in the context of user-initiated transfers of personal data.)</p>

<p><strong><em>A tricky policy question</em></strong></p>

<p>The UK is looking to be a leader in data portability initiatives, bringing forward what it calls “Smart Data” schemes in a range of sectors, through which it is hoping to replicate the success of Open Banking. These include schemes for financial services, energy, telecoms and digital markets, which could include both personal and non-personal data within their scope. In this work, the UK government faces a difficult and somewhat controversial policy question looming on the horizon: who should cover the costs of facilitating user-requested data transfers?</p>

<p>Recent regulatory precedents suggest the burden will be placed on the incumbent data holders, with the aim of minimising impediments to individuals and third-party businesses accessing the data. For example, data controllers in the EU and UK are generally not permitted to charge for user-led transfers of personal data under the data portability provisions within the GDPR, nor can “gatekeepers” under the EU’s Digital Markets Act (DMA), nor the largest banks in the UK as part of Open Banking arrangements that were triggered by the CMA’s 2017 market investigation into retail banking, and complemented by the Second Payment Services Directive (PSD2).</p>

<p>However, unlike those existing regulatory frameworks, the UK’s Data (Use and Access) Act 2025, which enables the introduction of Smart Data Schemes, <a href="https://www.legislation.gov.uk/ukpga/2025/18/section/11">explicitly leaves open the potential for data holders to charge third parties for access.</a></p>

<p>As sector-specific schemes are developed, this issue is likely to elicit some polarised views. On one side, there are strong economic arguments for incentives that encourage data holders to keep investing in data, in high-quality data transfer tools, and in unlocking data as an economic asset. On the other, some may question whether fees for data access are consistent with policy aims of promoting competition, unlocking innovation, and empowering consumers.</p>

<p><strong><em>The economist’s perspective</em></strong></p>

<p>Here is (roughly) how our conversation went…</p>

<p><strong>Tom:</strong> I enjoyed reading your <a href="https://www.linkedin.com/feed/update/urn:li:activity:7414984108763971584/">report on fees for Smart Data schemes</a>. The thing that struck me was that your proposed framework includes fees for data portability in nearly all circumstances, with data holders always permitted to recover costs. This is different to what we have seen in Open Banking regulation in the UK and the EU, or in digital markets regulation. What is your thinking behind this? Won’t we see more innovation if data is freely available?</p>

<p><strong>Tanja:</strong> Of course, demand will inevitably be higher (at least initially) if there is no charge. Economists call that pricing that facilitates “static competition”. But in return you may see lower quality supply, fewer sustainable use cases and weaker investment. Economists call that “dynamic competition” over time. It is about striking the right balance between these factors.</p>

<p>On the one hand, if data is freely available to make new scientific discoveries or provide public services, then there’ll be more opportunities for more people to contribute and create things that ultimately will benefit everyone.</p>

<p>But on the other hand, there are also good reasons why data should not be free: doing so could lead to under-investment – in data and in the products that use it. If companies must build data tools entirely at their own expense, with no financial incentives in place to generate any meaningful demand for the functionality, the natural response will be to do the bare minimum for legal compliance. This means that outcomes may suffer.</p>

<p><strong>Tom:</strong> That makes sense, though I would add that collective action can also help to address this challenge by sharing the costs of investment while delivering better outcomes. The Data Transfer Project is one example where this kind of collaboration to support reciprocal transfers has made progress without fees.</p>

<p>Tell me more about the thinking behind your framework then. Surely you can’t have monopolies profiting from policies that are intended to address their market power?</p>

<p><strong>Tanja:</strong> There are a few factors that can affect the appropriate fee level, including the type of data and the motivations for the intervention, such as addressing competition challenges. Prof. Sean Ennis of the Centre for Competition Policy at the University of East Anglia and I created a framework with three categories of pricing solutions to account for this:</p>

<p>1) where data can be shared across markets to deliver significant consumer and potentially social benefits (for example in health and transport/smart-city-type applications), without undermining data holders’ business models: transaction-cost-only pricing; <br />
2) where data sharing in competitive markets may undermine data holders’ business models: opportunity cost recovery; and <br />
3) where data sharing is required to remedy a competition problem in a market: at least transaction cost plus a margin, or a value-based element, subject to case-specific competitive assessment.</p>
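<p>As a quick aside for technically minded readers, the decision rule behind those three categories can be sketched in a few lines of code. This is purely my own illustrative encoding – the input labels and category strings are not part of the published framework:</p>

```python
# Toy sketch of the three-category pricing framework described above.
# The parameter names and return strings are illustrative labels only,
# not part of the original framework.

def pricing_category(undermines_business_model: bool,
                     remedies_competition_problem: bool) -> str:
    """Map a data-sharing scenario to an indicative pricing approach."""
    if remedies_competition_problem:
        # Category 3: at least transaction cost plus a margin,
        # possibly with a value-based element, case by case.
        return "transaction cost + margin (case-specific)"
    if undermines_business_model:
        # Category 2: the data holder recovers its opportunity cost.
        return "opportunity cost recovery"
    # Category 1: broad consumer/social benefit, business model intact.
    return "transaction-cost-only pricing"

print(pricing_category(False, False))  # transaction-cost-only pricing
print(pricing_category(True, False))   # opportunity cost recovery
print(pricing_category(True, True))    # transaction cost + margin (case-specific)
```

<p>The point of the sketch is simply that the framework turns on two questions – is a business model undermined, and is a competition problem being remedied – rather than on the volume or value of the data itself.</p>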

<p>Even in cases of addressing market power, usually permitting a fee with a reasonable margin will create incentives that drive the most efficient outcomes. There are standard ways of doing this using cost-based, benchmark-based, income-based and externalities-based valuation methods (<a href="https://www.oxera.com/insights/agenda/articles/if-data-is-so-valuable-how-much-should-you-pay-to-access-it/">here</a>’s a good overview by Oxera, for example).</p>

<p><strong>Tom:</strong> I see where you are coming from regarding incentives. But the framework could be challenging to implement – both practically and politically – in the context of digital markets, where so much of the data collected is personal, and given the regulatory frameworks already in place. Some might also argue that where companies extract substantial profits from the collection of large volumes of personal data, supporting onward transfers of that data is merely a cost of doing business.</p>

<p><strong>Tanja:</strong> I addressed these issues in my paper, which points out that when the economic benefits of participating in data sharing are not clear, it’s difficult to incentivise the provision of high-quality data products. The OECD has also pointed this out (<a href="https://one.oecd.org/document/COM/DSTI/CDEP/STP/GOV/PGC(2024)1/FINAL/en/pdf">here</a>). Of course regulation can always force supply, but as regulators in other sectors know (telecoms, utilities) that’s hard to get right and is not ideal in differentiated product markets.</p>

<p>The DMA has required the gatekeepers to provide continuous and real-time access to data to authorised third parties free of charge. How is that working out?</p>

<p><strong>Tom:</strong> If you are asking me should Article 6(9) of the DMA be viewed as a positive success story, then I would say absolutely, yes! It has been a catalyst for major progress for data portability in digital markets, the likes of which we have not seen before. But if you are asking me whether the data portability tools could have been even more effective if the gatekeepers were offered carrots as well as sticks, I would probably agree.</p>

<p>Let’s just say hypothetically that we did agree a fee was justified for creating the right incentives for data holders, wouldn’t that just kill any startup that came along trying to create a new type of service? Digital services often struggle with strong network effects, and fees for data access could really stifle the kind of innovation that policy makers are looking to unleash. From my experience at a startup data intermediary, a fee each time a user wanted to share their data would have made the business completely non-viable in those early stages. Successful data transfer takes two: by improving the incentives for data holders, won’t we reduce the ability or incentive for data recipients to participate?</p>

<p><strong>Tanja:</strong> Yes, that is certainly a risk. In new markets, where users need to experience the value of a proposition before it becomes more mainstream, initial discounts can be important, ultimately to drive volume. So initially, high input prices, even if cost reflective, might be a challenge.</p>

<p><strong>Tom:</strong> I can see that.</p>

<p><strong>Tanja:</strong> Once higher adoption is achieved, and learning effects happen in competitive markets, prices tend to come down as average costs reduce. We’ve seen this in mobile data since the iPhone’s launch in 2007, and in technologies from batteries to LEDs. So the issue is one of upfront cost when the uncertain rewards come later.</p>

<p>If both data holder and data recipient can see the potential in a proposition, teething problems can typically be resolved without regulatory intervention where there isn’t market power. Firms holding data in competitive markets will want to establish long-term partnerships with firms whose know-how can help them create viable products, potentially including low or zero entry prices as part of the business case, with future pay-offs shared between partners.</p>

<p>And even where there is market power, it’s important to ensure that investment in data and the infrastructure that supports it will continue to be funded. Where pricing ends up being imposed, it should align with incentives to achieve that, and long-term contracts can also play a role here.</p>

<p><strong>Tom:</strong> I suppose it also depends on the type of data we are talking about. I am on board with your proposed more flexible approach where data sharing could undermine business models. I would actually question whether such a requirement is justified in the first place, regardless of fee structures. For example, at DTI we have been talking a lot lately about data portability from LLMs and AI assistants, including the need to capture both sides of conversation histories. But I absolutely draw the line when it comes to underlying model weights and parameters that are the individual company’s valuable IP. Forcing the sharing of that kind of information sets a harmful precedent and would be difficult to compensate for. Is this the kind of thing you mean?</p>

<p><strong>Tanja:</strong> Here it’s likely to be harder to land on a one-size-fits-all solution. As an economist I’d say the challenge with legislation in this area is that it is hard to arrive at economically meaningful legal distinctions between different types of data. Whereas some legal distinctions have been hard-coded into the EU Data Act, there is still an opportunity for economically meaningful distinctions to be drawn in the future implementing regulations for the Data (Use and Access) Act in the UK, and potentially also the EU Financial Data Access regulation (FIDA).</p>

<p>As you say, when it comes to model weights – ultimately also information in digital form – IP rights will likely kick in. It appears that some legislation might jar with that. Forcing openness by imposing regulation that potentially interferes with IP rights or database rights is clearly highly risky for incentives to invest in these in the first place.</p>

<p><strong>Tom:</strong> Where do you see this going then? As the UK brings through firm proposals for an Open Finance Scheme, do you expect your framework for charging to be applied? And what about some of the other schemes, like digital, that are perhaps less sector-specific?</p>

<p><strong>Tanja:</strong> I certainly hope the framework we created will be useful, yes.</p>

<p>Open data initiatives such as smart cities suggest there is huge scope for voluntary open data and long-term commercial partnerships between data holders and data recipients, enabling the sharing of risk and reward. A key distinction will be the presence or absence of market power or other market failures; where these are absent, government policy and regulation should facilitate rather than determine outcomes.</p>

<p>As it spans all sectors, smart data is unlikely to be a one-size-fits-all policy – and different use cases and sectors will come with different opportunities, challenges and risks.</p>]]></content><author><name>Tom Fish</name></author><category term="policy" /><summary type="html"><![CDATA[Today’s note is a discussion with Tanja Salem on the complex topic of charging fees for data transfers in various contexts, including UK Smart Data.]]></summary></entry><entry><title type="html">Data portability - it’s not rocket science</title><link href="https://dtinit.org/blog/2026/02/10/not-rocket-science" rel="alternate" type="text/html" title="Data portability - it’s not rocket science" /><published>2026-02-10T00:00:00+00:00</published><updated>2026-02-10T00:00:00+00:00</updated><id>https://dtinit.org/blog/2026/02/10/not-rocket-science</id><content type="html" xml:base="https://dtinit.org/blog/2026/02/10/not-rocket-science"><![CDATA[<p>In an era of extraordinary technological progress – with driverless taxis navigating our roads and pop stars performing in space – any suggestion that it may not be technically feasible for one organisation to transfer data directly to another deserves far closer scrutiny.</p>

<p>Since 2018, many organizations in the EU and the UK have treated technical feasibility as a legal loophole in the obligation to support data portability through direct transfers. This element of GDPR Article 20 has never had much bite, because the words have no single clear meaning.</p>

<p>This has long been a bugbear of mine, as has regulators’ and legislators’ heads-in-the-sand approach to the issue (most recently in the <a href="https://www.edpb.europa.eu/our-work-tools/documents/public-consultations/2025/joint-guidelines-interplay-between-digital_en">draft guidelines on the interplay between the DMA and the GDPR</a>). However, when I learned that the problem is being exported to the US through State legislation such as <a href="https://le.utah.gov/~2025/bills/static/HB0418.html">Utah’s Digital Choice Act</a>, I decided it was time to revisit the topic.</p>

<p>The term ‘where technically feasible’ was added to the direct transfer component of <a href="https://gdpr-info.eu/art-20-gdpr/">GDPR Article 20</a> for very understandable reasons. Unlike the DMA, which applies to a small number of very large technology companies, the GDPR has wide application to data controllers in the EU of all different shapes and sizes. From farmers, to factories, to florists, the data portability provisions of the GDPR will likely apply if data controllers process personal data on the basis of consent or performing a contract. So, quite reasonably, authors of the GDPR inserted a carve out for organisations that would find it challenging to implement.</p>

<p>Unfortunately, this carve out is open to different interpretations, and has consequently acted as a barrier to effective data portability implementation and enforcement throughout the European Union and the UK ever since.</p>

<p><strong><em>What does ‘technically feasible’ mean?</em></strong></p>

<p>The <a href="https://www.collinsdictionary.com/dictionary/english/feasible">Collins English Dictionary</a> felt like an obvious starting point for my research: it defines feasible as <em>“able to be done or put into effect; possible”.</em></p>

<p>Finding the consensus for a commonly used phrase is also a strong use case for Large Language Models (LLMs), given they have been trained on the entire Web archive:</p>

<ul>
  <li>
    <p>ChatGPT told me that <em>“Technically feasible means that something can be accomplished using existing or attainable technology, skills, and resources, even if it may be difficult, expensive, or impractical for other reasons. In other words: it’s possible in theory and in practice from a technical standpoint.”</em></p>
  </li>
  <li>
    <p>Along similar lines, Gemini explained <em>“When someone says a project is technically feasible, they mean it is actually possible to build or implement with the technology, tools, and expertise currently available.”</em></p>
  </li>
</ul>

<p>One critical area of uncertainty that these two responses subtly highlight is whether direct transfers need to be feasible with the technology and skills an organisation currently holds, or whether technical feasibility also takes into account technology that could be procured or built relatively easily within a reasonable time frame.</p>

<p><a href="https://ec.europa.eu/newsroom/article29/items/611233">Official EU guidance</a> on the right to data portability adopted in 2016 sheds some light on the baseline requirements for the technical feasibility of direct transfers, which are:</p>

<ul>
  <li>Communication between two data controllers’ systems is possible;</li>
  <li>The transmission can take place in a secure way; and</li>
  <li>The receiving system is technically in a position to receive the incoming data.</li>
</ul>

<p>These do not appear to set a high bar.</p>

<p><strong><em>Practical interpretations</em></strong></p>

<p>There are numerous practical solutions available for facilitating direct transfers of data that can meet the above criteria. The official guidance itself provides some examples, including secure messaging, an SFTP server, a secured WebAPI, or a WebPortal, going on to emphasise that data subjects should be enabled to use personal data stores, PIMS, and other trusted third-parties.</p>

<p>Since that guidance was written a decade ago, a myriad of affordable options have emerged online for secure cloud storage, as well as commercial services that are dedicated to supporting secure transfers of large files (such as <a href="https://wetransfer.com/">WeTransfer</a>). The Data Transfer Project (DTP) has also demonstrated the art of the possible when organisations collaborate to develop interoperable, reciprocal, and scalable portability solutions based on common data models. Although not all organisations will have the resources to invest in large-scale portability initiatives like the DTP, many of the alternatives are far less complex, so any barriers to implementation must surely be non-technical (e.g. cost or lack of business incentive).</p>
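<p>To illustrate just how little machinery the baseline criteria demand, here is a minimal sketch of a direct, user-authorised server-to-server transfer over HTTPS using only the Python standard library. All URLs and tokens are hypothetical placeholders, not any real service’s API:</p>

```python
# Minimal sketch of a direct server-to-server transfer meeting the
# baseline criteria: communication between two systems, a secure channel
# (HTTPS), and a receiver able to accept the incoming data.
# The endpoint URLs and bearer tokens below are hypothetical placeholders.
import urllib.request

EXPORT_URL = "https://source.example/portability/export"       # hypothetical
IMPORT_URL = "https://destination.example/portability/import"  # hypothetical

def build_export_request(token: str) -> urllib.request.Request:
    """GET the user's archive from the source service."""
    return urllib.request.Request(
        EXPORT_URL, headers={"Authorization": f"Bearer {token}"})

def build_import_request(token: str, archive: bytes) -> urllib.request.Request:
    """POST the archive to the receiving service."""
    return urllib.request.Request(
        IMPORT_URL, data=archive, method="POST",
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/zip"})

def direct_transfer(export_token: str, import_token: str) -> int:
    """Pull from the source, push to the destination, return HTTP status."""
    with urllib.request.urlopen(build_export_request(export_token)) as resp:
        archive = resp.read()
    with urllib.request.urlopen(build_import_request(import_token, archive)) as resp:
        return resp.status
```

<p>A real implementation would add consent flows, format negotiation and error handling, but none of those steps requires exotic technology.</p>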

<p>Rather than focusing too much on the relatively straightforward question of when direct transfers of any kind are technically feasible (answer: almost always), I am more interested in considering when it is technically feasible to deliver effective data portability, e.g. through a scalable solution such as the DTP or via a purpose-built API. I’ve illustrated my thinking below by describing six plausible scenarios a data controller could find itself in where it cannot immediately support a transfer request. In each case, I’ve indicated the extent to which the barriers are technical in nature.</p>

<p><img src="/images/blog/tfmatrix.png" alt="A matrix showing six reasons why direct API transfers might not be available, ranked from highly technically feasible to technically challenging." class="blog-image" /></p>

<p>At one end of the spectrum, if organisations already have direct transfer tools such as an API at their disposal but choose to restrict their use for data portability in some way, then it would appear any barriers to effective data portability in these cases are non-technical (such as assertions of regulatory obstacles to implementation).</p>

<p>At the other end of the scale, many organisations (probably a very long tail) may simply not have the technical know-how, software, or bandwidth required to facilitate data exports in a secure and efficient manner. Technical feasibility will be a barrier to effective data portability in these circumstances. Somewhere in the middle, there will be many organisations that do not currently have data transfer tools or technology available to them, but they could realistically build or acquire them in time.</p>

<p>While it might be tricky to know where to draw the line on technical feasibility from a legal standpoint, let’s put things into perspective with a reminder of some of the amazing things that are technically feasible in 2026:</p>

<ul>
  <li>Space tourism with re-usable rockets</li>
  <li>Driverless taxis</li>
  <li>Laboratory grown meat</li>
  <li>3D bioprinting of human organs</li>
  <li>Direct communication from brains to computers</li>
</ul>

<p>How about user-led data transfers? Well, let’s just say it’s not rocket science.</p>]]></content><author><name>Tom Fish</name></author><category term="policy" /><summary type="html"><![CDATA[Building seamless, secure and scalable data transfer tools is challenging. At what point do effective direct transfers become technically feasible?]]></summary></entry><entry><title type="html">DTI’s 2025 Annual Report</title><link href="https://dtinit.org/blog/2026/01/27/annual-report-2025" rel="alternate" type="text/html" title="DTI’s 2025 Annual Report" /><published>2026-01-27T00:00:00+00:00</published><updated>2026-01-27T00:00:00+00:00</updated><id>https://dtinit.org/blog/2026/01/27/annual-report-2025</id><content type="html" xml:base="https://dtinit.org/blog/2026/01/27/annual-report-2025"><![CDATA[<p>I’m pleased to share <a href="https://dtinit.org/assets/DTI-Annual-Report-2025.pdf">DTI’s annual report</a> for last year, calendar year 2025, effectively our third full year in operation. As I write in the note, it’s remarkable how different each of those years has felt – although there is continuity in much of our work, the context in which we operate is constantly shifting, a dynamic we are certainly not alone in experiencing in this decade.</p>

<p>In our first annual report, I described 2023 as our “launch” year. Last year, I framed 2024 as our “journey.” With this year’s report, I note the accelerating pace and demands of our task as an ongoing “march” forward. We can’t know what the year ahead will bring, but I have a hunch it, too, will be quite different from what has come before. (My colleague Tom has shared <a href="https://dtinit.org/blog/2026/01/13/portability-predictions-2026">some predictions</a>, in case you missed our last note!)</p>

<p>I recommend you read the report – it’s not too long, nor I hope too long-winded. But let me provide three bullet points to summarize:</p>

<ul>
  <li><strong>We shipped.</strong> We continued our work on the core Data Transfer Project open-source toolkit for simple and secure end-to-end data transfers, but our pilot <a href="https://dt-reg.org/">Data Trust Registry</a> project rose above DTP in widespread visibility as we shipped infrastructure, secured a platform partner, and signed up several organizations to our trust levels 1 and 2.</li>
  <li><strong>We showed up.</strong> Our goal is to be at the table whenever data portability is on the agenda. We organized workshops and events and spoke at conferences. We submitted comments to governments in multiple jurisdictions. And we published original work on critical issues, including real-time portability and AI.</li>
  <li><strong>We stayed on target.</strong> The world is changing, and the landscape for portability is evolving and growing with it. In 2025 we grew to meet that, adding new headcount and new affiliates. Yet our mission remains our north star: to empower people by building a vibrant ecosystem for simple and secure data transfers. As we increase our work on topics like trust and AI that were ancillary at our beginnings and are now central, our purpose does not and will not change.</li>
</ul>

<p>This year is poised to be even bigger for our work than any of the years thus far. And we’re ready for it. Stay tuned.</p>]]></content><author><name>Chris Riley</name></author><category term="news" /><summary type="html"><![CDATA[We’re pleased to share the annual report for 2025, our third full year in operation, in which we reflect on our impact within the ecosystem. Enjoy!]]></summary></entry><entry><title type="html">Predictions for the 2026 edition of data portability unwrapped</title><link href="https://dtinit.org/blog/2026/01/13/portability-predictions-2026" rel="alternate" type="text/html" title="Predictions for the 2026 edition of data portability unwrapped" /><published>2026-01-13T00:00:00+00:00</published><updated>2026-01-13T00:00:00+00:00</updated><id>https://dtinit.org/blog/2026/01/13/portability-predictions-2026</id><content type="html" xml:base="https://dtinit.org/blog/2026/01/13/portability-predictions-2026"><![CDATA[<p>January provides a natural break point to reflect on progress and to set new goals and plans for the year ahead. As I do this, it has got me excited about what a transformational year 2026 will be for data portability as a policy area in the tech sector, driven by a combination of regulatory interventions, technological advancements, and market developments. So I thought I would jump on the bandwagon and give you my predictions for portability related developments to look out for in 2026.</p>

<p>So here they are…</p>

<ol>
  <li>
    <p><strong>OpenAI will be designated as an ‘emerging gatekeeper’ in the EU.</strong>  Following the one year review of the Digital Markets Act (DMA), I expect the European Commission to move towards classifying ChatGPT as a “Virtual Assistant”, then applying a targeted subset of the DMA provisions to the service (including Articles 6(9) and 6(10)). As a result, I predict ChatGPT will implement a data portability API, supporting ongoing developer access to daily downloads of users’ data such as conversation histories. In a rapidly evolving technology context, we expect lots of discussion of scope, and anticipate greater visibility into <a href="https://dtinit.org/blog/2025/08/26/path-forward-AI-portability">DTI’s AI portability principles</a>. The Virtual Assistant CPS may also be applied more widely to existing gatekeepers that operate AI powered chatbots, thereby expanding the scope of data included in their data portability tools.</p>
  </li>
  <li>
    <p><strong>The UK will introduce a Smart Data Scheme for digital markets.</strong> This prediction is really about the when rather than the if, and I am expecting a rapid timeline from the Department for Science, Innovation and Technology (DSIT), with the necessary secondary legislation in place before the end of the year (just). That would be an extremely ambitious timeline but, for a government motivated by economic growth (and with no money to spend), it makes sense to move quickly.</p>
  </li>
  <li>
    <p><strong>DTI’s Data Trust Registry will reach critical mass on both sides.</strong>  Perhaps this is a bit of a cheat prediction. After all, it is something I am looking to influence directly, and we have already made significant progress in this direction. But nonetheless it feels sufficiently noteworthy for inclusion. I estimate that the tipping point for overcoming the ‘chicken and egg’ effects of this two-sided platform we are building will be three large platforms relying on <a href="https://dt-reg.org/">our Registry</a> for verification, and 30+ services registered and listed and set up for annual review. I am confident we will get there comfortably in 2026.</p>
  </li>
  <li>
    <p><strong>Data portability use cases will be proven as commercially viable.</strong>  I’m certainly not predicting an explosion of use cases with billions of users - I will save that prediction for 2027! But in this coming year I expect to see two very important developments for data portability use cases. First, a few of the most promising startups built on top of data portability APIs will become profitable and/or attract major inward investment (though we may be too early to start talking about exits). Second, a handful of existing businesses with established userbases will incorporate portability as a new feature into their offerings to enhance personalisation of their service. These early success stories will prove useful for regulators around the world examining digital market regulation.</p>
  </li>
  <li>
    <p><strong>Portability will be trialled to support consented personalisation of ads.</strong>  Perhaps as a subset of the prediction above, I foresee some ad-funded apps will trial the use of data portability APIs to power their ad targeting as a replacement for cross-site or cross-app tracking. For example, apps might request that their users share their personal data from other platforms such as Facebook, YouTube or the App Store, with explicit consent that it could be used to serve them relevant ads. Gaming seems a strong candidate for this move, given that games providers could incentivise users to share their data by giving them non-financial rewards such as extra lives or other in-game features that players might otherwise purchase through in-app payments.</p>
  </li>
  <li>
    <p><strong>Personalised audio content will grow in popularity.</strong>  The shift towards AI generated audio content going mainstream started last year, with AI generated music <a href="https://www.billboard.com/lists/ai-artists-on-billboard-charts/">entering the charts and capturing headlines</a>. While I’m sure that controversial trend will continue this year, I expect another (slightly conflicting) shift to take place that could make charts less relevant altogether. People are going to start creating and listening to their own music and podcasts, produced at the click of a button by AI powered services to their own tastes and interests. This development will drive new demand for data portability from major streaming platforms, with new AI powered content creation services seeking access to their users’ past listening, viewing and searching data to support creation of highly personalised content.</p>
  </li>
  <li>
    <p><strong>Europe will continue to lead the way on data portability.</strong>  Somewhat disappointingly, unless prediction four comes to fruition much sooner than expected, I don’t think we will reach the tipping point in 2026 where it is a no-brainer for all major tech platforms to support global availability of data portability APIs. Instead, I think we will see continued divergence between Europe (EU and UK) and the rest of the world. There will be some progress elsewhere (I hope to see at least one more existing portability API made available in the US for example) but regulatory uncertainty will stifle incentives, and the demand from developers and users in key markets such as the US will not be vocal enough (yet). As we strive to build a thriving global data portability ecosystem, this is one prediction we will be actively looking to disprove.</p>
  </li>
</ol>

<p>I look forward to returning to these at the end of the year to see how well I did. In the meantime, get in touch to let me know what you think, or even to share some predictions of your own!</p>]]></content><author><name>Tom Fish</name></author><category term="policy" /><summary type="html"><![CDATA[The year ahead will be transformational for data portability. Here are some predictions from our point of view at DTI.]]></summary></entry><entry><title type="html">Our Favorite Things</title><link href="https://dtinit.org/blog/2025/12/16/dti-eoy-unwrapped" rel="alternate" type="text/html" title="Our Favorite Things" /><published>2025-12-16T00:00:00+00:00</published><updated>2025-12-16T00:00:00+00:00</updated><id>https://dtinit.org/blog/2025/12/16/dti-eoy-unwrapped</id><content type="html" xml:base="https://dtinit.org/blog/2025/12/16/dti-eoy-unwrapped"><![CDATA[<p>Hello,</p>

<p>We made it! It’s the end of 2025, and wow, what a year it has been.</p>

<p>I hope you all are able to take a breath, step back from the work, and get some rest as the year winds down.</p>

<p>We here at DTI wanted to gift you some of our favorite things – things we’ve read this year that really mattered or opened our eyes, things we’ve learned we want to share, and even some personal tidbits we just couldn’t help but include to hopefully bring you some of the joy it brought us.</p>

<p>Enjoy and the happiest of holidays to you from all of us!</p>

<p>Chris, Lisa, Delara, Tom, Aaron, and Jen <br />
Your Data Transfer Initiative Team</p>

<p><strong>Our Favorite Things</strong></p>

<p><strong>Chris Riley, Executive Director</strong></p>

<ul>
  <li>I’ve spent a ton of time this year working on and thinking about AI. I could create a long list of recommendations just on that subject. But I’ll keep it to one, and my apologies to the many left-behinds: Mustafa Suleyman’s writings on <a href="https://mustafa-suleyman.ai/seemingly-conscious-ai-is-coming">Seemingly Conscious AI</a>. As modern AI grows, so do its problems. (For a bonus, check out the <a href="https://www.techpolicy.press/we-need-to-control-personal-ai-data-so-personal-ai-cannot-control-us/">AI portability principles</a> we’ve published at DTI, to help people stay in control of the AI future.)</li>
  <li>In a year of significant, yet at times swirling, winds for data policy in Europe, I’ll highlight the <a href="https://www.gov.uk/guidance/data-use-and-access-act-2025-data-protection-and-privacy-changes">UK’s Data (Use and Access) Act</a>, which was fully adopted this year. The expansion of Smart Data into other sectors, including potentially the digital sector, will lead to a great increase in data transfers in practice, and DTI will be there to support.</li>
  <li>Shifting to the personal, I think this year saw an increase in fans for one of the greatest sources of unalloyed joy in my life: the TV show “<a href="https://www.nytimes.com/2025/07/02/dining/somebody-feed-phil-rosenthal.html?searchResultPosition=1">Somebody Feed Phil</a>.” Not as serious as Anthony Bourdain’s travel food shows, though inspired by them, Phil celebrates the positive and the wonder in good food wherever he goes, and the people, history, and culture behind the food.</li>
</ul>

<p><strong>Lisa Dusseault, CTO</strong></p>

<ul>
  <li>Favorite non-fiction book read this year: <a href="https://en.wikipedia.org/wiki/The_Unaccountability_Machine">The Unaccountability Machine</a>. Favorite new fiction series: <a href="https://en.wikipedia.org/wiki/Dungeon_Crawler_Carl">Dungeon Crawler Carl</a> –  I devoured these at the beginning of 2025.</li>
  <li>DjangoCon, in Chicago, where I spoke in October, was a terrific experience with friendly, supportive people, all willing to share their knowledge. Standout talk: <a href="https://www.youtube.com/watch?v=Ws9lNrrK8dw&amp;list=PL2NFhrDSOxgUSZVGkmbMhUpaaZ1ORfpCl&amp;index=17">AI Modest Proposal</a> by Mario Munoz.</li>
  <li>Favourite new DTI collaborators: getting to know <a href="http://Fabric.io">Fabric.io</a>, <a href="http://koodos.com">Koodos</a> and <a href="http://inflection.ai">Inflection AI</a>, all of whom became DTI affiliates this year. I love learning about startups’ journeys and their vision.</li>
</ul>

<p><strong>Tom Fish, Head of Europe</strong></p>

<ul>
  <li>On the policy side of things, a personal highlight of the year for me was the UK government <a href="https://www.gov.uk/government/calls-for-evidence/smart-data-opportunities-in-digital-markets">consultation</a> on whether and how to implement a Smart Data scheme for digital markets. Having originally proposed this idea a few years ago in a former role, it reaffirmed my personal public policy mantra: <em>“it can all start with a <a href="https://gener8ads.com/blog/open-digital-an-entirely-unoriginal-idea/">blog</a>”.</em></li>
  <li>When selling the value of data portability to policy makers and politicians over the years, I have often called on the expression <em>“if you build it, they will come”</em>, slightly adapted from the 1989 classic <a href="https://www.imdb.com/title/tt0097351/">Field of Dreams</a>. At an event in London on Context Portability for AI Agents, hosted alongside two DTI members Google and Fabric, I officially retired this movie reference, as data portability use cases are finally emerging from the cornfield. (<a href="https://www.youtube.com/watch?v=qyYT-O9lxTU">Watch here from 28:00 onwards</a>).</li>
  <li>I’m sure this will also appear in all of my colleagues’ lists, but my biggest highlight of 2025 is of course joining DTI at the start of the year. As I said in <a href="https://dtinit.org/blog/2025/03/25/DTI-in-Europe">my first DTI newsletter</a>, <em>“my joining DTI feels like it was a foregone conclusion since my first conversation with its Executive Director Chris Riley in July 2023.”</em> It hasn’t disappointed!</li>
</ul>

<p><strong>Jen Caltrider, Director of Research &amp; Engagement</strong></p>

<ul>
  <li>My favorite dense slog of an academic read that actually changed my world view on how data portability could change the world for good would be this paper <a href="https://cdn.vanderbilt.edu/vu-URL/wp-content/uploads/sites/356/2025/05/25192846/Fenwick-FINAL.pdf">Data Portability Revisited: Toward the Human-Centric, AI-Driven Data Ecosystems of Tomorrow</a>. Don’t let the hefty wordcount of an academic paper fool you, it’s really quite good. And if you don’t have time to read it all, skim the first parts and then really read section IV on Portability Reimagined.</li>
  <li>As a consumer privacy advocate, I spent 2025 really trying to figure out how data portability could make life better for people and give them back control over their data and their privacy. The best person I’ve found out there already talking about the potential for this future is Jamie Smith, who writes the <a href="https://www.customerfutures.com/">Customer Futures</a> substack newsletter. Please, go check it out, and I’d recommend starting <a href="https://www.customerfutures.com/p/a-data-portability-earthquake-is">here</a> (which I know is part 2, you should also read part 1 and then go from there).</li>
  <li>My “fun” read recommendation probably isn’t exactly all that fun to read, as it’s a Harvard Law Review article from 1890. But, stay with me here, it’s actually really, really cool. Back then, some smart legal types (<em>Warren and Brandeis</em>) wrote a paper called <a href="https://groups.csail.mit.edu/mac/classes/6.805/articles/privacy/Privacy_brand_warr2.html">The Right to Privacy</a> for the Harvard Law Review. In it, they defined privacy as the “right to be let alone.” People, we all deserve the right to be let alone in the AI age! Let’s embrace this definition of privacy please!</li>
</ul>

<p><strong>Delara Derakhshani, Director of Policy</strong></p>

<ul>
  <li>One of my favorite things this year has been a renewed interest in and ongoing momentum for data portability – in the U.S., around the world, and increasingly across new sectors. Part of my work at DTI is to track this, so I offer to you the tracker from <a href="https://dtinit.org/blog/2025/07/29/data-portability-regulatory">this July</a>. We are at a turning point for data portability and I’m excited to be part of this journey.</li>
  <li>Translating the technical work of portability to the everyday is tricky. Early in 2025 I wrote <a href="https://dtinit.org/blog/2025/01/14/what-ban-data">this blog post</a> that I wanted to highlight as a step forward. It spells out nicely and succinctly why data portability should matter to you, using the threat of a TikTok ban as an example.</li>
  <li>On the personal front, I’ve enjoyed spending more time with my mini-dachshund, Charlie Derakhshani, recently. Those of you who own this unique breed know that they ooze love and never leave your side – but that they would also probably happily trade you for a sandwich if they had the chance.</li>
</ul>

<p><strong>Aarón Ayerdis Espinoza, Software Developer</strong></p>

<ul>
  <li>This year I attended FediForum online, and among all the topics presented, there was a fantastic moment when <a href="https://www.imdb.com/es/name/nm5810189/">Elena Rossini</a> appeared (I was very surprised to see her, several years after watching a documentary she directed), presenting a <a href="https://www.youtube.com/watch?v=p9c2f63pIag">short film</a> explaining how the Fediverse works.</li>
  <li>Reading articles about data portability and discussing the importance of our digital rights at social gatherings with former colleagues and classmates has been a fun experience. It’s a topic that has largely gone unnoticed, even by people working in the same field as me. You should try talking about portability with your friends - they may never have thought about whether they can move their data.</li>
</ul>]]></content><author><name>The DTI Team</name></author><category term="engagement" /><summary type="html"><![CDATA[Enjoy your holidays with the gift of our favorite things from 2025.]]></summary></entry><entry><title type="html">The DMA-GDPR joint guidelines - new answers bring new questions</title><link href="https://dtinit.org/blog/2025/12/02/dma-gdpr-joint-guidelines" rel="alternate" type="text/html" title="The DMA-GDPR joint guidelines - new answers bring new questions" /><published>2025-12-02T00:00:00+00:00</published><updated>2025-12-02T00:00:00+00:00</updated><id>https://dtinit.org/blog/2025/12/02/dma-gdpr-joint-guidelines</id><content type="html" xml:base="https://dtinit.org/blog/2025/12/02/dma-gdpr-joint-guidelines"><![CDATA[<p>I am almost ready to hit send on DTI’s draft response to the EC and EDPB’s joint guidelines on the interplay between the Digital Markets Act and the General Data Protection Regulation. This newsletter gives you a flavour of the topics we have focused on, and why.</p>

<p>As an (entirely self-proclaimed) specialist in technology policy at the intersection of competition and data protection, the opportunity to feed into this document is as good as it gets for me. Strictly speaking, in my view, at least for the data portability sections that I examined, the document goes a little beyond its stated scope of the interplay between the two regulations, also providing some useful detail on how the DMA provisions themselves should be interpreted by gatekeepers. This is a good thing, if a little late in the day!</p>

<p>Stating the obvious, and helpfully so, the guidelines confirm that Article 20 of the GDPR and Article 6(9) of the DMA are complements to one another. They also clarify how compliance with Article 6(9) of the DMA fits within the framework of the legal responsibilities placed on gatekeepers by the GDPR. This is welcome and should provide additional confidence to all participants within the data portability ecosystem going forwards.</p>

<p>Beyond this, the guidelines also provide some additional practical detail on how the data portability provisions in DMA Article 6(9) should be implemented by gatekeepers, addressing several topics that have been the source of lengthy and sometimes polarised debates over the last two years.</p>

<p>Of these new details, there are many areas where the merits of the policy direction could (and may continue to) be hotly debated, even though the intent and meaning of the guidance itself is pretty clear. For example, the guidelines set out a fairly explicit position on the treatment of other users’ personal data in the context of a data portability transfer. Many will agree with the position, just as many won’t. But most will understand what the text means.</p>

<p>Then, there are a smaller number of areas where the policy issue itself need not be particularly controversial, but the intent of the guidance appears to be open to various interpretations. Rather than answering questions, some sections of the text appear to pose new ones.</p>

<p>Given the objective of the guidance to promote a “consistent and coherent interpretation of the DMA and the GDPR”, I have focused on this latter category of issues where further clarity is needed.</p>

<p>The first of these areas is the interplay between Article 20 of the GDPR and Article 6(9) of the DMA. The document spends several pages detailing how the DMA’s data portability provisions should be interpreted, but the equivalent provisions in the GDPR are almost entirely overlooked. This is a shame, and feels like a real missed opportunity to provide some much needed clarity around the circumstances where a data controller should support direct transfers under Article 20. In particular, a few sentences covering what “where technically feasible” means in practice could be a game changer for the prospects of widespread user-led data transfers. After all, the world of technology has moved on a fair bit since the GDPR was drafted, so perhaps a refresh of thinking is needed beyond the gatekeeper seven.</p>

<p>The second issue we have highlighted is the guidance on the meaning of “continuous and real time”. There is some new detail on how to interpret this requirement, but I’m not convinced the new words are any less open to interpretation than the ones we already had.</p>

<p>In our response, we have encouraged an approach that is context specific and keeps user needs central, which could draw from the <a href="https://dtinit.org/blog/2025/11/04/what-does-real-time-mean">recent research by DTI Summer Fellow Thomas Carey-Wilson</a> that presented a Functional Real-Time framework to help conceptualise latency and speed in data portability.</p>

<p>The third issue we are drawing attention to is Trust. As anyone who has followed my writing on this topic will know, this is an area where DTI has skin in the game. I have previously highlighted the fact that <a href="https://www.techpolicy.press/building-trust-for-data-portability-within-the-dma-framework/">the DMA was unhelpfully silent on Trust</a>, so it is welcome that the guidelines now explicitly recognise the need for gatekeepers to onboard third parties. This includes requesting identity documents, as well as requiring robust authentication processes integrated into each data transfer request. However, they don’t go any further than that, appearing to rule out the placement of any other guardrails, and completely omitting any reference to the two big ‘C words’:</p>

<ul>
  <li><strong>Criminals:</strong> the guidelines state that “Gatekeepers can therefore not restrict, in any way, the data portability use cases and business purposes that authorised third parties can pursue with the data they receive under Article 6(9) DMA.” While I firmly agree with the spirit (and what I believe to be the intent) of this sentence, it oversimplifies the issue. What about use cases that are illegal? What about third parties that are suspected to be criminal enterprises, or even state actors? If the European Commission wants this guidance to be useful and credible, it needs to be more exhaustive about acceptable vetting procedures, and more explicit that blocking such applications is necessary, and that some basic checks to identify them are expected.</li>
  <li><strong>Consent:</strong> the guidance seems to suggest that gatekeepers must not do any checks in the onboarding process to validate that third parties intend to obtain valid consent, or even consent of any kind at all. Such checks can be very straightforward and light touch, by comparing the organisation’s privacy policy with an image or mock up of their consent screen. The benefits of doing this are immediately obvious – blatantly dishonest and deceptive businesses can be blocked, while the standard of consent is nudged upwards as third parties try harder in the knowledge that someone is checking their homework.</li>
</ul>

<p>As DTI is finalising the processes and documentation for our <a href="https://dt-reg.org/">Data Trust Registry</a>, you can be absolutely certain that a proportionate review of third-parties’ approach to consent will be a core component, as will the aim of blocking criminals’ access to user data. I’d suggest this is fully aligned with the complementary goals of data protection and market contestability. Don’t you agree?</p>

<p>The <a href="https://www.edpb.europa.eu/our-work-tools/documents/public-consultations/2025/joint-guidelines-interplay-between-digital_en">consultation</a> closes in two days, so you still have time to get involved and offer up your own views to these questions.</p>]]></content><author><name>Tom Fish</name></author><category term="policy" /><summary type="html"><![CDATA[The consultation closes in two days. In this email I preview what DTI is going to submit, focusing on direct transfers, trust, consent, and AI.]]></summary></entry><entry><title type="html">Quick Hits from DTI</title><link href="https://dtinit.org/blog/2025/11/18/quick-hits" rel="alternate" type="text/html" title="Quick Hits from DTI" /><published>2025-11-18T00:00:00+00:00</published><updated>2025-11-18T00:00:00+00:00</updated><id>https://dtinit.org/blog/2025/11/18/quick-hits</id><content type="html" xml:base="https://dtinit.org/blog/2025/11/18/quick-hits"><![CDATA[<p>I have three items to share from our recent work at the Data Transfer Initiative: two recently published external articles, and an update on network growth. Enjoy!</p>

<h3 id="consent-and-portability-piece-by-tom-fish-at-tech-policy-press">Consent and portability piece by Tom Fish at <em>Tech Policy Press</em></h3>

<p>Today, <em>Tech Policy Press</em> published “<a href="http://techpolicy.press/data-portability-can-restore-real-consumer-choice-between-consent-or-pay-offerings-online/">Consent, pay or port</a>” by DTI Head of Europe Tom Fish. Tom digs into a privacy question that has been at a roiling boil in Europe for some time: whether current modalities of consent to data collection and use in order to use a service without payment are appropriate. Some companies – including Meta, one of DTI’s founding members – offer paid access to their services, and in some circumstances, also options for less personalized advertising. Tom identifies an orthogonal issue that is critical for meaningful consent: whether or not users can effectively port their data between services. Data portability is a necessary condition for empowering users and making sure that consent, and particularly the withdrawal of consent, is meaningful. Portability helps make markets work, and work to serve the interests of consumers.</p>

<h3 id="ai-and-privacy-piece-by-jen-caltrider-at-fast-company">AI and privacy piece by Jen Caltrider at <em>Fast Company</em></h3>

<p>DTI Director of Research and Engagement Jen Caltrider has a new piece in <em>Fast Company</em> this week entitled “<a href="https://www.fastcompany.com/91435189/ai-privacy-openai-tracking-apps">AI is killing privacy. We can’t let that happen</a>.” In it, Jen writes of the often-overlooked significance of the printing press in the history of privacy – how books give us space to read and to think in solitude. She proposes that in the emerging era of AI – one already marked by, let’s say, less-than-ideal levels of user empowerment over personal data – we look to data portability, “the underdog of privacy rights”, as the lever that we need to change the future of privacy for the better. Check it out!</p>

<h3 id="new-affiliates">New affiliates</h3>

<p>DTI is a membership organization, but a social welfare variant, not a trade association; our structure is one of our superpowers, in my view, as it lets us be grounded and aspirational in equal parts. I wrote <a href="https://dtinit.org/blog/2024/07/16/working-with-industry">a fairly extensive piece</a> last summer about how and why we work with industry to put real solutions into real people’s hands, while maintaining strategic independence and unwavering dedication to our mission of empowering people through data portability.</p>

<p>For some months now, <a href="https://dtinit.org/partners">our website</a> has listed our organizational partners, including founding members Apple, Google, and Meta and partners Amazon and ErnieApp. We’ve also long listed some other organizations with which we have various levels of association: the European Internet Forum and the World Wide Web Consortium, both with long-established membership structures, along with FediForum and the Trust Over IP Foundation.</p>

<p>But our network is broader than even these data points indicate. And in particular, we’ve begun identifying industry collaborators who provide immense support and alignment on specific projects, and with whom we’ve decided to codify a formal relationship as “affiliates.” Our website now lists <a href="https://inflection.ai/">Inflection AI</a>, which joined last year as our first affiliate, as well as <a href="https://onfabric.io/">Fabric</a> and <a href="https://koodos.com/">Koodos</a>. We’re delighted to have them on board the DTI train, and are looking forward to continued collaboration.</p>

<p>If you’re reading this and thinking, hey I want to get in on that – there’s a “Contact Us” link at the bottom of our website, or here: <a href="https://dtinit.org/contact-us">send us a note</a>!</p>]]></content><author><name>Chris Riley</name></author><category term="news" /><summary type="html"><![CDATA[Sharing three items from our recent work at the Data Transfer Initiative - two recently published external articles and an update on network growth.]]></summary></entry></feed>