<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://dtinit.org/feed.xml" rel="self" type="application/atom+xml" /><link href="https://dtinit.org/" rel="alternate" type="text/html" /><updated>2026-03-10T15:04:08+00:00</updated><id>https://dtinit.org/feed.xml</id><title type="html">Data Transfer Initiative</title><subtitle>Home page for the Data Transfer Initiative, a nonprofit organization dedicated to promoting data transfers</subtitle><entry><title type="html">A turning point for AI portability</title><link href="https://dtinit.org/blog/2026/03/10/turning-point-AI-portability" rel="alternate" type="text/html" title="A turning point for AI portability" /><published>2026-03-10T00:00:00+00:00</published><updated>2026-03-10T00:00:00+00:00</updated><id>https://dtinit.org/blog/2026/03/10/turning-point-AI-portability</id><content type="html" xml:base="https://dtinit.org/blog/2026/03/10/turning-point-AI-portability"><![CDATA[<p>Two years ago, I <a href="https://dtinit.org/blog/2024/01/02/portability-predictions">made a prediction</a> that in data portability, supply would exceed demand. The GDPR helped create a universal expectation that people should be able to – at the very least – download their data. (To be clear, Article 20 also requires companies to directly transfer data as well, though that hasn’t manifested as much in practice.) Companies around the world have adopted various forms of data download functions, whether it’s a few clicks within a website and an archive is emailed to the user, or a form that must be filled out.</p>

<p>Personal data is immensely valuable. And much of that value has not yet been unlocked. It’s great to be able to get a copy of your data for your own archives and reference, and for switching services. But the real magic happens downstream, with vertical innovation: building tools and services that can create new value from that data, including in integration across its origins.</p>

<p>The tide is turning. Awareness of digital footprints has been growing for years, and now, everyday people are learning more about what that means and how it can help them do things with their data. The fundamental creativity of technology builders is awakening. In parallel, people are generating more and more personal data, and consequently more potential downstream value. A huge factor amplifying these effects is AI.</p>

<p>Regular readers of this outlet know that DTI has been emphasizing the importance of personal data portability in the context of AI for <a href="https://dtinit.org/blog/2023/11/21/future-AI-portable">quite</a> <a href="https://dtinit.org/blog/2024/06/04/digging-in-personal-AI">some</a> <a href="https://dtinit.org/blog/2025/02/11/future-of-AI-portability">time</a>. These have been predictions of what’s to come and guidance on how to shape the future, drawn largely from my work and experience in the global tech sector, and my understanding of the possibilities of technology and how it can be both used and controlled. Three dynamics are emerging to validate this outlook:</p>

<ol>
  <li><strong>Developers are building tools</strong> to get value out of personal data – independent developers and builders, not just large corporations – both with and through AI. Check out the reception for <a href="https://www.linkedin.com/pulse/worlds-first-ai-portability-hackathon-chris-riley-urq8c/">the world’s first AI portability hackathon</a>, which DTI recently helped organize.</li>
  <li>For better or worse, people are freely adopting tools like OpenClaw and giving them access to all of the personal data they possess, including their local files and access credentials to remote services. This is a wildly insecure path, but it is being widely pursued nevertheless, because <strong>the value is there</strong>.</li>
  <li>People are making choices about which AI service to use not based on performance but values, including flash reactions to news developments. And when they decide to switch, <a href="https://claude.com/import-memory">service providers</a> and <a href="https://www.instagram.com/reels/DUthzqTj7tx/">internet</a> <a href="https://www.threads.com/@mike.allton/post/DVTYKFOlfgV?xmt=AQF0bsUvkTb5Uv24HfVZtRjA9ckdimArxaonCJqpvtljeeyHGAsrwLGVIdtsoMpiFwAAfpE&amp;slof=1">commenters</a> are walking them through the best currently available pathways to <strong>transfer their data over</strong>.</li>
</ol>

<p>My colleague Tom <a href="https://dtinit.org/blog/2026/01/13/portability-predictions-2026">offered a prediction</a> this year as well: “Data portability use cases will be proven as commercially viable.” At the hackathon in late February, there was at least one angel investor present to look for opportunities. I think Tom’s right, and alongside that, there will be rapid acceleration on the growth curve of demand for and adoption of data portability and personal data use, in and with AI.</p>

<p>Why is AI accelerating portability? First, it helps people prototype technologies based on little more than a concept, reducing technical knowledge and experience barriers. Opinions vary on whether “vibe coding” and similar AI-assisted development can substitute for production-quality or long-term maintainable software. However, it’s hard to deny that it makes it easier to test out ideas and hypotheses.</p>

<p>Second, it unlocks new recommendation and suggestion power based on user tastes. While this is perhaps fairly basic functionality, it’s incredibly valuable to help someone identify new music they might want to listen to, restaurants they might want to try, or products they might want to purchase – both to the individual and to the enterprise. If, as posed by Eric Seufert, <a href="https://mobiledevmemo.com/everything-is-an-ad-network/">“everything is an ad network”</a>, then everything must also be a potential data portability use case.</p>

<p>Finally, AI interactions are themselves a new source of interesting and valuable data. People talk with their chatbots about lots of things. While some of this data can be extremely personal and sensitive, lots of it also can be extremely valuable, as we know from the ways in which it is used in fine-tuning and improvements to the AI service itself. These same learnings and personalizations are of use in many other contexts as well.</p>

<p>But, how much is this last part true in practice? What form is portability of personal AI data taking today? Are the current tools and methods making the right data available? Will there be trust mechanisms in place, or will users be encouraged (or misled) to transfer potentially sensitive chat histories to new services without safeguards?</p>

<p>DTI has <a href="https://dtinit.org/blog/2025/08/26/path-forward-AI-portability">articulated our principles</a> for how it should work in practice. TL;DR: <strong>We aren’t there yet.</strong> Portability demand is growing. Can the supply keep up?</p>

<p>It’s great to see experimentation with memory transfers, as Anthropic is doing. I appreciate that you can still export your raw personal data from Claude as well – as you can from ChatGPT and other AI services. I hope, but cannot be certain, that this will continue. And the direct transfer of such data, as articulated in GDPR Article 20, typically remains a work in progress, with few exceptions. In the age of possibility brought about by modern AI, I struggle to imagine that <a href="https://dtinit.org/blog/2026/02/10/not-rocket-science">technical feasibility</a> could be a plausible barrier.</p>

<p>Trust is missing here as well. Our <a href="https://dt-reg.org/">trust registry project</a>, nearing the end of its <a href="https://dtinit.org/blog/2025/10/07/announcing-data-trust-registry">pilot phase</a>, vets third-party recipients of direct transfers of personal data to help protect people – checking that their data will not be stored insecurely or abused, and that relevant consent mechanisms meaningfully reflect what the company will do with the data.</p>

<p>Contrast DTI’s trust work with the realities of OpenClaw, which Simon Willison has described as the technical development most likely to result in a “<a href="https://simonwillison.net/2026/Jan/30/moltbook/">Challenger disaster</a>.” People are wantonly opening their local drives and connecting their access credentials to AI agents they not only do not actively control, but in many cases do not understand.</p>

<p>I have confidence in DTI’s partners and affiliates, who together lead on data portability in all its implementations. Joining us in our work means supporting our mission: “Empower people by building a vibrant ecosystem for simple and secure data transfers.” These companies make personal data available through many methodologies, including downloads, Data Transfer Project-powered direct transfers, and APIs. With our affiliate Inflection, we <a href="https://dtinit.org/blog/2024/08/26/inflection-AI-portability">shipped a data model</a> for conversation histories designed to maximize effective reuse. And our affiliates Fabric and koodos are building new tools and open ecosystems around personal context portability in AI, including <a href="http://context-use.com/">this brand new context-use tool</a> from Fabric. Context-use is a local, open source tool that converts user archives like full ChatGPT conversations and Instagram stories into personal context for agents like OpenClaw. In this way, agents are able to use full personal context safely without accessing primary user accounts.</p>
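<p>To make the pattern concrete, here is a minimal sketch of the kind of transformation such a tool performs. The input shape is a simplified assumption for illustration – not the actual ChatGPT export schema, and not the context-use implementation:</p>

```python
import json

def archive_to_context(export_path, max_chars=4000):
    """Flatten an exported conversation archive into a plain-text context
    block that a local agent can read -- without touching the user's live
    account. The JSON shape assumed here is simplified for illustration."""
    with open(export_path, encoding="utf-8") as f:
        conversations = json.load(f)
    lines = []
    for convo in conversations:
        lines.append(f"## {convo.get('title', 'Untitled')}")
        for message in convo.get("messages", []):
            lines.append(f"{message['role']}: {message['content']}")
    # Truncate so the derived context fits within an agent's prompt budget.
    return "\n".join(lines)[:max_chars]
```

<p>Because the agent reads only this derived file, it never needs the user’s account credentials – the property that distinguishes this approach from handing an agent the keys to a primary account.</p>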

<p>But I am worried about a reversion to historical patterns of trapping users in online services by their own data. Where there is money to be made, there is incentive to capture as much of it as possible. The question I asked in November 2023 has not been fully answered: “whether the future of generative AI will lock users into new technology silos, or empower them by ensuring portability.”</p>

<p>I’m also worried about privacy and security problems that could arise from an ecosystem of data movement that develops without collaboration and considerations of trust. In other portability contexts, great care is taken in scoping the data made available and in user understanding of the transfer and its safety. Without substantial investment in and coordination of portability, more problems – avoidable problems – will occur.</p>

<p>I’m not the only one thinking about the risks of consolidation and security in data flows. Regulation is on the horizon. In the EU’s recent DMA review process, <a href="https://www.openmarketsinstitute.org/publications/open-markets-submits-review-of-the-digital-markets-act-considerations-on-cloud-and-ai">Open Markets Institute</a> and other commentators explicitly called on the European Commission to designate virtual assistants and chatbots under the DMA. Megan Kirkwood at Tech Policy Press wrote <a href="https://www.techpolicy.press/will-the-eu-designate-ai-under-the-digital-markets-act/">an overview of the issue</a>.</p>

<p>In the United States, at the state level at least, there is ample regulatory appetite. In 2025 alone, <a href="https://www.uschamber.com/technology/the-hidden-cost-of-50-state-ai-laws-a-data-driven-breakdown">more than 1100 AI-related bills</a> were introduced in U.S. states. The <a href="https://ash.harvard.edu/resources/utah-digital-choice-act-reshaping-social-media/">Digital Choice Act</a> in Utah, although it is not without controversy and challenge in implementation, includes substantial data portability obligations for social media services; and similar laws have been proposed in several other states. It’s not hard to imagine these two forces coming together.</p>

<p>DTI doesn’t take a position on regulatory matters, and we recognize that these are complex issues and regulation inherently involves tradeoffs. But we also recognize that regulation in some form is inevitable, regardless of one’s views on the merits.</p>

<p>Now is the time to get a head start on building portability infrastructure in AI the right way – together. We can, and should, collaborate on shared tools and methodologies to export and import personal data in AI, including both conversation histories as well as higher-level memories and contexts. It won’t take radical new engineering. Just the space and collective will to coordinate. And we at DTI exist to facilitate precisely this.</p>

<p>We invite you to <a href="https://dtinit.org/contact-us">join us</a> on this journey.</p>]]></content><author><name>Chris Riley</name></author><category term="AI" /><summary type="html"><![CDATA[The world of personal data in AI is changing as developer interest grows and portability falls short. Will collaboration or regulation come first?]]></summary></entry><entry><title type="html">Putting a price on portability</title><link href="https://dtinit.org/blog/2026/02/24/putting-a-price-on-portability" rel="alternate" type="text/html" title="Putting a price on portability" /><published>2026-02-24T00:00:00+00:00</published><updated>2026-02-24T00:00:00+00:00</updated><id>https://dtinit.org/blog/2026/02/24/putting-a-price-on-portability</id><content type="html" xml:base="https://dtinit.org/blog/2026/02/24/putting-a-price-on-portability"><![CDATA[<p>I literally went the extra mile to produce this newsletter, going for a tapas lunch with <a href="https://www.oxera.com/people/tanja-salem/">Tanja Salem</a>, a highly regarded regulatory economist now at Oxera (formerly Director of Economics at BT Group). Over small plates, we covered a big topic: <em>whether, and in what circumstances, data holders should be permitted to recover the costs of supporting data transfers.</em></p>

<p>(Disclaimer for the skim readers: DTI is absolutely not proposing the introduction of fees in the context of user-initiated transfers of personal data.)</p>

<p><strong><em>A tricky policy question</em></strong></p>

<p>The UK is looking to be a leader in data portability initiatives, bringing forward what it calls “Smart Data” schemes in a range of sectors, through which it is hoping to replicate the success of Open Banking. These include schemes for financial services, energy, telecoms and digital markets, which could include both personal and non-personal data within their scope. In this work, the UK government faces a difficult and somewhat controversial policy question looming on the horizon: who should cover the costs of facilitating user-requested data transfers?</p>

<p>Recent regulatory precedents suggest the burden will be placed on the incumbent data holders, with the aim of minimising impediments to individuals or third-party businesses accessing the data. For example, data controllers in the EU and UK are generally not permitted to charge for user-led transfers of personal data under the data portability provisions within the GDPR, nor can “gatekeepers” under the EU’s Digital Markets Act (DMA), nor the largest banks in the UK as part of Open Banking arrangements that were triggered by the CMA’s 2017 market investigation into retail banking and complemented by the Second Payment Services Directive (PSD2).</p>

<p>However, unlike those existing regulatory frameworks, the UK’s Data (Use and Access) Act 2025, which enables the introduction of Smart Data Schemes, <a href="https://www.legislation.gov.uk/ukpga/2025/18/section/11">explicitly leaves open the potential for data holders to charge third parties for access.</a></p>

<p>As sector-specific schemes are developed, this issue is likely to elicit some polarised views. On one side, there are strong economic arguments for incentives so data holders continue to invest in data, high-quality data transfer tools, and unlocking data as an economic asset. On the other, some may question whether fees for data access are consistent with policy aims of promoting competition, unlocking innovation, and empowering consumers.</p>

<p><strong><em>The economist’s perspective</em></strong></p>

<p>Here is (roughly) how our conversation went…</p>

<p><strong>Tom:</strong> I enjoyed reading your <a href="https://www.linkedin.com/feed/update/urn:li:activity:7414984108763971584/">report on fees for Smart Data schemes</a>. The thing that struck me was that your proposed framework includes fees for data portability in nearly all circumstances, with data holders always permitted to recover costs. This is different to what we have seen in Open Banking regulation in the UK and the EU, or in digital markets regulation. What is your thinking behind this? Won’t we see more innovation if data is freely available?</p>

<p><strong>Tanja:</strong> Of course, demand will inevitably be higher (at least initially) if there is no charge. Economists call that pricing that facilitates “static competition”. But in return you may see lower quality supply, fewer sustainable use cases and weaker investment. Economists call that “dynamic competition” over time. It is about striking the right balance between these factors.</p>

<p>On the one hand, if data is freely available to make new scientific discoveries or provide public services, then there’ll be more opportunities for more people to contribute and create things that ultimately will benefit everyone.</p>

<p>But on the other hand, there are also good reasons why data should not be free because doing so could lead to under-investment – in data and also the products that use it. If companies must build data tools entirely at their own expense, with no financial incentives in place to generate any meaningful demand for the functionality, the natural response will be to do the bare minimum for legal compliance. This means that outcomes may suffer.</p>

<p><strong>Tom:</strong> That makes sense, though I would add that collective action can also help to address this challenge by sharing the costs of investment while delivering better outcomes. The Data Transfer Project is one example where this kind of collaboration to support reciprocal transfers has made progress without fees.</p>

<p>Tell me more about the thinking behind your framework then. Surely you can’t have monopolies profiting from policies that are intended to address their market power?</p>

<p><strong>Tanja:</strong> There are a few factors that can affect the appropriate fee level, including the type of data and the motivations for the intervention, such as addressing competition challenges. Prof. Sean Ennis of the Centre for Competition Policy at the University of East Anglia and I created a framework with three categories of pricing solutions to account for this:</p>

<p>1) where data can be shared across markets to deliver significant consumer and potentially social benefits (for example in health and transport/smart-city-type applications), without undermining data holders’ business models: transaction-cost-only pricing; <br />
2) where data sharing in competitive markets may undermine data holders’ business models: opportunity cost recovery; and <br />
3) where data sharing is required to remedy a competition problem in a market: at least transaction cost plus a margin, or a value-based element subject to the case-specific competitive assessment.</p>

<p>Even in cases of addressing market power, usually permitting a fee with a reasonable margin will create incentives that drive the most efficient outcomes. There are standard ways of doing this using cost-based, benchmark-based, income-based and externalities-based valuation methods (<a href="https://www.oxera.com/insights/agenda/articles/if-data-is-so-valuable-how-much-should-you-pay-to-access-it/">here</a>’s a good overview by Oxera, for example).</p>

<p><strong>Tom:</strong> I see where you are coming from regarding incentives. But the framework could be challenging to implement – both practically and politically – in the context of digital markets, where so much of the data collected is personal, and given the regulatory frameworks already in place. Some might also argue that where companies extract substantial profits from the collection of large volumes of personal data, supporting onward transfers of that data is merely a cost of doing business.</p>

<p><strong>Tanja:</strong> I addressed these issues in my paper, which points out the risk that when the economic benefits of participating in data sharing are not clear, it’s difficult to incentivise the provision of high-quality data products. The OECD has also pointed this out (<a href="https://one.oecd.org/document/COM/DSTI/CDEP/STP/GOV/PGC(2024)1/FINAL/en/pdf">here</a>). Of course regulation can always force supply, but as regulators in other sectors (telecoms, utilities) know, that’s hard to get right and is not ideal in differentiated product markets.</p>

<p>The DMA has required the gatekeepers to provide continuous and real-time access to data to authorised third parties free of charge. How is that working out?</p>

<p><strong>Tom:</strong> If you are asking me should Article 6(9) of the DMA be viewed as a positive success story, then I would say absolutely, yes! It has been a catalyst for major progress for data portability in digital markets, the likes of which we have not seen before. But if you are asking me whether the data portability tools could have been even more effective if the gatekeepers were offered carrots as well as sticks, I would probably agree.</p>

<p>Let’s just say hypothetically that we did agree a fee was justified for creating the right incentives for data holders, wouldn’t that just kill any startup that came along trying to create a new type of service? Digital services often struggle with strong network effects, and fees for data access could really stifle the kind of innovation that policy makers are looking to unleash. From my experience at a startup data intermediary, a fee each time a user wanted to share their data would have made the business completely non-viable in those early stages. Successful data transfer takes two: by improving the incentives for data holders, won’t we reduce the ability or incentive for data recipients to participate?</p>

<p><strong>Tanja:</strong> Yes, that is certainly a risk. In new markets, where users need to experience the value of a proposition before it becomes more mainstream, initial discounts can be important, ultimately to drive volume. So initially, high input prices, even if cost reflective, might be a challenge.</p>

<p><strong>Tom:</strong> I can see that.</p>

<p><strong>Tanja:</strong> Once higher adoption is achieved, and learning effects happen in competitive markets, prices tend to come down as average costs reduce. We’ve seen this in mobile data since the iPhone’s launch in 2008, and in technologies from batteries to LEDs. So the issue is one of upfront cost when the uncertain rewards come later.</p>

<p>If both data holder and data recipient can see the potential in a proposition, teething problems can typically be resolved without regulatory intervention where there isn’t market power. Firms in competitive markets holding data will want to establish long-term partnerships with firms whose know-how can help them create viable products, potentially including low or zero entry prices as part of the business case, with future pay-offs shared between partners.</p>

<p>And even where there is market power, it’s important to ensure that investment in data and the infrastructure that supports it will continue to be funded. Where pricing ends up being imposed, it should align with incentives to achieve that, and long-term contracts can also play a role here.</p>

<p><strong>Tom:</strong> I suppose it also depends on the type of data we are talking about. I am on board with your proposed more flexible approach where data sharing could undermine business models. I would actually question whether such a requirement is justified in the first place, regardless of fee structures. For example, at DTI we have been talking a lot lately about data portability from LLMs and AI assistants, including the need to capture both sides of conversation histories. But I absolutely draw the line when it comes to underlying model weights and parameters that are the individual company’s valuable IP. Forcing the sharing of that kind of information sets a harmful precedent and would be difficult to compensate for. Is this the kind of thing you mean?</p>

<p><strong>Tanja:</strong> Here it’s likely to be harder to land on a one-size-fits-all solution. As an economist I’d say the challenge with legislation in this area is that it is hard to arrive at economically meaningful legal distinctions between different types of data. Whereas some legal distinctions have been hard-coded into the EU Data Act, there is still an opportunity for economically meaningful distinctions to be drawn in the future implementing regulations for the Data (Use and Access) Act in the UK, and potentially also the EU Financial Data Access regulation (FIDA).</p>

<p>As you say, when it comes to model weights – ultimately also information in digital form – IP rights will likely kick in. It appears that some legislation might jar with that. Forcing openness by imposing regulation that potentially interferes with IP rights or database rights is clearly highly risky for incentives to invest in these in the first place.</p>

<p><strong>Tom:</strong> Where do you see this going then? As the UK brings through firm proposals for an Open Finance Scheme, do you expect your framework for charging to be applied? And what about in some of the other schemes like digital that are perhaps less sector specific?</p>

<p><strong>Tanja:</strong> I certainly hope the framework we created will be useful, yes.</p>

<p>Open data initiatives such as smart cities suggest there is huge scope for voluntary open data and long-term commercial partnerships between data holders and data recipients, enabling the sharing of risk and reward. A key distinction will be the presence or absence of market power or other market failures; where these are absent, government policy and regulation should facilitate rather than determine outcomes.</p>

<p>As it spans all sectors, smart data is unlikely to be a one-size fits all policy – and different use cases and sectors will come with different opportunities, challenges and risks.</p>]]></content><author><name>Tom Fish</name></author><category term="policy" /><summary type="html"><![CDATA[Today’s note is a discussion with Tanja Salem on the complex topic of charging fees for data transfers in various contexts, including UK Smart Data.]]></summary></entry><entry><title type="html">Data portability - it’s not rocket science</title><link href="https://dtinit.org/blog/2026/02/10/not-rocket-science" rel="alternate" type="text/html" title="Data portability - it’s not rocket science" /><published>2026-02-10T00:00:00+00:00</published><updated>2026-02-10T00:00:00+00:00</updated><id>https://dtinit.org/blog/2026/02/10/not-rocket-science</id><content type="html" xml:base="https://dtinit.org/blog/2026/02/10/not-rocket-science"><![CDATA[<p>In an era of extraordinary technological progress – with driverless taxis navigating our roads and pop stars performing in space – any suggestion that it may not be technically feasible for one organisation to transfer data directly to another deserves far closer scrutiny.</p>

<p>Since 2018, technical feasibility has been treated by many organisations in the EU and the UK as a legal loophole in the obligation to support data portability through direct transfers. This element of GDPR Article 20 has never had much bite, because the words have no single clear meaning.</p>

<p>This has long been a bugbear of mine, as has regulators’ and legislators’ heads-in-the-sand approach to the issue (most recently in the <a href="https://www.edpb.europa.eu/our-work-tools/documents/public-consultations/2025/joint-guidelines-interplay-between-digital_en">draft guidelines on the interplay between the DMA and the GDPR</a>). However, when I learned that the problem is being exported to the US through State legislation such as <a href="https://le.utah.gov/~2025/bills/static/HB0418.html">Utah’s Digital Choice Act</a>, I decided it was time to revisit the topic.</p>

<p>The term ‘where technically feasible’ was added to the direct transfer component of <a href="https://gdpr-info.eu/art-20-gdpr/">GDPR Article 20</a> for very understandable reasons. Unlike the DMA, which applies to a small number of very large technology companies, the GDPR has wide application to data controllers in the EU of all shapes and sizes. From farmers, to factories, to florists, the data portability provisions of the GDPR will likely apply if data controllers process personal data on the basis of consent or the performance of a contract. So, quite reasonably, the authors of the GDPR inserted a carve-out for organisations that would find it challenging to implement.</p>

<p>Unfortunately, this carve-out is open to different interpretations, and has consequently acted as a barrier to effective data portability implementation and enforcement throughout the European Union and the UK ever since.</p>

<p><strong><em>What does ‘technically feasible’ mean?</em></strong></p>

<p>The <a href="https://www.collinsdictionary.com/dictionary/english/feasible">Collins English Dictionary</a> felt like an obvious starting point for my research: it defines feasible as <em>“able to be done or put into effect; possible”.</em></p>

<p>Finding the consensus meaning of a commonly used phrase is also a strong use case for Large Language Models (LLMs), given they have been trained on much of the public Web:</p>

<ul>
  <li>
    <p>ChatGPT told me that <em>“Technically feasible means that something can be accomplished using existing or attainable technology, skills, and resources, even if it may be difficult, expensive, or impractical for other reasons. In other words: it’s possible in theory and in practice from a technical standpoint.”</em></p>
  </li>
  <li>
    <p>Along similar lines, Gemini explained <em>“When someone says a project is technically feasible, they mean it is actually possible to build or implement with the technology, tools, and expertise currently available.”</em></p>
  </li>
</ul>

<p>One critical area of uncertainty that these two responses subtly highlight is whether direct transfers need to be feasible with the technology and skills an organisation currently holds, or whether technical feasibility also takes into account technology that could be relatively easily procured or built within a reasonable time frame.</p>

<p><a href="https://ec.europa.eu/newsroom/article29/items/611233">Official EU guidance</a> on the right to data portability adopted in 2016 sheds some light on the baseline requirements for the technical feasibility of direct transfers, which are:</p>

<ul>
  <li>Communication between two data controllers’ systems is possible;</li>
  <li>The transmission can take place in a secure way; and</li>
  <li>The receiving system is technically in a position to receive the incoming data.</li>
</ul>

<p>These do not appear to set a high bar.</p>

<p><strong><em>Practical interpretations</em></strong></p>

<p>There are numerous practical solutions available for facilitating direct transfers of data that can meet the above criteria. The official guidance itself provides some examples, including secure messaging, an SFTP server, a secured WebAPI, or a WebPortal, going on to emphasise that data subjects should be enabled to use personal data stores, PIMS, and other trusted third-parties.</p>

<p>Since that guidance was written a decade ago, a myriad of affordable options have emerged online for secure cloud storage, as well as commercial services that are dedicated to supporting secure transfers of large files (such as <a href="https://wetransfer.com/">WeTransfer</a>). The Data Transfer Project (DTP) has also demonstrated the art of the possible when organisations collaborate to develop interoperable, reciprocal, and scalable portability solutions based on common data models. Although not all organisations will have the resources to invest in large-scale portability initiatives like the DTP, many of the alternatives are far less complex, meaning any barriers to implementation must surely be non-technical (e.g. cost or lack of business incentive).</p>
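<p>The baseline criteria above ask very little of an implementer. As a purely illustrative sketch (the payload shape and checksum scheme below are assumptions for this example, not part of any standard or of the official guidance), a sending controller only needs to serialize records and attach an integrity digest, and the receiving system only needs to parse the archive and confirm it arrived intact:</p>

```python
import hashlib
import json

def package_export(records):
    """Serialize records into a portable JSON archive plus a SHA-256
    digest the receiving controller can use to verify integrity."""
    payload = json.dumps({"records": records}, sort_keys=True).encode("utf-8")
    return payload, hashlib.sha256(payload).hexdigest()

def accept_import(payload, expected_digest):
    """A receiving system is 'technically in a position to receive the
    incoming data' if it can parse the archive and confirm it arrived
    intact -- nothing more exotic is required."""
    if hashlib.sha256(payload).hexdigest() != expected_digest:
        raise ValueError("archive failed integrity check")
    return json.loads(payload)["records"]
```

<p>In practice the archive would travel over an authenticated HTTPS or SFTP channel, which supplies the “secure way” criterion; the point is that none of the three requirements demands novel engineering.</p>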

<p>Rather than focusing too much on the relatively straightforward question of when direct transfers of any kind are technically feasible (the answer: almost always), I am more interested in considering when it is technically feasible to deliver effective data portability, e.g. through a scalable solution such as the DTP or via a purpose-built API. I’ve illustrated my thinking below by describing six plausible scenarios in which a data controller cannot immediately support a transfer request. In each case, I’ve indicated the extent to which the barriers are technical in nature.</p>

<p><img src="/images/blog/tfmatrix.png" alt="A matrix showing six reasons why direct API transfers might not be available, ranked from highly technically feasible to technically challenging." class="blog-image" /></p>

<p>At one end of the spectrum, if organisations already have direct transfer tools such as an API at their disposal but choose to restrict their use for data portability in some way, then it would appear any barriers to effective data portability in these cases are non-technical (such as assertions of regulatory obstacles to implementation).</p>

<p>At the other end of the scale, many organisations (probably a very long tail) may simply not have the technical know-how, software, or bandwidth required to facilitate data exports in a secure and efficient manner. Technical feasibility will be a barrier to effective data portability in these circumstances. Somewhere in the middle, there will be many organisations that do not currently have data transfer tools or technology available to them, but could realistically build or acquire them in time.</p>

<p>While it might be tricky to know where to draw the line on technical feasibility from a legal standpoint, let’s put things into perspective with a reminder of some of the amazing things that are technically feasible in 2026:</p>

<ul>
  <li>Space tourism with re-usable rockets</li>
  <li>Driverless taxis</li>
  <li>Laboratory grown meat</li>
  <li>3D bioprinting of human organs</li>
  <li>Direct communication from brains to computers</li>
</ul>

<p>How about user-led data transfers? Well, let’s just say it’s not rocket science.</p>]]></content><author><name>Tom Fish</name></author><category term="policy" /><summary type="html"><![CDATA[Building seamless, secure and scalable data transfer tools is challenging. At what point do effective direct transfers become technically feasible?]]></summary></entry><entry><title type="html">DTI’s 2025 Annual Report</title><link href="https://dtinit.org/blog/2026/01/27/annual-report-2025" rel="alternate" type="text/html" title="DTI’s 2025 Annual Report" /><published>2026-01-27T00:00:00+00:00</published><updated>2026-01-27T00:00:00+00:00</updated><id>https://dtinit.org/blog/2026/01/27/annual-report-2025</id><content type="html" xml:base="https://dtinit.org/blog/2026/01/27/annual-report-2025"><![CDATA[<p>I’m pleased to share <a href="https://dtinit.org/assets/DTI-Annual-Report-2025.pdf">DTI’s annual report</a> for last year, calendar year 2025, effectively our third full year in operation. As I write in the note, it’s remarkable how different each of those years has felt – although there is continuity in much of our work, the context in which we operate is constantly shifting, a dynamic we are certainly not alone in experiencing in this decade.</p>

<p>In our first annual report, I described 2023 as our “launch” year. Last year, I framed 2024 as our “journey.” With this year’s report, I note the accelerating pace and demands of our task as an ongoing “march” forward. We can’t know what the year ahead will bring, but I have a hunch it, too, will be quite different from what has come before. (My colleague Tom has shared <a href="https://dtinit.org/blog/2026/01/13/portability-predictions-2026">some predictions</a>, in case you missed our last note!)</p>

<p>I recommend you read the report – it’s not too long, nor I hope too long-winded. But let me provide three bullet points to summarize:</p>

<ul>
  <li><strong>We shipped.</strong> We continued our work on the core Data Transfer Project open-source toolkit for simple and secure end-to-end data transfers, but our pilot <a href="https://dt-reg.org/">Data Trust Registry</a> project rose above DTP in widespread visibility as we shipped infrastructure, secured a platform partner, and signed up several organizations to our trust levels 1 and 2.</li>
  <li><strong>We showed up.</strong> Our goal is to be at the table whenever data portability is on the agenda. We organized workshops and events and spoke at conferences. We submitted comments to governments in multiple jurisdictions. And we published original work on critical issues, including realtime portability and AI.</li>
  <li><strong>We stayed on target.</strong> The world is changing, and the landscape for portability is evolving and growing with it. In 2025 we grew to meet that, adding new headcount and new affiliates. Yet our mission remains our north star: to empower people by building a vibrant ecosystem for simple and secure data transfers. As we increase our work on topics like trust and AI that were ancillary at our beginnings and are now central, our purpose does not and will not change.</li>
</ul>

<p>This year is poised to be even bigger for our work than any of the years thus far. And we’re ready for it. Stay tuned.</p>]]></content><author><name>Chris Riley</name></author><category term="news" /><summary type="html"><![CDATA[We’re pleased to share the annual report for 2025, our third full year in operation, in which we reflect on our impact within the ecosystem. Enjoy!]]></summary></entry><entry><title type="html">Predictions for the 2026 edition of data portability unwrapped</title><link href="https://dtinit.org/blog/2026/01/13/portability-predictions-2026" rel="alternate" type="text/html" title="Predictions for the 2026 edition of data portability unwrapped" /><published>2026-01-13T00:00:00+00:00</published><updated>2026-01-13T00:00:00+00:00</updated><id>https://dtinit.org/blog/2026/01/13/portability-predictions-2026</id><content type="html" xml:base="https://dtinit.org/blog/2026/01/13/portability-predictions-2026"><![CDATA[<p>January provides a natural break point to reflect on progress and to set new goals and plans for the year ahead. Doing so has me excited about what a transformational year 2026 will be for data portability as a policy area in the tech sector, driven by a combination of regulatory interventions, technological advancements, and market developments. So I thought I would jump on the bandwagon and give you my predictions for portability-related developments to look out for in 2026.</p>

<p>So here they are…</p>

<ol>
  <li>
    <p><strong>OpenAI will be designated as an ‘emerging gatekeeper’ in the EU.</strong>  Following the one-year review of the Digital Markets Act (DMA), I expect the European Commission to move towards classifying ChatGPT as a “Virtual Assistant”, then applying a targeted subset of the DMA provisions to the service (including Articles 6(9) and 6(10)). As a result, I predict ChatGPT will implement a data portability API, supporting ongoing developer access to daily downloads of users’ data such as conversation histories. In a rapidly evolving technology context, I expect lots of discussion of scope, and anticipate greater visibility into <a href="https://dtinit.org/blog/2025/08/26/path-forward-AI-portability">DTI’s AI portability principles</a>. The Virtual Assistant CPS may also be applied more widely to existing gatekeepers that operate AI-powered chatbots, thereby expanding the scope of data included in their data portability tools.</p>
  </li>
  <li>
    <p><strong>The UK will introduce a Smart Data Scheme for digital markets.</strong> This prediction is really about the when rather than the if, and I am expecting a rapid timeline from the Department for Science, Innovation and Technology (DSIT), with the necessary secondary legislation in place before the end of the year (just). That would be an extremely ambitious timeline but, for a government motivated by economic growth (and with no money to spend), it makes sense to move quickly.</p>
  </li>
  <li>
    <p><strong>DTI’s Data Trust Registry will reach critical mass on both sides.</strong>  Perhaps this is a bit of a cheat prediction. After all, it is something I am looking to influence directly, and we have already made significant progress in this direction. But nonetheless it feels sufficiently noteworthy for inclusion. I estimate that the tipping point for overcoming the ‘chicken and egg’ effects of this two-sided platform we are building will be three large platforms relying on <a href="https://dt-reg.org/">our Registry</a> for verification, and 30+ services registered and listed and set up for annual review. I am confident we will get there comfortably in 2026.</p>
  </li>
  <li>
    <p><strong>Data portability use cases will be proven as commercially viable.</strong>  I’m certainly not predicting an explosion of use cases with billions of users - I will save that prediction for 2027! But in this coming year I expect to see two very important developments for data portability use cases. First, a few of the most promising startups built on top of data portability APIs will become profitable and/or attract major inward investment (though we may be too early to start talking about exits). Second, a handful of existing businesses with established userbases will incorporate portability as a new feature into their offerings to enhance personalisation of their service. These early success stories will prove useful for regulators around the world examining digital market regulation.</p>
  </li>
  <li>
    <p><strong>Portability will be trialled to support consented personalisation of ads.</strong>  Perhaps as a subset of the prediction above, I foresee that some ad-funded apps will trial the use of data portability APIs to power their ad targeting as a replacement for cross-site or cross-app tracking. For example, apps might request that their users share their personal data from other platforms such as Facebook, YouTube or the App Store, with explicit consent that it could be used to serve them relevant ads. Gaming seems a strong candidate for this move, given that games providers could incentivise users to share their data by giving them non-financial rewards such as extra lives or other in-game features that players might otherwise purchase through in-app payments.</p>
  </li>
  <li>
    <p><strong>Personalised audio content will grow in popularity.</strong>  The shift towards AI-generated audio content going mainstream started last year, with AI-generated music <a href="https://www.billboard.com/lists/ai-artists-on-billboard-charts/">entering the charts and capturing headlines</a>. While I’m sure that controversial trend will continue this year, I expect another (slightly conflicting) shift to take place that could make the charts less relevant altogether. People are going to start creating and listening to their own music and podcasts, produced at the click of a button by AI-powered services tailored to their own tastes and interests. This development will drive new demand for data portability from major streaming platforms, with new AI-powered content creation services seeking access to their users’ past listening, viewing and searching data to support the creation of highly personalised content.</p>
  </li>
  <li>
    <p><strong>Europe will continue to lead the way on data portability.</strong>  Somewhat disappointingly, unless prediction four comes to fruition much sooner than expected, I don’t think we will reach the tipping point in 2026 where it is a no-brainer for all major tech platforms to support global availability of data portability APIs. Instead, I think we will see continued divergence between Europe (EU and UK) and the rest of the world. There will be some progress elsewhere (I hope to see at least one more existing portability API made available in the US for example) but regulatory uncertainty will stifle incentives, and the demand from developers and users in key markets such as the US will not be vocal enough (yet). As we strive to build a thriving global data portability ecosystem, this is one prediction we will be actively looking to disprove.</p>
  </li>
</ol>

<p>I look forward to returning to these at the end of the year to see how well I did. In the meantime, get in touch to let me know what you think, or even to share some predictions of your own!</p>]]></content><author><name>Tom Fish</name></author><category term="policy" /><summary type="html"><![CDATA[The year ahead will be transformational for data portability. Here are some predictions from our point of view at DTI.]]></summary></entry><entry><title type="html">Our Favorite Things</title><link href="https://dtinit.org/blog/2025/12/16/dti-eoy-unwrapped" rel="alternate" type="text/html" title="Our Favorite Things" /><published>2025-12-16T00:00:00+00:00</published><updated>2025-12-16T00:00:00+00:00</updated><id>https://dtinit.org/blog/2025/12/16/dti-eoy-unwrapped</id><content type="html" xml:base="https://dtinit.org/blog/2025/12/16/dti-eoy-unwrapped"><![CDATA[<p>Hello,</p>

<p>We made it! It’s the end of 2025, and wow, what a year it has been.</p>

<p>I hope you all are able to take a breath, step back from the work, and get some rest as the year winds down.</p>

<p>We here at DTI wanted to gift you some of our favorite things – things we’ve read this year that really mattered or opened our eyes, things we’ve learned we want to share, and even some personal tidbits we just couldn’t help but include, in hopes of bringing you some of the joy they brought us.</p>

<p>Enjoy and the happiest of holidays to you from all of us!</p>

<p>Chris, Lisa, Delara, Tom, Aaron, and Jen <br />
Your Data Transfer Initiative Team</p>

<p><strong>Our Favorite Things</strong></p>

<p><strong>Chris Riley, Executive Director</strong></p>

<ul>
  <li>I’ve spent a ton of time this year working on and thinking about AI. I could create a long list of recommendations just on that subject. But I’ll keep it to one, with apologies to the many left behind: Mustafa Suleyman’s writings on <a href="https://mustafa-suleyman.ai/seemingly-conscious-ai-is-coming">Seemingly Conscious AI</a>. As modern AI grows, so do its problems. (For a bonus, check out the <a href="https://www.techpolicy.press/we-need-to-control-personal-ai-data-so-personal-ai-cannot-control-us/">AI portability principles</a> we’ve published at DTI, to help people stay in control of the AI future.)</li>
  <li>In a year of significant, yet at times swirling, winds for data policy in Europe, I’ll highlight the <a href="https://www.gov.uk/guidance/data-use-and-access-act-2025-data-protection-and-privacy-changes">UK’s Data (Use and Access) Act</a>, which was fully adopted this year. The expansion of Smart Data into other sectors, including potentially the digital sector, will lead to a great increase in data transfers in practice, and DTI will be there to support.</li>
  <li>Shifting to the personal, I think this year saw an increase in fans for one of the greatest sources of unalloyed joy in my life: the TV show “<a href="https://www.nytimes.com/2025/07/02/dining/somebody-feed-phil-rosenthal.html?searchResultPosition=1">Somebody Feed Phil</a>.” Not as serious as Anthony Bourdain’s travel food shows, though inspired by them, Phil celebrates the positive and the wonder in good food wherever he goes, and the people, history, and culture behind the food.</li>
</ul>

<p><strong>Lisa Dusseault, CTO</strong></p>

<ul>
  <li>Favorite non-fiction book read this year: <a href="https://en.wikipedia.org/wiki/The_Unaccountability_Machine">The Unaccountability Machine</a>. Favorite new fiction series: <a href="https://en.wikipedia.org/wiki/Dungeon_Crawler_Carl">Dungeon Crawler Carl</a> – I devoured these at the beginning of 2025.</li>
  <li>DjangoCon, in Chicago, where I spoke in October, was a terrific experience with friendly, supportive people, all willing to share their knowledge. Standout talk: <a href="https://www.youtube.com/watch?v=Ws9lNrrK8dw&amp;list=PL2NFhrDSOxgUSZVGkmbMhUpaaZ1ORfpCl&amp;index=17">AI Modest Proposal</a> by Mario Munoz.</li>
  <li>Favorite new DTI collaborators: getting to know <a href="http://Fabric.io">Fabric.io</a>, <a href="http://koodos.com">Koodos</a> and <a href="http://inflection.ai">Inflection AI</a> this year, all of whom became DTI affiliates. I love learning about startups’ journeys and their vision.</li>
</ul>

<p><strong>Tom Fish, Head of Europe</strong></p>

<ul>
  <li>On the policy side of things, a personal highlight of the year for me was the UK government <a href="https://www.gov.uk/government/calls-for-evidence/smart-data-opportunities-in-digital-markets">consultation</a> on whether and how to implement a Smart Data scheme for digital markets. Having originally proposed this idea a few years ago in a former role, it reaffirmed my personal public policy mantra: <em>“it can all start with a <a href="https://gener8ads.com/blog/open-digital-an-entirely-unoriginal-idea/">blog</a>”.</em></li>
  <li>When selling the value of data portability to policy makers and politicians over the years, I have often called on the expression <em>“if you build it, they will come”</em>, slightly adapted from the 1989 classic <a href="https://www.imdb.com/title/tt0097351/">Field of Dreams</a>. At an event in London on Context Portability for AI Agents, hosted alongside two DTI members Google and Fabric, I officially retired this movie reference, as data portability use cases are finally emerging from the cornfield. (<a href="https://www.youtube.com/watch?v=qyYT-O9lxTU">Watch here from 28:00 onwards</a>).</li>
  <li>I’m sure this will also appear in all of my colleagues’ lists, but my biggest highlight of 2025 is of course joining DTI at the start of the year. As I said in <a href="https://dtinit.org/blog/2025/03/25/DTI-in-Europe">my first DTI newsletter</a>, <em>“my joining DTI feels like it was a foregone conclusion since my first conversation with its Executive Director Chris Riley in July 2023.”</em> It hasn’t disappointed!</li>
</ul>

<p><strong>Jen Caltrider, Director of Research &amp; Engagement</strong></p>

<ul>
  <li>My favorite dense slog of an academic read that actually changed my world view on how data portability could change the world for good would be this paper <a href="https://cdn.vanderbilt.edu/vu-URL/wp-content/uploads/sites/356/2025/05/25192846/Fenwick-FINAL.pdf">Data Portability Revisited: Toward the Human-Centric, AI-Driven Data Ecosystems of Tomorrow</a>. Don’t let the hefty wordcount of an academic paper fool you, it’s really quite good. And if you don’t have time to read it all, skim the first parts and then really read section IV on Portability Reimagined.</li>
  <li>As a consumer privacy advocate, I spent 2025 really trying to figure out how data portability could make life better for people and give them back control over their data and their privacy. The best person I’ve found out there already talking about the potential for this future is Jamie Smith, who writes the <a href="https://www.customerfutures.com/">Customer Futures</a> substack newsletter. Please, go check it out, and I’d recommend starting <a href="https://www.customerfutures.com/p/a-data-portability-earthquake-is">here</a> (which I know is part 2, you should also read part 1 and then go from there).</li>
  <li>My “fun” read recommendation probably isn’t exactly all that fun to read, as it’s a Harvard Law Review article from 1890. But, stay with me here, it’s actually really, really cool. Back then, some smart legal types (<em>Warren and Brandeis</em>) wrote a paper called <a href="https://groups.csail.mit.edu/mac/classes/6.805/articles/privacy/Privacy_brand_warr2.html">The Right to Privacy</a> for the Harvard Law Review. In it, they defined privacy as the “right to be let alone.” People, we all deserve the right to be let alone in the AI age! Let’s embrace this definition of privacy please!</li>
</ul>

<p><strong>Delara Derakhshani, Director of Policy</strong></p>

<ul>
  <li>One of my favorite things this year has been a renewed interest in and ongoing momentum for data portability – in the U.S., around the world, and increasingly across new sectors. Part of my work at DTI is to track this, so I offer to you the tracker from <a href="https://dtinit.org/blog/2025/07/29/data-portability-regulatory">this July</a>. We are at a turning point for data portability and I’m excited to be part of this journey.</li>
  <li>Translating the technical work of portability to the everyday is tricky. Early in 2025 I wrote <a href="https://dtinit.org/blog/2025/01/14/what-ban-data">this blog post</a> that I wanted to highlight as a step forward. It spells out nicely and succinctly why data portability should matter to you, using the threat of a TikTok ban as an example.</li>
  <li>On the personal front, I’ve enjoyed spending more time with my mini-dachshund, Charlie Derakhshani, recently. Those of you who own this unique breed know that they ooze love and never leave your side – but that they would also probably happily trade you for a sandwich if they had the chance.</li>
</ul>

<p><strong>Aarón Ayerdis Espinoza, Software Developer</strong></p>

<ul>
  <li>This year I attended FediForum online, and among all the topics presented, there was a fantastic moment when <a href="https://www.imdb.com/es/name/nm5810189/">Elena Rossini</a> (whom I was very surprised to see after several years of watching a documentary she directed) appeared, presenting a <a href="https://www.youtube.com/watch?v=p9c2f63pIag">short film</a> explaining how the Fediverse works.</li>
  <li>Reading articles about data portability, and discussing the importance of our digital rights at social gatherings with former colleagues and classmates, has been a fun experience. It’s a topic that has largely gone unnoticed, even by people working in the same field as me. You should try talking about portability with your friends - they may never have thought about whether they can move their data.</li>
</ul>]]></content><author><name>The DTI Team</name></author><category term="engagement" /><summary type="html"><![CDATA[Enjoy your holidays with the gift of our favorite things from 2025.]]></summary></entry><entry><title type="html">The DMA-GDPR joint guidelines - new answers bring new questions</title><link href="https://dtinit.org/blog/2025/12/02/dma-gdpr-joint-guidelines" rel="alternate" type="text/html" title="The DMA-GDPR joint guidelines - new answers bring new questions" /><published>2025-12-02T00:00:00+00:00</published><updated>2025-12-02T00:00:00+00:00</updated><id>https://dtinit.org/blog/2025/12/02/dma-gdpr-joint-guidelines</id><content type="html" xml:base="https://dtinit.org/blog/2025/12/02/dma-gdpr-joint-guidelines"><![CDATA[<p>I am almost ready to hit send on DTI’s draft response to the EC and EDPB’s joint guidelines on the interplay between the Digital Markets Act and the General Data Protection Regulation. This newsletter gives you a flavour of the topics we have focused on, and why.</p>

<p>As an (entirely self-proclaimed) specialist in technology policy at the intersection of competition and data protection, the opportunity to feed into this document is as good as it gets for me. In my view, at least for the data portability sections that I examined, the document strictly speaking goes a little beyond the stated scope of the interplay between the two regulations, also providing some useful detail on how the DMA provisions themselves should be interpreted by gatekeepers. This is a good thing, if a little late in the day!</p>

<p>Stating the obvious, and helpfully so, the guidelines confirm that Article 20 of the GDPR and Article 6(9) of the DMA are complements to one another. They also clarify how compliance with Article 6(9) of the DMA fits within the framework of the legal responsibilities placed on gatekeepers by the GDPR. This is welcome and should provide additional confidence to all participants within the data portability ecosystem going forward.</p>

<p>Beyond this, the guidelines also provide some additional practical detail on how the data portability provisions in DMA Article 6(9) should be implemented by gatekeepers, addressing several topics that have been the source of lengthy and sometimes polarised debates over the last two years.</p>

<p>Of these new details, there are many areas where the merits of the policy direction could (and may continue to) be hotly debated, even though the intent and meaning of the guidance itself is pretty clear. For example, the guidelines set out a fairly explicit position on the treatment of other users’ personal data in the context of a data portability transfer. Many will agree with the position, just as many won’t. But most will understand what the text means.</p>

<p>Then, there are a smaller number of areas where the policy issue itself need not be particularly controversial, but the intent of the guidance appears to be open to various interpretations. Rather than answering questions, some sections of the text appear to pose new ones.</p>

<p>Given the objective of the guidance to promote a “consistent and coherent interpretation of the DMA and the GDPR”, I have focused on this latter category of issues where further clarity is needed.</p>

<p>The first of these areas is the interplay between Article 20 of the GDPR and Article 6(9) of the DMA. The document spends several pages detailing how the DMA’s data portability provisions should be interpreted, but the equivalent provisions in the GDPR are almost entirely overlooked. This is a shame, and feels like a real missed opportunity to provide some much needed clarity around the circumstances where a data controller should support direct transfers under Article 20. In particular, a few sentences covering what “where technically feasible” means in practice could be a game changer for the prospects of widespread user-led data transfers. After all, the world of technology has moved on a fair bit since the GDPR was drafted, so perhaps a refresh of thinking is needed beyond the gatekeeper seven.</p>

<p>The second issue we have highlighted is the guidance on the meaning of “continuous and real time”. There is some new detail on how to interpret this requirement, but I’m not convinced the new words are any less open to interpretation than the ones we already had.</p>

<p>In our response, we have encouraged an approach that is context specific and keeps user needs central, which could draw from the <a href="https://dtinit.org/blog/2025/11/04/what-does-real-time-mean">recent research by DTI Summer Fellow Thomas Carey-Wilson</a> that presented a Functional Real-Time framework to help conceptualise latency and speed in data portability.</p>

<p>The third issue we are drawing attention to is Trust. As anyone who has followed my writing on this topic will know, this is an area where DTI has skin in the game. I have previously highlighted the fact that <a href="https://www.techpolicy.press/building-trust-for-data-portability-within-the-dma-framework/">the DMA was unhelpfully silent on Trust</a>, so it is welcome that the guidelines now explicitly recognise the need for gatekeepers to onboard third parties, including by requesting identity documents and by integrating robust authentication processes into each data transfer request. However, they don’t go any further than that, appearing to rule out the placement of any other guardrails, and completely omitting any reference to the two big ‘C words’:</p>

<ul>
  <li><strong>Criminals:</strong> the guidelines state that “Gatekeepers can therefore not restrict, in any way, the data portability use cases and business purposes that authorised third parties can pursue with the data they receive under Article 6(9) DMA.” While I firmly agree with the spirit (and what I believe to be the intent) of this sentence, it oversimplifies the issue. What about use cases that are illegal? What about third parties that are suspected to be criminal enterprises, or even state actors? If the European Commission wants this guidance to be useful and credible, it needs to be more exhaustive about acceptable vetting procedures, and more explicit both that blocking such applications is necessary and that some basic checks to identify them are expected.</li>
  <li><strong>Consent:</strong> the guidance seems to suggest that gatekeepers must not do any checks in the onboarding process to validate that third parties intend to obtain valid consent, or even consent of any kind at all. Such checks can be very straightforward and light touch, by comparing the organisation’s privacy policy with an image or mock up of their consent screen. The benefits of doing this are immediately obvious – blatantly dishonest and deceptive businesses can be blocked, while the standard of consent is nudged upwards as third parties try harder in the knowledge that someone is checking their homework.</li>
</ul>

<p>As DTI is finalising the processes and documentation for our <a href="https://dt-reg.org/">Data Trust Registry</a>, you can be absolutely certain that a proportionate review of third-parties’ approach to consent will be a core component, as will the aim of blocking criminals’ access to user data. I’d suggest this is fully aligned with the complementary goals of data protection and market contestability. Don’t you agree?</p>

<p>The <a href="https://www.edpb.europa.eu/our-work-tools/documents/public-consultations/2025/joint-guidelines-interplay-between-digital_en">consultation</a> closes in two days, so you still have time to get involved and offer up your own views to these questions.</p>]]></content><author><name>Tom Fish</name></author><category term="policy" /><summary type="html"><![CDATA[The consultation closes in two days. In this email I preview what DTI is going to submit, focusing on direct transfers, trust, consent, and AI.]]></summary></entry><entry><title type="html">Quick Hits from DTI</title><link href="https://dtinit.org/blog/2025/11/18/quick-hits" rel="alternate" type="text/html" title="Quick Hits from DTI" /><published>2025-11-18T00:00:00+00:00</published><updated>2025-11-18T00:00:00+00:00</updated><id>https://dtinit.org/blog/2025/11/18/quick-hits</id><content type="html" xml:base="https://dtinit.org/blog/2025/11/18/quick-hits"><![CDATA[<p>I have three items to share from our recent work at the Data Transfer Initiative: two recently published external articles, and an update on network growth. Enjoy!</p>

<h3 id="consent-and-portability-piece-by-tom-fish-at-tech-policy-press">Consent and portability piece by Tom Fish at <em>Tech Policy Press</em></h3>

<p>Today, <em>Tech Policy Press</em> published “<a href="http://techpolicy.press/data-portability-can-restore-real-consumer-choice-between-consent-or-pay-offerings-online/">Consent, pay or port</a>” by DTI Head of Europe Tom Fish. Tom digs into a privacy question that has been at a roiling boil in Europe for some time: whether current modalities of consent to data collection and use in order to use a service without payment are appropriate. Some companies – including Meta, one of DTI’s founding members – offer paid access to their services, and in some circumstances, also options for less personalized advertising. Tom identifies an orthogonal issue that is critical for meaningful consent: whether or not users can effectively port their data between services. Data portability is a necessary condition for empowering users and making sure that consent, and particularly the withdrawal of consent, is meaningful. Portability helps make markets work, and work to serve the interests of consumers.</p>

<h3 id="ai-and-privacy-piece-by-jen-caltrider-at-fast-company">AI and privacy piece by Jen Caltrider at <em>Fast Company</em></h3>

<p>DTI Director of Research and Engagement Jen Caltrider has a new piece in <em>Fast Company</em> this week entitled “<a href="https://www.fastcompany.com/91435189/ai-privacy-openai-tracking-apps">AI is killing privacy. We can’t let that happen</a>.” In it, Jen writes of the often-overlooked significance of the printing press in the history of privacy – how books give us space to read and to think in solitude. She proposes that in the emerging era of AI – one already marked by, let’s say, less-than-ideal levels of user empowerment over personal data – we look to data portability, “the underdog of privacy rights”, as the lever that we need to change the future of privacy for the better. Check it out!</p>

<h3 id="new-affiliates">New affiliates</h3>

<p>DTI is a membership organization, but a social welfare variant, not a trade association; our structure is one of our superpowers, in my view, as it lets us be grounded and aspirational in equal parts. I wrote <a href="https://dtinit.org/blog/2024/07/16/working-with-industry">a fairly extensive piece</a> last summer about how and why we work with industry to put real solutions into real people’s hands, while maintaining strategic independence and unwavering dedication to our mission of empowering people through data portability.</p>

<p>For some months now, <a href="https://dtinit.org/partners">our website</a> has listed our organizational partners, including founding members Apple, Google, and Meta and partners Amazon and ErnieApp. We’ve also long listed some other organizations with which we have various levels of association: the European Internet Forum and the World Wide Web Consortium, both with long-established membership structures, along with FediForum and the Trust Over IP Foundation.</p>

<p>But our network is broader than even these data points indicate. And in particular, we’ve begun identifying industry collaborators who provide immense support and alignment on specific projects, and with whom we’ve decided to codify a formal relationship as “affiliates.” Our website now lists <a href="https://inflection.ai/">Inflection AI</a>, which joined last year as our first affiliate, as well as <a href="https://onfabric.io/">Fabric</a> and <a href="https://koodos.com/">Koodos</a>. We’re delighted to have them on board the DTI train, and are looking forward to continued collaboration.</p>

<p>If you’re reading this and thinking, hey I want to get in on that – there’s a “Contact Us” link at the bottom of our website, or here: <a href="https://dtinit.org/contact-us">send us a note</a>!</p>]]></content><author><name>Chris Riley</name></author><category term="news" /><summary type="html"><![CDATA[Sharing three items from our recent work at the Data Transfer Initiative - two recently published external articles and an update on network growth.]]></summary></entry><entry><title type="html">What does ‘real-time’ data portability actually mean?</title><link href="https://dtinit.org/blog/2025/11/04/what-does-real-time-mean" rel="alternate" type="text/html" title="What does ‘real-time’ data portability actually mean?" /><published>2025-11-04T00:00:00+00:00</published><updated>2025-11-04T00:00:00+00:00</updated><id>https://dtinit.org/blog/2025/11/04/what-does-real-time-mean</id><content type="html" xml:base="https://dtinit.org/blog/2025/11/04/what-does-real-time-mean"><![CDATA[<p>My name is Thomas, and I am a Researcher at the Open Data Institute and was a Summer Fellow at the Data Transfer Initiative (DTI). My work operates at the intersection of technology, policy, and open ecosystems, and I’m grateful for the opportunity this summer to tackle one of the most pressing and ambiguous questions in digital regulation.</p>

<p>Across the globe, regulators are looking to data portability as a tool to dismantle data silos, lower switching costs for users, and create a more level playing field. Leading the way is the EU, where the Digital Markets Act (DMA) mandates “continuous and real-time” data portability.</p>

<p>However, these landmark regulations have left a critical interpretation gap. Namely, they never defined what “real-time” actually means. This ambiguity leaves regulators without a clear compliance benchmark and platforms without clear guidance. My research this summer, which has been published in the paper <a href="https://dtinit.org/assets/DefiningRealtimeTCW.pdf"><strong>Defining ‘real-time’: A toolkit for assessing data portability</strong></a>, addresses this gap directly.</p>

<h3 id="beyond-absolutes-introducing-functional-real-time-frt"><strong>Beyond absolutes: Introducing Functional Real-Time (FRT)</strong></h3>

<p>The central finding of our research is that “real-time” is not an absolute measure but is context-dependent. To bring clarity to this, we introduced two key concepts.</p>

<p>The first is Absolute Real-Time (ART), which is the theoretical, instantaneous moment an event occurs. This is a physical limit that is impossible to achieve in practice. The second, and more important, concept is Functional Real-Time (FRT). This is the point of diminishing returns, beyond which further reductions in latency (delay) provide no meaningful benefit to the user for a given task. For example, a 500-millisecond delay is disastrous for a video call, but a five-minute delay for a blood glucose reading may be perfectly functional, as the body’s own chemistry isn’t necessarily faster than that. So, the FRT is the threshold that matters.</p>

<p>To build this framework, we developed a new taxonomy to classify data transfer systems and used it to analyse 18 different implementations across the health, finance, and social media sectors. This allowed us to map why and how delays to FRT happen in practice.</p>

<h3 id="latency-as-a-feature-not-a-bug"><strong>Latency as a feature, not a bug</strong></h3>

<p>We found that latency is often not a technical failure but a deliberate architectural choice that trades raw speed for other essential functions.</p>

<p>One major factor is the use of intermediary processes. Financial aggregators like Plaid and Yodlee, for instance, introduce delays of seconds or even minutes. This is most likely explained by the vital functions they perform, like managing the complexity of thousands of different bank connections whilst running fraud checks (amongst other security measures). Given that getting this wrong could carry significant financial risk, the intermediary hub model is a legitimate trade-off here, sacrificing instantaneous speed for safety.</p>

<p>In other cases, delays are built in to add analytical value. For example, the Dexcom G7’s public API for third-party developers intentionally delays glucose data by over an hour. This is a policy choice to provide a high-quality, retrospective trend-data service rather than a live feed.</p>

<h3 id="the-natural-rhythm-of-data"><strong>The ‘natural rhythm’ of data</strong></h3>

<p>Beyond these architectural choices, we observed that data itself has an intrinsic cadence: a natural speed limit set by either physical or digital constraints.</p>

<p>This is most obvious in health monitoring, which has a physical cadence. As mentioned, a continuous glucose monitor (CGM) is limited by the body’s physiology. It takes at least 5 minutes for glucose to diffuse from the blood to the interstitial fluid, where sensors take their measurement. Therefore, the FRT for this use case is 5 minutes. Attempting to poll the sensor every second would be counterproductive, yielding no new information and needlessly draining the battery.</p>

<p>In purely digital systems, this cadence is often defined by a digital cadence related to mutability. Delivering an immutable (ie, unchangeable) data stream, like a stock tick from Alpaca or Tradier, simply requires appending new data. But platforms like Slack or Gmail are mutable (ie, messages can be edited or deleted). A “real-time” system here must not only deliver new messages quickly but also synchronise all edits and deletions instantly to all users. This creates a far more complex challenge and shifts the architectural priority from simple delivery to state synchronisation (eg, ensuring that all deletions, changes, etc, are the same across devices, as well as new entries), which itself can introduce latency.</p>

<h3 id="a-toolkit-for-regulators"><strong>A toolkit for regulators</strong></h3>

<p>This research provides policymakers and enforcers with a practical toolkit (the taxonomy and the FRT framework) to move beyond absolutist views of speed.</p>

<p>This framework has direct implications for enforcing laws like the DMA. For compliance assessments, the taxonomy helps identify when latency is a legitimate trade-off versus a deliberate tactic to undermine portability. This allows regulators to spot potential anti-competitive delay. This brings us back to Dexcom. While its public API has a 1-3 hour delay, Dexcom’s own first-party app receives data on a near-real-time 5-minute cycle. Our FRT framework allows a regulator to scrutinise this discrepancy and question whether the longer delays align with legitimate technical and physical constraints.</p>

<p>My time as a Fellow with DTI has reinforced my belief, and the core mission of the ODI, that effective policy must be built on a foundation of technical reality. It has been made clearer to me that we can help ensure that new data portability remedies are not just fast, but effective and truly fit-for-purpose by defining “real-time” in a functional, context-aware way.</p>]]></content><author><name>Thomas Carey-Wilson</name></author><category term="metrics" /><summary type="html"><![CDATA[DTI Summer Fellow Thomas presents a Functional Real-Time framework report to help contextualize latency and speed in data portability.]]></summary></entry><entry><title type="html">Oh Snap</title><link href="https://dtinit.org/blog/2025/10/21/oh-snap" rel="alternate" type="text/html" title="Oh Snap" /><published>2025-10-21T00:00:00+00:00</published><updated>2025-10-21T00:00:00+00:00</updated><id>https://dtinit.org/blog/2025/10/21/oh-snap</id><content type="html" xml:base="https://dtinit.org/blog/2025/10/21/oh-snap"><![CDATA[<p>Hello,</p>

<p>Sometimes understanding data transfer in the real world can be a challenge. It all boils down to one question: how much control do you really have over your own data in your digital life? Here’s a quick story that makes it real.</p>

<p>Snapchat – that funky messaging and photo app popular with the under 40 crowd – enables people to share disappearing photos and videos, often run through funny filters that make you look like a dog or an angry old lady. For many folks, it’s a fun, silly way to share bits of their lives with friends while retaining a sense of control. They can keep the photos and videos of happy times – what Snapchat calls Memories – but the recipients do not. Until now, all those Memories people gathered were stored on Snapchat free of charge. That’s all <a href="https://techcrunch.com/2025/09/29/snapchat-caps-free-memory-storage-launches-paid-storage-plans/">changing</a>. Now these memories could start disappearing for the sender as well.</p>

<p>Snapchat recently <a href="https://newsroom.snap.com/snap-memory-storage">announced</a> users will only be able to store up to 5GB of those photo and video Memories for free. After that, they will have to <a href="https://www.bbc.com/news/articles/cz69238p5p8o">pay for extra storage</a>, much like Apple and Google charge with their cloud storage services. Snapchat users will get 12 months to sort out if they want to pay an as yet unannounced fee for an additional 100GB, 250GB, or 5TB of storage. As you can imagine, people <a href="https://www.bbc.com/news/articles/c4g5ypl6nkzo">aren’t happy</a> about having to soon pay for something they’ve been getting for free. Especially when it comes to keeping the digital memories of some of their most nostalgic, sentimental, happy times.</p>

<p>Many Snapchat users now have a choice to make. Pay up and keep adding memories as long as they can afford it. Or spend time figuring out <a href="https://help.snapchat.com/hc/en-us/articles/7012305371156-How-do-I-download-my-data-from-Snapchat">how to download</a> all those memories to their device and save those locally. For some people, it won’t be a big deal to pay up to get more storage. For others, it won’t be a big deal to download everything to their computer and keep all their memories there. But other folks might not be able to afford the extra money to keep all their memories on Snapchat. Or they might not own a device with enough storage to keep all those memories close to home.</p>

<p>There’s two sides to this coin, right? Snapchat says they have trillions of people’s photos and videos stored on their servers, and we all know that data storage costs money. So it is not surprising that Snapchat doesn’t want to keep footing the bill for all our photos and videos. Google and Apple felt the same and we all took that in stride, for the most part.</p>

<p>The other side of this coin, though, is that users feel like the goal posts got moved on them. Snapchat encouraged them to share all their chats, photos, and videos and store them on the platform; that kept them coming back to Snapchat, which in turn made the company billions of dollars in ad revenue. Now users have their personal lives saved on Snapchat and they’re being asked to pay up or move out. It’s a good reminder that as a Snapchat user, you don’t control all your data.</p>

<p>Here’s the kicker though, and here’s where data transfer comes in and where Snapchat falls short. Yes, users can either pay up (if they can afford it) or download their memories to their own device (if they have one with enough storage space). What Snapchat users can’t do easily is transfer all those memories to another service and store them there. Maybe they already pay for Apple iCloud storage and want to keep it simple and put everything there. There is no easy way to do that. Yes, they could download their memories to their device and re-upload them to iCloud. But that’s not easy – it’s time-consuming, it’s tedious, and it takes a certain level of technical knowledge.</p>

<p>Here’s what we want to see. Snapchat, you moved the goal posts on your users. That’s your right as a company; you can change your business model whenever you want. But you didn’t give your users an easy, effective off-ramp – good data transfer tools – to take <strong><em>their</em></strong> memories somewhere else. This is a failure on your part. The good news is, it’s one that’s not that hard to fix.</p>

<p>Our lives are online these days. All those photos, videos, chats – memories – exist under someone else’s control. If we want to keep control of our lives – our memories – we need the ability to transfer that data where we want, when we want, because companies can always move the goal posts. We need the ability to move too.</p>

<p>Jen Caltrider</p>

<p>Data Transfer Initiative</p>]]></content><author><name>Jen Caltrider</name></author><category term="engagement" /><summary type="html"><![CDATA[Snapchat just made the case for why being able to transfer your data easily from one service to another really matters.]]></summary></entry></feed>