A turning point for AI portability
Two years ago, I made a prediction that in data portability, supply would exceed demand. The GDPR helped create a universal expectation that people should be able to – at the very least – download their data. (To be clear, Article 20 also requires companies to transfer data directly to another service, though that hasn’t manifested as much in practice.) Companies around the world have adopted various forms of data download functions, whether it’s a few clicks on a website that send an archive to the user’s email, or a form that must be filled out.
Personal data is immensely valuable. And much of that value has not yet been unlocked. It’s great to be able to get a copy of your data for your own archives and reference, and for switching services. But the real magic happens downstream, with vertical innovation: building tools and services that create new value from that data, including by integrating it across the services where it originated.
The tide is turning. Awareness of digital footprints has been growing for years, and everyday people are now learning more about what that means and how it can help them do things with their data. The fundamental creativity of technology builders is awakening. In parallel, people are generating more and more personal data, and consequently more potential downstream value. A huge factor amplifying these effects is AI.
Regular readers of this outlet know that DTI has been pushing for the importance of personal data portability in the context of AI for quite some time. These have been predictions of what’s to come and guidance on how to shape the future, drawn largely from my work and experience in the global tech sector, and my understanding of the possibilities of technology and how it can be both used and controlled. Three dynamics are emerging that validate these predictions:
- Developers are building tools to get value out of personal data – individual builders, not just large corporations – both with and through AI. Check out the reception for the world’s first AI portability hackathon, which DTI recently helped organize.
- For better or worse, people are freely adopting tools like OpenClaw and giving it access to all of the personal data they possess, including their local files and access credentials to remote services. This is a wildly insecure path, but it is being widely pursued nevertheless, because the value is there.
- People are making choices about which AI service to use based not on performance but on values, including flash reactions to news developments. And when they decide to switch, service providers and internet commenters are walking them through the best currently available pathways to transfer their data over.
My colleague Tom offered a prediction this year as well: “Data portability use cases will be proven as commercially viable.” At the hackathon in late February, there was at least one angel investor present to look for opportunities. I think Tom’s right, and alongside that, there will be rapid acceleration on the growth curve of demand for and adoption of data portability and personal data use, in and with AI.
Why is AI accelerating portability? First, it helps people prototype technologies based on little more than a concept, reducing technical knowledge and experience barriers. Opinions vary on whether “vibe coding” and similar AI-assisted development can substitute for production-quality or long-term maintainable software. However, it’s hard to deny that it makes it easier to test out ideas and hypotheses.
Second, it unlocks new recommendation and suggestion power based on user tastes. While this is perhaps fairly basic functionality, it’s incredibly valuable to help someone identify new music they might want to listen to, restaurants they might want to try, or products they might want to purchase – both to the individual and to the enterprise. If, as Eric Seufert has posited, “everything is an ad network”, then everything must also be a potential data portability use case.
Finally, AI interactions are themselves a new source of interesting and valuable data. People talk with their chatbots about lots of things. While some of this data can be extremely personal and sensitive, lots of it also can be extremely valuable, as we know from the ways in which it is used in fine tuning and improvements to the AI service itself. These same learnings and personalizations are of use in many other contexts as well.
But, how much is this last part true in practice? What form is portability of personal AI data taking today? Are the current tools and methods making the right data available? Will there be trust mechanisms in place, or will users be encouraged (or misled) to transfer potentially sensitive chat histories to new services without safeguards?
DTI has articulated our principles for how it should work in practice. TL;DR: We aren’t there yet. Portability demand is growing. Can the supply keep up?
It’s great to see experimentation with memory transfers, as Anthropic is doing. I appreciate that you can still export your raw personal data from Claude – as you can from ChatGPT and other AI services. I hope, but cannot be certain, that this will continue. Direct transfer of such data, as articulated in GDPR Article 20, remains largely a work in progress, with few exceptions. In the age of possibility brought about by modern AI, I struggle to imagine that technical feasibility could be a plausible barrier.
Trust is missing here as well. Our trust registry project, nearing the end of its pilot phase, vets third-party recipients of direct transfers of personal data to help protect people – checking that their data will not be stored insecurely or abused, and that relevant consent mechanisms meaningfully reflect what the company will do with the data.
Contrast DTI’s trust work with the realities of OpenClaw, which Simon Willison has described as the technical development most likely to result in a “Challenger disaster.” People are wantonly opening their local drives and connecting their access credentials to AI agents they not only do not actively control, but in many cases do not understand.
I have confidence in DTI’s partners and affiliates, who together lead on data portability in all its implementations. Joining us in our work means supporting our mission: “Empower people by building a vibrant ecosystem for simple and secure data transfers.” These companies make personal data available through many methodologies, including downloads, Data Transfer Project-powered direct transfers, and APIs. With our affiliate Inflection, we shipped a data model for conversation histories designed to maximize effective reuse. And our affiliates Fabric and koodos are building new tools and open ecosystems around personal context portability in AI, including this brand new context-use tool from Fabric. Context-use is a local, open source tool that converts user archives like full ChatGPT conversations and Instagram stories into personal context for agents like OpenClaw. In this way, agents are able to use full personal context safely without accessing primary user accounts.
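The pattern behind tools like context-use can be illustrated with a minimal sketch. This is not Fabric’s actual implementation or schema – the `archive_to_context` function and the simplified export structure below are hypothetical – but it shows the core idea: flatten a downloaded archive into a plain-text context file that a local agent can read, so the agent never needs credentials to the primary account.

```python
def archive_to_context(conversations, max_chars=4000):
    """Flatten exported conversations into a plain-text context block
    that a local agent can read without touching the user's account.

    Assumes a simplified, hypothetical export schema: a list of dicts,
    each with a "title" and a "messages" list of {"role", "text"} items.
    """
    lines = []
    for conv in conversations:
        lines.append(f"## {conv['title']}")
        for msg in conv["messages"]:
            lines.append(f"{msg['role']}: {msg['text']}")
    # Truncate so the context fits within an agent's prompt budget.
    return "\n".join(lines)[:max_chars]


# Hypothetical sample standing in for a real chat-history export.
sample = [
    {
        "title": "Trip planning",
        "messages": [
            {"role": "user", "text": "Find hiking trails near Lisbon."},
            {"role": "assistant", "text": "Try Sintra-Cascais Natural Park."},
        ],
    }
]

context = archive_to_context(sample)
print(context)
```

The key design choice is the one the paragraph above describes: the agent consumes a local, user-controlled artifact derived from the archive, rather than holding live access to the originating service.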
But I am worried about a reversion to historical patterns of trapping users in online services by their own data. Where there is money to be made, there is incentive to capture as much of it as possible. The question I asked in November 2023 has not been fully answered: “whether the future of generative AI will lock users into new technology silos, or empower them by ensuring portability.”
I’m also worried about privacy and security problems that could arise from an ecosystem of data movement that develops without collaboration and considerations of trust. In other portability contexts, great care is taken in scoping the data made available and in user understanding of the transfer and its safety. Without substantial investment in and coordination of portability, more problems – avoidable problems – will occur.
I’m not the only one thinking about the risks of consolidation and security in data flows. Regulation is on the horizon. In the EU’s recent DMA review process, Open Markets Institute and other commentators explicitly called on the European Commission to designate virtual assistants and chatbots as core platform services. Megan Kirkwood at Tech Policy Press wrote an overview of the issue.
In the United States, at the state level at least, there is ample regulatory appetite. In 2025 alone, more than 1,100 AI-related bills were introduced in U.S. states. The Digital Choice Act in Utah, although it is not without controversy and challenge in implementation, includes substantial data portability obligations for social media services; and similar laws have been proposed in several other states. It’s not hard to imagine these two forces – AI regulation and portability mandates – coming together.
DTI doesn’t take a position on regulatory matters, and we recognize that these are complex issues and regulation inherently involves tradeoffs. But we also recognize that regulation in some form is inevitable, regardless of one’s views on the merits.
Now is the time to get a head start on building portability infrastructure in AI the right way – together. We can, and should, collaborate on shared tools and methodologies to export and import personal data in AI, including both conversation histories as well as higher-level memories and contexts. It won’t take radical new engineering. Just the space and collective will to coordinate. And we at DTI exist to facilitate precisely this.
We invite you to join us on this journey.