Putting a price on portability
I literally went the extra mile to produce this newsletter, going for a tapas lunch with Tanja Salem, a highly regarded regulatory economist now at Oxera (formerly Director of Economics at BT Group). Over small plates, we covered a big topic: whether, and in what circumstances, data holders should be permitted to recover the costs of supporting data transfers.
(Disclaimer for the skim readers: DTI is absolutely not proposing the introduction of fees in the context of user-initiated transfers of personal data.)
A tricky policy question
The UK is looking to be a leader in data portability initiatives, bringing forward what it calls “Smart Data” schemes in a range of sectors, through which it hopes to replicate the success of Open Banking. These include schemes for financial services, energy, telecoms and digital markets, which could include both personal and non-personal data within their scope. In this work, the UK government faces a difficult and somewhat controversial policy question: who should cover the costs of facilitating user-requested data transfers?
Recent regulatory precedents suggest the burden will be placed on the incumbent data holders, with the aim of minimising impediments to individuals and third-party businesses accessing the data. For example, data controllers in the EU and UK are generally not permitted to charge for user-led transfers of personal data under the data portability provisions of the GDPR. The same goes for “gatekeepers” under the EU’s Digital Markets Act (DMA), and for the largest banks in the UK under the Open Banking arrangements triggered by the CMA’s 2017 market investigation into retail banking and complemented by the Second Payment Services Directive (PSD2).
However, unlike those existing regulatory frameworks, the UK’s Data (Use and Access) Act 2025, which enables the introduction of Smart Data Schemes, explicitly leaves open the potential for data holders to charge third parties for access.
As sector-specific schemes are developed, this issue is likely to elicit some polarised views. On one side, there are strong economic arguments for incentives that encourage data holders to continue investing in data, build high-quality data transfer tools, and unlock data as an economic asset. On the other, some may question whether fees for data access are consistent with policy aims of promoting competition, unlocking innovation, and empowering consumers.
The economist’s perspective
Here is (roughly) how our conversation went…
Tom: I enjoyed reading your report on fees for Smart Data schemes. The thing that struck me was that your proposed framework includes fees for data portability in nearly all circumstances, with data holders always permitted to recover costs. This is different to what we have seen in Open Banking regulation in the UK and the EU, or in digital markets regulation. What is your thinking behind this? Won’t we see more innovation if data is freely available?
Tanja: Of course, demand will inevitably be higher (at least initially) if there is no charge; economists would describe free access as promoting “static competition”. But in return you may see lower-quality supply, fewer sustainable use cases and weaker investment, which harms “dynamic competition” over time. It is about striking the right balance between the two.
On the one hand, if data is freely available to make new scientific discoveries or provide public services, then there’ll be more opportunities for more people to contribute and create things that ultimately will benefit everyone.
But on the other hand, there are also good reasons why data should not be free because doing so could lead to under-investment – in data and also the products that use it. If companies must build data tools entirely at their own expense, with no financial incentives in place to generate any meaningful demand for the functionality, the natural response will be to do the bare minimum for legal compliance. This means that outcomes may suffer.
Tom: That makes sense, though I would add that collective action can also help to address this challenge by sharing the costs of investment while delivering better outcomes. The Data Transfer Project is one example where this kind of collaboration to support reciprocal transfers has made progress without fees.
Tell me more about the thinking behind your framework then. Surely you can’t have monopolies profiting from policies that are intended to address their market power?
Tanja: There are a few factors that can affect the appropriate fee level, including the type of data and the motivations for the intervention, such as addressing competition challenges. Prof. Sean Ennis of the Centre for Competition Policy at the University of East Anglia and I created a framework with three categories of pricing solutions to account for this:
1) where data can be shared across markets to deliver significant consumer and potentially social benefits (for example in health and transport/smart-city-type applications), without undermining data holders’ business models: transaction-cost-only pricing;
2) where data sharing in competitive markets may undermine data holders’ business models: opportunity cost recovery; and
3) where data sharing is required to remedy a competition problem in a market: at least transaction cost plus a margin, or a value-based element subject to a case-specific competitive assessment.
Even in cases of addressing market power, usually permitting a fee with a reasonable margin will create incentives that drive the most efficient outcomes. There are standard ways of doing this using cost-based, benchmark-based, income-based and externalities-based valuation methods (here’s a good overview by Oxera, for example).
Tom: I see where you are coming from regarding incentives. But the framework could be challenging to implement – both practically and politically – in the context of digital markets, where so much of the data collected is personal, and given the regulatory frameworks already in place. Some might also argue that where companies extract substantial profits from the collection of large volumes of personal data, supporting onward transfers of that data is merely a cost of doing business.
Tanja: I addressed these issues in my paper, which points out that when the economic benefits of participating in data sharing are not clear, it is difficult to incentivise the provision of high-quality data products. The OECD has also pointed this out (here). Of course regulation can always force supply, but as regulators in other sectors (telecoms, utilities) know, that is hard to get right and is not ideal in differentiated product markets.
The DMA has required the gatekeepers to provide continuous and real-time access to data to authorised third parties free of charge. How is that working out?
Tom: If you are asking me should Article 6(9) of the DMA be viewed as a positive success story, then I would say absolutely, yes! It has been a catalyst for major progress for data portability in digital markets, the likes of which we have not seen before. But if you are asking me whether the data portability tools could have been even more effective if the gatekeepers were offered carrots as well as sticks, I would probably agree.
Let’s just say hypothetically that we did agree a fee was justified for creating the right incentives for data holders, wouldn’t that just kill any startup that came along trying to create a new type of service? Digital services often struggle with strong network effects, and fees for data access could really stifle the kind of innovation that policy makers are looking to unleash. From my experience at a startup data intermediary, a fee each time a user wanted to share their data would have made the business completely non-viable in those early stages. Successful data transfer takes two: by improving the incentives for data holders, won’t we reduce the ability or incentive for data recipients to participate?
Tanja: Yes, that is certainly a risk. In new markets, where users need to experience the value of a proposition before it becomes more mainstream, initial discounts can be important, ultimately to drive volume. So initially, high input prices, even if cost reflective, might be a challenge.
Tom: I can see that.
Tanja: Once higher adoption is achieved, and learning effects take hold in competitive markets, prices tend to come down as average costs fall. We’ve seen this in mobile data since the iPhone’s launch in 2007, and in technologies from batteries to LEDs. So the issue is one of upfront cost when the uncertain rewards come later.
If both data holder and data recipient can see the potential in a proposition, teething problems can typically be resolved without regulatory intervention where there isn’t market power. Firms in competitive markets holding data will want to establish long-term partnerships with firms whose know-how can help them create viable products, potentially including low or zero entry prices as part of the business case, with future pay-offs shared between partners.
And even where there is market power, it’s important to ensure that investment in data and the infrastructure that supports it continues to be funded. Where pricing ends up being imposed, it should be aligned with incentives to achieve that, and long-term contracts can also play a role here.
Tom: I suppose it also depends on the type of data we are talking about. I am on board with your proposed more flexible approach where data sharing could undermine business models. I would actually question whether such a requirement is justified in the first place, regardless of fee structures. For example, at DTI we have been talking a lot lately about data portability from LLMs and AI assistants, including the need to capture both sides of conversation histories. But I absolutely draw the line when it comes to underlying model weights and parameters that are the individual company’s valuable IP. Forcing the sharing of that kind of information sets a harmful precedent and would be difficult to compensate for. Is this the kind of thing you mean?
Tanja: Here it’s likely to be harder to land on a one-size-fits-all solution. As an economist I’d say the challenge with legislation in this area is that it is hard to arrive at economically meaningful legal distinctions between different types of data. Whereas some legal distinctions have been hard-coded into the EU Data Act, there is still an opportunity for economically meaningful distinctions to be drawn in the future implementing regulations for the Data (Use and Access) Act in the UK, and potentially also the EU Financial Data Access regulation (FIDA).
As you say, when it comes to model weights – ultimately also information in digital form – IP rights will likely kick in. It appears that some legislation might jar with that. Forcing openness by imposing regulation that potentially interferes with IP or database rights is clearly highly risky for incentives to invest in these in the first place.
Tom: Where do you see this going then? As the UK brings through firm proposals for an Open Finance Scheme, do you expect your framework for charging to be applied? And what about some of the other schemes, like digital markets, that are perhaps less sector-specific?
Tanja: I certainly hope the framework we created will be useful, yes.
Open data initiatives such as smart cities suggest there is huge scope for voluntary open data and long-term commercial partnerships between data holders and data recipients, enabling the sharing of risk and reward. A key distinction will be the presence or absence of market power or other market failures; where these are absent, government policy and regulation should facilitate rather than determine outcomes.
As it spans all sectors, smart data is unlikely to be a one-size-fits-all policy – and different use cases and sectors will come with different opportunities, challenges and risks.