Differences in the substantive application of Article 102 TFEU and the DMA concretized: ‘privacy policy tying’ under Article 102 TFEU versus the opt-in rule for data combination and cross-use in Article 5(2) of the DMA

Disclaimer: All opinions in this blog reflect the views of the author, not of the Dutch DPA.

In our digital markets, there are growing concerns that big tech firms use their dominant position to conduct practices that could be harmful to consumers and competitors. One of these practices is to present the consumer with a take-it-or-leave-it option before he or she is allowed to use a service: if you do not consent to the firm combining and cross-using your personal data from the service with data from other services, you are not allowed to use the service. Where firms use their dominant position to impose such practices, and where this makes their position even stronger so that the market ‘tips’, the regulator could be inclined to prohibit the practice on the basis of competition law or other forms of market regulation. In the following, I introduce this practice (I). Then, I explain that the mainstream way of approaching this practice under Article 102 TFEU has attracted criticism (II). I then show how this criticism could be addressed by another way of applying Article 102 TFEU, namely under the theory of harm of ‘privacy policy tying’ (III). Next, I explain that Article 5(2) of the DMA, which directly deals with the practice at hand, shows many similarities with the criticized approach under Article 102 TFEU (IV). Finally, I argue that the application of this provision does not address the criticism that has been raised, which could have negative consequences (V).

I. Introduction

Our increasingly digitized economy is becoming more and more data-driven. This means that firms that can collect the widest variety of data from (different) services, and that are able to combine or cross-use these data as quickly as possible, could have a competitive advantage. Data advantages can be used to develop services of superior quality, characterized by, for example, a high degree of personalization.

A concern is that big tech firms can use their dominant position to conduct practices that make the collection and use of data less demanding. One of these practices is to present the consumer with a take-it-or-leave-it option before allowing him or her to use a service: the consumer must consent to the firm collecting data from other services – and to these data being combined and cross-used in (other) services – before he or she can use the service. A simple example is Meta allowing you to use Facebook on the condition that it can collect your data via cookies on third-party websites and use these data to personalize your Instagram feed. There are concerns that this practice might trigger a feedback loop that results in an even stronger position for these large firms. Eventually, this could lead to a situation in which it is difficult for other firms to compete. In other words, there is a concern that the market ‘tips’ and the ‘winner takes all’. Where this is the case, several legislative frameworks could be used to prohibit making consent for the combination and cross-use of data from different services conditional upon the purchase of a service. One of them is Article 102 of the Treaty on the Functioning of the European Union (TFEU), which sanctions abuses of dominance; another is the Digital Markets Act (DMA).

II. The criticized Facebook v. Bundeskartellamt case

The only case in which the aforementioned practice has been assessed as an abuse of dominance is Facebook v. Bundeskartellamt. In that case, the German competition authority took a much-criticized approach that focused on the exploitative effects of the practice, i.e. its direct effects on the consumer. The Bundeskartellamt and several legal scholars argue that the practice amounts to an exploitative abuse, because its direct effect on the consumer is a diminishment of privacy protection (and a GDPR infringement): consumers have no reasonable choice to opt for less far-reaching data processing in order to use the service, which negatively affects the service’s quality.

Some scholars have criticized the approach in the Facebook v. Bundeskartellamt case. Their argument is based on the view that competition law requires more than the assessment of exploitative effects alone. According to this line of reasoning, the effects of the practice that indicate the exclusion of competitors should also be assessed (exclusionary effects). Where one cannot prove that as-efficient competitors are excluded, consumers who risk being abusively exploited can simply switch to competitors that offer services on competitive terms. Hence, only where exploitation is the result of conduct that harms competition should it be regarded as an abuse of dominance. In short, several scholars argue that both exploitative and exclusionary effects must be proven in order to claim that a practice has a net negative effect in terms of price, quality, choice and innovation.

That being the case, they criticize the Bundeskartellamt for not assessing how the practice that diminished consumers’ privacy protection – and that consequently led to the accumulation of data in the hands of the dominant firm – could have anticompetitively harmed other firms. In their eyes, minimal choice regarding data collection and lower quality in terms of privacy do not always amount to an overall harmful effect of the practice.

More specifically, they claim that although choice and some aspects of quality might be diminished, the practice could constitute pro-competitive behaviour. This could be indicated by, for example, other aspects of the product that increase in quality (personalization) and by lower (or even zero) prices, which could be used to compete with other firms. Consequently, prohibiting this potentially pro-competitive behaviour might disturb the market and lead to competitive disadvantages, which could result in less competition instead of more.

III. Privacy policy tying as an effects-based solution

In order to take this critique into account, one could develop and apply a new theory of harm to the practice, one that measures both exploitative and exclusionary effects. Such a theory of harm has been set out by Condorelli and Padilla as ‘privacy policy tying’.

To begin with, under this theory of harm, one must prove the exploitative effects of the practice by showing that the consumer was de facto coerced into purchasing the service bundled with consent for the combination and cross-use of data from different services (privacy policy tying). In other words, one must prove that the consumer had no choice to opt for less far-reaching data processing. This would require that the dominant undertaking has presented the consumer with a take-it-or-leave-it option concerning the data processing.

Further, one must prove the actual or potential foreclosure of competition (exclusionary effects). This assessment would have to rely on factors such as the strength of dominance, barriers to entry, network effects, the position of consumers, and direct evidence of foreclosure and of an exclusionary strategy. For example, in certain cases, economic evidence might indicate that the combination and cross-use of data made possible by the practice leads to data-driven network effects and higher entry barriers. These effects could then be used by the dominant undertaking to cross-subsidize its offerings in other markets, so that competitors are unable to exert competitive pressure on the dominant firm.

By showing that the practice is anticompetitive, one proves that consumers are prevented from switching to competitors that offer less personalized services. Such an analysis provides evidence that the consumer cannot reasonably act on the wish to disclose fewer data. In this situation, the practice deviates from normal competition and leaves the consumer unable to freely decide on his or her preferences, which is harmful. This might explain why the Advocate General, in his opinion in the Facebook v. Bundeskartellamt case that is now being addressed by the CJEU, found that market power could be a relevant factor in assessing whether the consumer provided free consent for the additional data processing.

Because the theory of privacy policy tying takes into account the effects of the practice on competitors, it provides a thorough, effects-based analysis of the conduct. It leaves space to acknowledge other important parameters of competition, such as positive effects on the price, quality and innovation of the services. As such, the theory of harm might address the critique from competition scholars set out in Section II. However, there might be a reason that authorities have not applied such a theory of harm: developing it could take considerable resources and time. This is exactly one of the main issues that the DMA intends to resolve. But does the DMA also take into account the critique of the reasoning in the Facebook v. Bundeskartellamt case?

IV. Article 5 (2) of the DMA

The DMA is a regulation that attempts to improve the fairness and contestability of digital markets. It does so by addressing the business behaviour of so-called gatekeepers and the core platform services they offer, such as online social networking services and search engines. This implies that – despite its closeness to competition policy – the DMA can be described as sector-specific regulation that is applied asymmetrically.

A major difference with competition law is that the DMA is enforced via ex ante behavioural regulatory provisions, instead of ex post abuse control. One provision specifically concerns the practice addressed in this blog: Article 5(2) of the DMA. This provision stipulates that gatekeepers may never make consent for the combination and cross-use of data conditional upon the purchase of a core platform service. Gatekeepers must ask for separate consent in order for this additional processing to be allowed. This means that the consumer must always have the choice to opt for less far-reaching data processing in order for the consent for using the service to be valid.

While this provision does not require an ex post economic assessment – as is the case when Article 102 TFEU is applied – it shows many similarities with the behavioural remedy imposed upon Meta in the Facebook v. Bundeskartellamt case. According to the Commission and scholars, this remedy was a source of inspiration when drafting Article 5(2) of the DMA. However, from the discussion that followed the Facebook v. Bundeskartellamt case, we now know that the characterization of the prohibited conduct as anti-competitive has been contested. This means that the EU legislator has chosen to prohibit behaviour that could be regarded as pro-competitive when assessed under Article 102 TFEU under the theory of privacy policy tying.

V. Food for thought

As previously stated, there are incoherencies in the substantive application of the legal frameworks of Article 102 TFEU and Article 5(2) of the DMA. This could have negative consequences. As we have seen, applying Article 5(2) of the DMA could mean that practices that are potentially pro-competitive are sanctioned by default. While this might be a choice that allows the regulator to act more quickly, there is a risk of overregulation. Overregulation might distort competition where other market players benefit from it. In that case, it is not the market conduct of firms that leads to less competition, but the conduct of the regulator. This would not happen as easily where Article 102 TFEU is applied – at least under a theory of privacy policy tying.

This blog post is intended to create awareness of incoherencies in the substantive application of Article 102 TFEU and Article 5(2) of the DMA. Differences in substantive application to similar practices of firms in digital markets might create legal uncertainty – something we do not want in our digital markets. However, where one understands the nature of these incoherencies, one could prevent the legal frameworks from being used sub-optimally. Let’s keep this in mind when we start regulating our digital markets.

Author: Brend Plantinga

Brend Plantinga is a lawyer at the Dutch Data Protection Authority (Autoriteit Persoonsgegevens). He completed a cum laude LLM in Law and Technology and an LLM in Law & Economics at Utrecht University. His areas of academic interest are data protection law, other frameworks that regulate digital markets (such as the DMA, the DSA and competition law), and algorithmic supervision.
