The EU Digital Markets Act: A Possible Game Changer in Efforts to Regulate Platforms

When the Commission announced in its 2015 Digital Single Market Strategy that it would look into establishing a “fit for purpose regulatory environment for platforms and intermediaries”, it promised little more than a review of various pre-existing rules. The situation changed dramatically with the 2020 “Shaping Europe’s Digital Future” Communication, which announced a new Digital Services Act as well as the introduction of “ex ante rules to ensure that markets characterised by large platforms with significant network effects acting as gatekeepers, remain fair and contestable for innovators.”

The promise to introduce ex ante rules for gatekeeping platforms was little short of revolutionary. Ex ante rules, hitherto used only in telecommunications law, are a hybrid form of competition law, whereby a designated authority (normally a national telecoms authority) identifies actors with significant market power in danger of violating competition rules and imposes remedies on them in advance. Such rules are, by their very nature, asymmetric (as they do not apply equally to all providers) and sector-specific (as they apply only to telecoms). It is worth noting that the Commission’s plan had always been to gradually reduce the need to rely on sector-specific regulation and have regular competition rules cover the issue. Although this has not happened yet (the 2018 European Electronic Communications Code maintains a model that essentially dates to the late 1990s), the number of markets the rules apply to has gradually been reduced over the years.

The newly proposed Digital Markets Act (DMA, not to be confused with the proposed Digital Services Act, which is the revision of the E-Commerce Directive) is, in essence, an ex ante (since obligations are imposed in advance and not as a reaction to past violations) and sector-specific (since only certain platforms are included) instrument. This is interesting both because it offers an original solution to a burning problem of platform regulation and because, never having been attempted before, it would bring a host of new challenges. Equally importantly, unlike telecommunications law, which charges national regulatory agencies with enforcement, the DMA places the whole process mostly on the shoulders of the Commission, creating centralization and uniformity but losing flexibility and potentially raising questions about subsidiarity.

This article is an attempt to outline the main operation of the DMA.

1 Subject Matter and Scope

The proposed Regulation is based on the idea that some platforms (gatekeeping platforms) have a major impact on digital markets, creating dependencies and the potential for unfair behavior. The Commission had already ventured into this field with the sector-specific 2019 Regulation on platform-to-business trading practices, the present Proposal going both deeper and wider. Nevertheless, the Proposal applies only to “core platform services” provided by “gatekeeping” platforms. Such platforms are defined using a combination of qualitative assessment and quantitative metrics, the main idea being that a designated platform would be required to comply with a number of obligations procedurally imposed and controlled by the Commission and removed once the need for them has disappeared.

The Commission’s main point, similar to the reasoning used in telecoms regulation, is that competition law currently cannot address the gatekeeper problem (or cannot do so effectively) but that the new law can complement existing competition law.

The Commission examined several policy options (all requiring EU-level enforcement) which eventually crystallized into three. They range from fixed to flexible:

  • pre-defined list of gatekeepers
  • partially flexible frameworks
  • flexible option based on qualitative thresholds

The Commission chose the second option, containing a closed list of core services with a combination of qualitative and quantitative criteria for provider designation coupled with some direct obligations.

2 What are Gatekeepers?

As per Article 1, the Regulation applies to “core platform services” provided by gatekeepers to business users or end users. The core services are specifically listed in Article 2(2) as:

  • online intermediation services
  • online search engines
  • social networking
  • video-sharing platforms
  • number-independent interpersonal telecoms services
  • operating systems
  • cloud services
  • advertising services

The definition seems to be wide and encompasses most modern platforms.

Furthermore, the Proposal applies to “gatekeeping platforms”, which are those core platform providers that fulfill three cumulative qualitative criteria (Article 3(1)), qualified by quantitative elements (in brackets below):

  • have a significant impact on the internal market (presumed to exist if annual EEA turnover exceeded €6.5 billion in the last three financial years or market capitalisation exceeded €65 billion, and the services are provided in at least three Member States)
  • operate one or more important gateways to customers (presumed to be the case if the service had over 45 million monthly active end users in the EU and over 10,000 yearly active business users in the last financial year)
  • have (or will foreseeably have) an entrenched and durable position in their operations (presumed to exist where the thresholds from the previous point were met in each of the last three financial years)

Fulfilling the conditions does not in itself trigger sanctions but only opens the possibility that the Commission, if satisfied that a risk exists, may impose the obligations through the procedure regulated in the Proposal and actually designate a provider as a gatekeeper within a specific core service, i.e. not in general (Google could, for example, be designated as a gatekeeper for its search engine business but not for its office application suite).

The presumptions are rebuttable (Article 3(4)). Furthermore, Article 3(5) allows the Commission to adjust the methodology for determining whether the thresholds are met, an approach already known from the telecoms sector. The Commission may, as per Article 4, review its decision at any point.

It is clear that both the qualitative and quantitative criteria normally need to be met, but the Commission is entitled, under Article 3(6), to designate as a gatekeeper a provider which does not meet the quantitative thresholds, taking into account elements such as size, number of users, entry barriers, economies of scale, user lock-in effects and other structural characteristics. It is therefore difficult to think of the mechanism as providing a true “de minimis” rule, since a designation can be made even if the quantitative criteria are not met.

In terms of the scope, the Regulation applies to services offered to businesses established in the EU or end users established or located in the EU. It is irrelevant where gatekeepers are located. This extends the field of application to all services normally offered by non-EU platforms.

The Regulation does not apply to general telecoms services, although it does apply to number-independent telecoms services such as messengers. The Proposal is intended to be a full harmonization measure within its scope (Article 1(4)). Furthermore, it does not prevent the application of EU or national competition law.

3 Obligations Imposed

For each of the gatekeeper’s core platform services designated by the Commission, a set of ‘hard’ and ‘soft’ obligations exists (Articles 5-6, see Appendix below), coupled with corrective mechanisms (Articles 7-10). While the former are binding directly, the latter can be “further specified” by the Commission. The seven ‘hard’ obligations (Article 5) are essentially aimed at preventing gatekeepers’ anticompetitive behavior towards business users. The eleven ‘soft’ obligations (Article 6) are oriented towards both business and end users. The measures, which need to be “effective”, also need to comply with the GDPR and with consumer protection, product safety and cybersecurity laws. Gatekeepers are given the option of arguing, under Article 8(1), that the measures of Articles 5 and 6 endanger “the economic viability of the operation” and should, to that extent, be temporarily suspended. Such suspensions are, however, reviewed yearly. Gatekeepers may also be exempted (Art. 9) for reasons of public morality, health or security.

Particularly significant is the Commission’s ability (Art. 10) to update the obligations of Articles 5 and 6 when, following a market investigation (Art. 17), it finds a need for new obligations addressing practices that limit the contestability of core platform services or are unfair in the same manner as those covered by Arts. 5-6. This effectively means that the lists in Articles 5 and 6 are non-exhaustive. In that sense, a practice is “considered to be unfair or limit the contestability of core platform services” where the gatekeeper is “obtaining an advantage from business users that is disproportionate to the service provided” or where “the contestability of markets is weakened as a consequence of such a practice”.

The procedure for designating a provider of core services as a gatekeeper is reminiscent of the similar procedure in the telecoms sector. The Commission opens a market investigation (Article 15) following an “advisory procedure” (Art. 32), the result of which may be a decision pursuant to Article 3(7) designating a platform as a gatekeeper. If a gatekeeper does not yet enjoy an entrenched position but foreseeably may in the future, only a limited set of obligations may be imposed. Three or more Member States may request that the Commission initiate an investigation (Article 33).

Where systematic non-compliance exists (Article 16), “any behavioural or structural remedies which are proportionate to the infringement committed and necessary to ensure compliance” may be imposed. A systematic infringement is required (presumed to exist if three non-compliance decisions/fines were issued), coupled with a finding that the gatekeeper “has further strengthened or extended its gatekeeper position” (presumed to exist if its impact on the internal market or its importance as a gateway has further increased). While this may seem harsh, in reality it is little more than an indication that regular competition law may be applicable in addition to the sector-specific measures. Structural remedies, however, may be imposed only where there is “no equally effective behavioural remedy or where any equally effective behavioural remedy would be more burdensome”.

4 Procedure and Enforcement

The Commission is given significant procedural powers, ranging from the ability to request information (Article 19) to the ability to carry out interviews (Article 20) or conduct on-site inspections (Article 21), the latter similar to the “dawn raids” known from traditional competition law. Interim measures are also available (Article 22), though only in the context of proceedings already initiated.

Article 23 offers gatekeepers the possibility to offer commitments and end further proceedings. The Commission may accept or reject such commitments and may reopen proceedings if they are later deemed insufficient. The Commission itself is tasked with monitoring the obligations (Article 24) and may adopt a non-compliance decision (Article 25). Fines not exceeding 10% of the gatekeeper’s total turnover in the preceding financial year may be imposed for intentional or negligent failure to comply with substantive obligations (taking gravity, duration and recurrence into account), and fines not exceeding 1% for failures to comply with procedural obligations. Since the fines can be calculated taking account of the turnover of the members of an association of undertakings, they can be collected from these members in cases of insolvency. Periodic penalty payments not exceeding 5% of the average daily turnover in the preceding financial year, per day, may be imposed (Article 27) in order to compel gatekeepers to comply with certain important obligations (in cases of systematic non-compliance, refusal to give information, etc.).

The Commission is assisted by a Digital Markets Advisory Committee.

The Court of Justice of the European Union (CJEU) has unlimited jurisdiction to review decisions by which the Commission has imposed fines or periodic penalty payments and may cancel, reduce or increase the fine or periodic penalty payment imposed.

5 A New Way Forward

While some concerns remain (the novelty of the approach, the lack of a true “de minimis” rule, the enforcement burden moved to the Commission), there are several reasons why the proposed Regulation, if adopted, may bring a fundamental change in the attitude toward platform regulation.

First, the Proposal, though sector-specific, is content-neutral. In other words, it applies to any situation where gatekeeping platforms providing core services have a major impact on the market and create dependencies. It can be deployed as effectively in situations involving fake news as in those involving defamation or IP infringement. Second, the Proposal relies heavily on well-tested mechanisms from the telecoms sector and repeats, with some minor adjustments, the designation and enforcement structures of the telecoms world. These have proven to be relatively effective. This brings certainty in enforcement and a degree of predictability for platforms. Third, the Proposal has multiple safeguards built in, including the duty to hear platforms during proceedings, the ability to impose less burdensome obligations and the power to appeal decisions to the CJEU. Finally, the fines structure and the enforcement mechanisms, which come from the competition law world, coupled with the sector-specific and ex ante nature of the designation mechanism, bring potentially game-changing efficiency which competition law largely lacks.

It is difficult to escape the feeling that Western democracies are incapable of regulating large platforms. It is equally clear that these platforms, while bringing significant benefits, present insurmountable problems that affect core values of our societies (democracy, free speech, gender equality, privacy, consumer protection). The suggested new approach has the potential to be a valuable tool in the fight to prevent the dystopian future so aptly presented in literature and cinema and slowly becoming a reality.

Appendix: ‘Hard’ and ‘Soft’ Obligations

Article 5 Obligations for gatekeepers

In respect of each of its core platform services identified pursuant to Article 3(7), a gatekeeper shall:

(a) refrain from combining personal data sourced from these core platform services with personal data from any other services offered by the gatekeeper or with personal data from third-party services, and from signing in end users to other services of the gatekeeper in order to combine personal data, unless the end user has been presented with the specific choice and provided consent in the sense of Regulation (EU) 2016/679.
(b) allow business users to offer the same products or services to end users through third party online intermediation services at prices or conditions that are different from those offered through the online intermediation services of the gatekeeper;
(c) allow business users to promote offers to end users acquired via the core platform service, and to conclude contracts with these end users regardless of whether for that purpose they use the core platform services of the gatekeeper or not, and allow end users to access and use, through the core platform services of the gatekeeper, content, subscriptions, features or other items by using the software application of a business user, where these items have been acquired by the end users from the relevant business user without using the core platform services of the gatekeeper;
(d) refrain from preventing or restricting business users from raising issues with any relevant public authority relating to any practice of gatekeepers;
(e) refrain from requiring business users to use, offer or interoperate with an identification service of the gatekeeper in the context of services offered by the business users using the core platform services of that gatekeeper;
(f) refrain from requiring business users or end users to subscribe to or register with any other core platform services identified pursuant to Article 3 or which meets the thresholds in Article 3(2)(b) as a condition to access, sign up or register to any of their core platform services identified pursuant to that Article;
(g) provide advertisers and publishers to which it supplies advertising services, upon their request, with information concerning the price paid by the advertiser and publisher, as well as the amount or remuneration paid to the publisher, for the publishing of a given ad and for each of the relevant advertising services provided by the gatekeeper.

Article 6 Obligations for gatekeepers susceptible of being further specified

  1. In respect of each of its core platform services identified pursuant to Article 3(7), a gatekeeper shall:
    (a) refrain from using, in competition with business users, any data not publicly available, which is generated through activities by those business users, including by the end users of these business users, of its core platform services or provided by those business users of its core platform services or by the end users of these business users;
    (b) allow end users to un-install any pre-installed software applications on its core platform service without prejudice to the possibility for a gatekeeper to restrict such un-installation in relation to software applications that are essential for the functioning of the operating system or of the device and which cannot technically be offered on a standalone basis by third-parties;
    (c) allow the installation and effective use of third party software applications or software application stores using, or interoperating with, operating systems of that gatekeeper and allow these software applications or software application stores to be accessed by means other than the core platform services of that gatekeeper. The gatekeeper shall not be prevented from taking proportionate measures to ensure that third party software applications or software application stores do not endanger the integrity of the hardware or operating system provided by the gatekeeper;
    (d) refrain from treating more favourably in ranking services and products offered by the gatekeeper itself or by any third party belonging to the same undertaking compared to similar services or products of third party and apply fair and non-discriminatory conditions to such ranking;
    (e) refrain from technically restricting the ability of end users to switch between and subscribe to different software applications and services to be accessed using the operating system of the gatekeeper, including as regards the choice of Internet access provider for end users;
    (f) allow business users and providers of ancillary services access to and interoperability with the same operating system, hardware or software features that are available or used in the provision by the gatekeeper of any ancillary services;
    (g) provide advertisers and publishers, upon their request and free of charge, with access to the performance measuring tools of the gatekeeper and the information necessary for advertisers and publishers to carry out their own independent verification of the ad inventory;
    (h) provide effective portability of data generated through the activity of a business user or end user and shall, in particular, provide tools for end users to facilitate the exercise of data portability, in line with Regulation EU 2016/679, including by the provision of continuous and real-time access ;
    (i) provide business users, or third parties authorised by a business user, free of charge, with effective, high-quality, continuous and real-time access and use of aggregated or non-aggregated data, that is provided for or generated in the context of the use of the relevant core platform services by those business users
    and the end users engaging with the products or services provided by those business users; for personal data, provide access and use only where directly connected with the use effectuated by the end user in respect of the products or services offered by the relevant business user through the relevant core platform service, and when the end user opts in to such sharing with a consent in the sense of the Regulation (EU) 2016/679;
    (j) provide to any third party providers of online search engines, upon their request, with access on fair, reasonable and non-discriminatory terms to ranking, query, click and view data in relation to free and paid search generated by end users on online search engines of the gatekeeper, subject to anonymisation for the query, click and view data that constitutes personal data;
    (k) apply fair and non-discriminatory general conditions of access for business users to its software application store designated pursuant to Article 3 of this Regulation.
  2. For the purposes of point (a) of paragraph 1 data that is not publicly available shall include any aggregated and non-aggregated data generated by business users that can be inferred from, or collected through, the commercial activities of business users or their customers on the core platform service of the gatekeeper.

AG Opinion on YouTube: No New Developments, Only A Summary of Current EU Law on Intermediaries

The EU regime on intermediary liability is a relatively complex set of EU and national laws and court practice that has developed over a number of years. While the basic position in the EU, just like in the USA, has been that bona fide intermediaries are not liable if they do not have actual knowledge of the infringing activity and act upon receiving such knowledge, a number of questions remain unclear and have been the target of both CJEU case law and the Commission’s efforts to reform EU law. Most prominent among the latter are the Copyright in the DSM Directive, which changes the liability regime for content-hosting platforms, and the Digital Services Act, currently in the drafting stage, which would potentially change the general regime on intermediary liability.

Among the questions receiving the most attention has been the liability of platforms for illegal user-posted content. In the YouTube case (C-682/18), the question referred to the Court was whether an “operator of an internet video platform on which videos containing content protected by copyright are made publicly accessible by users without the consent of the rightholders carr[ies] out an act of communication within the meaning of Article 3(1) of Directive 2001/29/EC”. In today’s Opinion, AG Øe deals with a number of interesting points.

The case is important primarily because it clarifies and summarizes the EU position on intermediary liability prior to the application of the new Copyright in the DSM Directive, which has come into force but does not apply before June 2021. It is worth noting that the basic position for content-carrying platforms in the post-DSM world is governed by Article 17(4) of the DSM Directive, which provides that platforms which have not entered into an agreement granting them authorisation are liable for acts of communication to the public where they have not made best efforts to ensure the unavailability of works the rightholders have notified to them and have not acted expeditiously to remove or block content.

The first point of interest is the confirmation that platforms are not directly liable (since they are not the primary posters) and do not engage in acts of communication to the public. An act of automatic classification or other similar identifying act is not an act of communication as long as there is no selection or determination of content. It is worth noting that no detailed discussion of the required level of platform engagement is provided here; it is not clear at what point a platform that partially classifies content might be deemed liable.

Second, the AG points out that EU law does not govern secondary liability – which is the liability for facilitating illegal acts of others (see, for comparison, the US Grokster case). Such liability is entirely governed by national law.

Third, the Opinion confirms the basic EU distinction between active and passive sites. In that sense, an active role, which gives platforms knowledge of and control over the content, removes the exemption from liability. A truly passive role, on the other hand, grants the benefit of the exemptions of Articles 12-15 of the E-Commerce Directive. The active/passive distinction has long been criticized, primarily because its rigidity does not reflect the complexity of modern digital services. The DSM Directive resolves the problem by circumventing the discussion entirely and concentrating, instead, on the removal mechanisms (filtering) for the platforms presumed to be in a position to have access to such mechanisms, irrespective of their activity level. While this intervention affects only one kind of platform (those hosting user-posted content), in practice it means a change in the regime for a wide range of platforms.

The position thus obtained means that passive intermediaries have access to the exemption for all types of liability, irrespective of whether they are targeted by primary (as posters) or secondary (as facilitators) liability. The ECD requires “actual knowledge” or that the intermediary be aware of the infringement, but this refers to specific information. Had this not been the case, i.e. had a mere indication been sufficient, intermediaries would be incentivized to monitor and remove more vigorously.

In summary, active intermediaries do not benefit from exemptions and are subject to liability. Passive intermediaries do benefit from it but lose it upon receipt of specific information.

Finally, injunctions are always available to rightholders, irrespective of whether the intermediaries are liable or not. The scope and effect of such injunctions are determined by national law.

In summary, the YouTube Opinion brings little if anything truly new and represents, instead, a welcome clarification of the current CJEU law on intermediary liability. Its main idea – that intermediaries without actual knowledge are not liable – has long been repeated in the CJEU’s cases in one form or another. The important question which remains, however, is the operation of the Copyright in the DSM Directive and the many unknowns that remain in terms of authorization agreements, filtering in the absence of such agreements and how the stakeholder dialogues might play out. In our opinion, the justifiably criticized Copyright in the DSM Directive profoundly disrupts the balance between the E-Commerce and InfoSoc Directives and creates a hybrid and untested regime which will require years of interpretation. In any case, after June 2021, the EU will have two distinct liability regimes: one applicable to content-sharing platforms and governed by the DSM Directive, and the other applicable to all other cases and governed by the principles summarized above.

The Department of Justice’s Review of Section 230: Where Does the EU Stand?

The US Department of Justice published today its proposal for a review of Section 230 of the 1996 Communications Decency Act. This proposal is not the only document to have come out today (see Senator Hawley’s somewhat unrealistic idea here) but it is the official US government position. Section 230 – which insulates intermediaries against liability for content posted by third parties – is arguably among the most important provisions ever drafted for the Internet. Its review will not only have worldwide impact (thanks to the global presence of US Internet companies) but will also influence the EU’s own attempt, currently under way, to revise its regime.

It is worth mentioning that President Trump’s earlier problematic Executive Order, which also targets Section 230, is not the subject of this post (although it detracts from the possible real need to reform Section 230).

Section 230

S.230 CDA is the US law giving intermediaries protection from liability for publishing third-party content. Section 230(c)(1) essentially says that no internet intermediary shall be treated as the publisher of content posted by third parties. At the same time, the “Good Samaritan” provision of Section 230(c)(2) gives immunity to providers who voluntarily take good-faith action to remove “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable” content, whether or not such content otherwise enjoys constitutional protection. This provision protects content-moderation decisions irrespective of their motives, as long as they are taken in “good faith”.

It is important to note that the protection of S.230(c)(1) applies irrespective of whether the defendant intermediary knew about the content, whether it acted in good faith and whether it was neutral or not. The main idea of the section is to eliminate frivolous lawsuits that would otherwise target intermediaries, who are not only frequently better placed financially to respond but are also a known target, as opposed to the often anonymous posters. The system thus created has been fairly robust, with a considerable body of cases to support it.

While S.230 covers regular speech, a different provision – S.512 of the 1998 Digital Millennium Copyright Act (DMCA) – applies to copyright, essentially protecting bona fide intermediaries from copyright infringement lawsuits. Unlike S.230, S.512 requires that intermediaries lack knowledge of the infringements and expeditiously remove the content upon obtaining that knowledge. This provision too is the subject of criticism and calls for a review (on its effectiveness see here).

Somewhat surprisingly, both Democrats and Republicans are arguing for modifying or even revoking Section 230, although for opposite reasons. Joe Biden has suggested that it should be revoked completely, while Bernie Sanders said that S.230 was written “before the current era of online communities, expression, and technological development” and that large profitable corporations should be held responsible. Republicans, meanwhile, claim that platforms are systematically biased against them (although no evidence for this exists) and have suggested measures to curb what they call censorship. Both political groups show a staggering lack of understanding of the underlying reasons for, and the actual operation of, Section 230, as evidenced not only in flawed proposals but also in a multitude of contradictory statements in the press.

Department of Justice Review

The DOJ document is not a proposal in itself but a draft document outlining its position. The document is based on four principles:

  • large tech platforms are no longer nascent or fragile and are not in need of protection
  • S.230 has been abused by large platforms to maintain their dominant position
  • core immunity for defamation needs to be preserved to foster free speech
  • hosting defamatory content needs to be distinguished from enabling criminal activity

Having these in mind, the document indicates four areas for reform:

  1. Incentivising Online Platforms to Address Illicit Content. Here, “Bad Samaritans” would lose immunity. This includes actors who purposefully facilitate criminal activity or material, but also those who purposefully “blind themselves and law enforcement to illicit material”. Separate carve-outs exist for child abuse, terrorism and cyber-stalking, as well as for actual knowledge or court judgments. The latter is a clear departure from S.230, which provides immunity even where actual knowledge exists.
  2. Clarifying Federal Government Enforcement Capabilities to Address Unlawful Content: clarifying that federal civil enforcement actions are not covered.
  3. Promoting Competition: clarifying that Federal antitrust claims are not covered.
  4. Promoting Open Discourse and Greater Transparency: this would seek to replace the words “otherwise objectionable” in Section 230(c)(2) with “unlawful” and “promotes terrorism.” Further to that, a statutory definition of “good faith” would be provided.

The document proposes both more and less content moderation at the same time.

On the one hand, the DOJ wants the removal of immunity for “Bad Samaritans” in situations involving federal criminal law. The effect of this provision would be to increase liability for sites that do not remove material. The material the DOJ wants moderated includes not only terrorist content, child pornography and cyber-stalking but also any activity that violates federal criminal law, together with “purposeful blindness” in relation to such material. The main danger here is the proliferation of frivolous cases in which removal is demanded without likelihood of success but with a view to forcing the platform to act upon request.

On the other hand, the proposal would significantly change the “Good Samaritan” provisions. The proposal claims that “the new statutory definition would limit immunity for content moderation decisions to those done in accordance with plain and particular terms of service and consistent with public representations. These measures would encourage platforms to be more transparent and accountable to their users.” The document, in tying removal to terms and conditions and in removing the “otherwise objectionable” wording, aims to limit the circumstances under which platforms can moderate content (in line with President Trump’s ideas about Twitter and Facebook). The new setup would mean no protection in cases where platforms remove content that is anything but directly unlawful. In other words, if a platform – exercising its First Amendment rights – removes content which is not directly unlawful but is, in its view, objectionable (for instance because it is untruthful), it would lose the protection of S.230. This dramatically increases the instances in which platforms would expose themselves to liability.

The contradictory directions in which the proposal is going are confusing. The First Amendment implications of the removal of the “otherwise objectionable” wording are obvious and will be debated extensively. While vague bipartisan support for the reform of S.230 exists, any proposal would first have to be introduced in Congress and then pass both the Democrat-controlled House of Representatives and the Republican-controlled Senate. This is not likely at present.

European Law on Intermediaries Today

European rules on intermediary liability, Articles 12-15 of the 2000 E-Commerce Directive (ECD), are slightly newer than their American counterparts. Unlike in the US, there is no separation between copyright and all other cases – everything is covered by the same set of rules. The text is somewhat simpler and more direct.

The main idea is that information society service providers are not liable in cases where they are mere conduits (Article 12 ECD), where they are caching (Article 13) and where they are hosting (Article 14) material, provided certain conditions are met. On top of that, no general obligation to monitor exists (although monitoring to remove specific illegal content is allowed).

Of particular interest is Article 14 which insulates intermediaries from liability in cases where:

  • the provider does not have actual knowledge of illegal activity or information and, as regards claims for damages, is not aware of facts or circumstances from which the illegal activity or information is apparent; or
  • the provider, upon obtaining such knowledge or awareness, acts expeditiously to remove or to disable access to the information.

Unlike S.230, Article 14 on hosting requires the absence of actual knowledge as well as expeditious removal upon obtaining such knowledge. Importantly, however, where US law singles out the “Good Samaritan” platforms, EU law has traditionally been based on the distinction between active and passive platforms and says nothing about “Samaritans”, good or otherwise.4 The more active a platform, the less likely it is to enjoy the protection of Article 14. While the CJEU clarified in L’Oréal v eBay that moderation in itself does not automatically bring liability, it did say that “optimising the presentation” of offers might constitute such an active role. The European Court of Human Rights has been stricter still in its Delfi line of cases, suggesting that a platform’s moderation can entail liability.4

The cases interpreting Articles 12-15 have been relatively numerous and have significantly changed the operation of these provisions, in particular in copyright cases. In spite of that, Articles 12-15 are among the most stable and least controversial of EU digital laws, with the Commission, at least superficially, still arguing for their preservation.

Proposals for Changes in the EU

While sporadic comments on the need to change Articles 12-15 have occasionally been made, the first indication that a more serious reform was in view came in the 2015 Digital Single Market Strategy, which indicated that slow removal of illegal content might necessitate “new measures”.

The next step was made in the highly flawed and controversial Copyright in the DSM Directive, which essentially provides that an online content-sharing service provider performs an act of communication to the public and cannot benefit from Article 14 protection unless it either has a valid agreement with the rightholders or employs “high industry standards of professional diligence” and “best efforts” to ensure the removal of works for which the rightholders have provided information.

In 2020, the new Digital Strategy was passed. It directly promises to look into the “responsibilities of online platforms and information service providers” and to “reinforce the oversight over platforms’ content policies”. While no indication exists of what the draft Digital Services Act (coming later in 2020) might contain, the recent Inception Impact Assessment papers give a somewhat clearer picture. They indicate that the EU is considering several policy options.

The first is to essentially maintain the present regime, with the E-Commerce Directive as the main instrument and the Recommendation on the Illegal Content, Copyright in the DSM Directive, the AVMSD and the Terrorist Content Regulation as the sector-specific measures. The second would be a relatively limited intervention to make the procedural obligations of the 2018 Recommendation on illegal content binding. The third is to make a more comprehensive change, modernising the E-Commerce Directive. In the Commission’s own words this would

clarify and upgrade the liability and safety rules for digital services and remove disincentives for their voluntary actions to address illegal content, goods or services they intermediate, in particular in what concerns online platform services. Definitions of what is illegal online would be based on other legal acts at EU and national level.

It would also mean “specific, binding and proportionate obligations, specifying the different responsibilities in particular for online platform services”. Significantly, “further asymmetric obligations” might be needed. The asymmetry referred to here means the difference between types and sizes of platforms. In other words, not all rules would apply to all platforms. At this point a list of specific obligations is introduced:

  • harmonised obligations to maintain ‘notice-and-action’ systems covering all types of illegal goods, content, and services, as well as ‘know your customer’ schemes for commercial users of marketplaces
  • rules ensuring effective cooperation of digital service providers with the relevant authorities and ‘trusted flaggers’ (e.g. the INHOPE hotlines for a swifter removal of child sexual abuse material) and reporting, as appropriate
  • risk assessments could be required from online platforms for issues related to exploitation of their services to disseminate some categories of harmful, but not illegal, content, such as disinformation
  • more effective redress and protection against unjustified removal for legitimate content and goods online
  • a set of transparency and reporting obligations related to these processes
  • transparency, reporting and independent audit obligations to ensure accountability with regard to algorithmic systems and allow better oversight.

Of particular interest is the idea that “gatekeeping” platforms might have to be subject to ex ante rules. This is in line with the indication that asymmetric rules might be needed. The ex ante regime is presently applied in EU telecommunications law, where certain ideas from competition law (such as significant market power) are taken and used to impose remedies on market actors in danger of violating competition rules. Not only are these remedies applied asymmetrically – to some actors and not to others – but they are also imposed before a violation occurs (to prevent it).

Three policy options are considered here. The first is to revise the horizontal framework set in the Platform-to-Business Regulation. The second is to adopt a horizontal framework empowering regulators to collect information from large online platforms acting as gatekeepers. The third and most interesting is the potential introduction of an ex ante regulatory framework. This, in turn, would have two sub-options. The first would be a black list of prohibited practices. The second would be the “adoption of tailor-made remedies addressed to large online platforms acting as gatekeepers on a case-by-case basis where necessary and justified”. “Platform-specific non-personal data access obligations, specific requirements regarding personal data portability, or interoperability requirements” are given as examples of remedies.

None of the options are mutually exclusive.

Concluding Remarks

In the view of this author, the Commission’s attempt to reform the E-Commerce Directive could prove more focused and less problematic than its US counterpart in each of the scenarios outlined above. Whether this is the case depends mainly on the ability to preserve Articles 12-15, which have proved robust. In our view, the Copyright in the DSM Directive’s attempt to water down this protection in its Article 17 was misguided, and that article should be removed for reasons that have been extensively debated in the literature.

The use of the term “responsibility” in this and a number of other documents might suggest a desire to limit the proliferation of illegal content, but it is, in the view of this author, vague and problematic. That some platforms (the ‘large’ ones) act illegally may seem superficially obvious and may elicit calls for intervention and more active behaviour, but intermediaries are still predominantly just that – intermediaries. They are usually accused of inertia in removing content, illegal or otherwise, not of political or economic bias. The fact that a whole set of tools from the arsenal of copyright, competition, criminal, administrative and tax law exists – and is not used – should limit the EU’s desire to add to the arsenal. Nevertheless, we believe that two factors are important and may determine the success of future EU rules.

First, the EU is attempting to achieve the move towards more “responsible” platforms in significantly different ways than the US. Rather than relying on an S.230 equivalent, or attempting an omnibus provision to replace the ECD, it has passed a number of laws and soft laws on illegal, terrorist and otherwise problematic content. While this may appear more flexible and avoid big political clashes, it also ushers in a specific form of rule-by-decree, where recommendations (with threats of further legislative action) are used to force platforms into more responsible behaviour. If the recommendations are turned into directives and regulations with proper democratic and regulatory oversight (and this is one of the policy options), this problem disappears and the flexibility of the modular solution remains. Put in different terms, since the reality is complex, the laws need to be complex and specific too.

Second, the suggestion that ex ante sector-specific asymmetric remedies might be applied to gatekeeping platforms is original and potentially capable of solving the problems arising from disparities in platform size, type, purpose and business model. The danger is that rules so drafted have not been tested in anything but the telecoms sector (where the EU has several decades’ experience) and would need careful drafting and even more careful monitoring.

In our view, what is needed is evidence-supported, fact-based, sector-specific intervention, with the use of experimental methods in cases where everything else fails. Not only does this preserve the immensely important liability insulation, but it also achieves the specific goals when and where needed.

  1. Trump’s order came about after Twitter marked two of his posts with their fact-check stamp. The First Amendment of the US Constitution protects against government attempts to abridge the freedom of speech but also protects those private companies’ moderation as a form of speech. Section 230 allows content moderation making acts such as Twitter’s lawful. ↩︎

  2. This is a vague reference to the fact that platforms are of different size and impact. See below for possible EU solution to this problem. ↩︎

  3. In view of this author this is wrong and confuses dominance and abuse thereof – which may or may not be an issue – with the abuse of Section 230. ↩︎

  4. While this was also used in the US, it was made more sophisticated in court. ↩︎

More on the Rise of Robots: Why Regulators Should Help Spread Robotics and Why We Ought to Embrace Robots

if we do not have a clear idea what problems the ‘robot laws’ are supposed to solve, we should almost certainly not have any robot laws

Last time, I looked at AI and robots in general and concluded that fear – including nonsensical statements about killer robots – has been the dominant paradigm through which humans have seen robotics and artificial intelligence. I also concluded that the EU’s new policy on AI contains some useful approaches but may fall victim to that same fear. In this brief addendum, I will look more closely at robots and argue that the same reasoning applies to them and that a more courageous approach – in which law can play a positive role – can be taken.

Unlike AI, which is perceived as pervasive but is poorly understood, robots form a clearer picture in the popular mind. It may then come as a surprise that the EU has little to nothing to say about them. While large sections of the AI policy papers discussed last time apply to robotics, clear policy statements and visions about robotics are absent. Instead, one finds funding initiatives, a base for knowledge-sharing and cooperation, and a flagship initiative on robotics. None of these amount to a coherent policy.

Central to the debate about robots and the positive role they are to play is the question of why (not how) regulation should step in. A typical misconception is that lawmakers need to solve ethical questions in order to improve daily life. (Often bundled with that are issues such as legal personality for robots). According to this view, the problem is the lawmakers’ poor understanding of the technology and their lack of ability to make critical decisions about ethics. I would argue that ethics, debates about legal personality or liability for rampaging robots have little to do with the problem and are distracting from the broader picture. Robots are, simply put, not the killer machines of our imagination. Nothing illustrates this more than the latest health crisis.

In the midst of the coronavirus crisis, a simple fact – that robots do not get sick – was overlooked. Robots perform a vast and ever-increasing number of tasks, conveniently eliminating humans where no humans should stand. In a post-pandemic economy, robots have the ability to fill in the gaps where humans are not allowed or not able to interact. Robots serve in the delivery chains for our many online purchases. Robots help make the goods we consume. Robots are facilitators in every step of the food production and distribution chain.

Innovation is not the keyword associated with robots. Instead, robots are thought of as facilitators in the value chain. Nevertheless, robots have been used in astonishingly new ways, innovating sectors one would not associate them with. Robots are saving the food supply chain. Robots can help treat Coronavirus patients. Robots innovate transportation. Robots improve industrial safety. Robots help fight climate change.

Looking at new German industrial policy, Lars Klingbeil of the German Social Democratic Party, arguing against the fear mantra, says that with an offensive industrial policy, “good jobs, new technologies and social prosperity result—in that order”. Robots and AI bring jobs and growth in an ageing society.

On average, studies have found real albeit limited negative effects of robotics. An MIT study found that “adding one additional robot per 1,000 workers reduced the national employment-to-population ratio by about 0.2 percent, with some areas of the U.S. affected far more than others.” These effects are not imagined, although they are minor and almost certainly offset by the positive effects robotics brings. In fact, there is evidence that robots are much less disruptive than believed and that most of the disruptive effects already occurred generations ago. Another study found that robots cause few industrial accidents. Over a 30-year span, 37 robot-related accidents occurred, 27 of them resulting in a worker’s death. This is a minuscule number compared to the many thousands of worker fatalities resulting from non-robot-related incidents.

Various models for regulating robots have been proposed. We suggest that any future regulation should take two starting points. First, the existing regulatory regime is largely adequate. Instead of assuming that we need to develop new rules, we should take a functional approach and address problems in an ad hoc manner when and if they arise. Attempting to pass universal rules would risk addressing unrelated phenomena as if they were one. It is only the robots’ ability to be autonomous that presents a risk, and the technology that would make them truly autonomous is not ready. Second, it is doubtful that ‘robot laws’ can meaningfully be separated from laws applying to AI or technology in general. Put differently, if we do not have a clear idea what problems the ‘robot laws’ are supposed to solve, we should almost certainly not have any robot laws. Robotic governance, as a way of providing a framework for dealing with autonomous devices, should be a more adequate way of thinking about the problem.

Even when we are fascinated by robots, we remain concerned about questions such as dignity, responsibility and liberty. These worries should not be idly dismissed. I would argue, however, that nothing we have not already faced is made worse by robotics. If we fall victim to killer robots, it will be because we have always already fallen victim to technology.

the bonds that technology imposes upon us will be broken not through the absence of technology but through better understanding of its meaning.

Under the circumstances, robotics has demonstrated that our approach needs at least to be modified and become more courageous. While bias and privacy issues need to be addressed, as does the fear that automation will displace jobs, robotics is inherently no more disruptive than other forms of technology tasked with turning nature into a resource. As Heidegger taught us, the bonds that technology imposes upon us will be broken not through the absence of technology but through better understanding of its meaning.


Robots and AI in EU Law & Policy: A Brief Comment

When Karel Capek coined the term ‘robot’ in his 1920 play R.U.R., the idea of mechanical servants was not new. For thousands of years, mankind had played with the idea of building artificial companions, contraptions that would serve them, fulfilling their wishes and taking upon themselves the tasks their creators thought difficult or demeaning. Capek’s play established an important idea, one that would be dominant in the 20th century and that follows us today – that machines are not to be trusted. From Capek’s own R.U.R. to Fritz Lang’s 1927 Metropolis, to Do Androids Dream of Electric Sheep?, the Terminator series and The Matrix, Western culture is full of images of rogue machines. The robot as an automaton full of potential but ever ready to rebel against its creator has informed the few attempts to understand how such a threat – real or imagined – might be regulated.

To the fear of machines can be added the general fear of artificial intelligence (AI), often confused with robots.1 Here, the threat of autonomous machines or deadly robots has given way to the fear of machines making decisions that affect humans without another human being able to intervene. Fascinated with the ability of algorithms to improve efficiency, we are also fascinated with the threats algorithm-mediated democracy presents. While dystopian images of control through AI easily produce revulsion, the reality – as is often the case – is more complex and more subtle.

The question asked here is simple: how has the European regulator reacted to these two phenomena? The problem has again caught the attention of the public after recent EU efforts to form a more coherent policy (for comments, see here, here and here).

2018 saw the Communication on Artificial Intelligence for Europe. While efforts existed before it, the instrument is the first attempt to provide a coherent response to the challenges of AI and robotics in the EU. Prior to that, the most significant rule was Article 15 of the 1995 Data Protection Directive which, consistent with the ‘fear’ paradigm, provided that no person should be subject to a decision producing significant legal effects “based solely on automated processing of data”.

It is, perhaps, interesting and telling that the 2020 Digital Strategy contains no ideas on regulating AI other than promising a white paper, and does not mention robots at all. The 2018 AI Communication, on the other hand, contains three fundamental pillars:

  • Being ahead of technological developments and encouraging uptake by the public and private sectors
  • Preparing for socio-economic changes brought about by AI
  • Ensuring an appropriate ethical and legal framework

As part of the third pillar, the Commission published in 2020 the promised White Paper as well as a Report on safety and liability implications of AI, the Internet of Things and Robotics. Both documents form part of the 2020 Digital Strategy and EU’s vision but also make the first coherent EU policy on AI and robotics.

The third pillar combines initiatives from different legal fields, promising, among other things, AI ethics guidelines, a reinterpretation of the Product Liability Directive, and liability and safety frameworks for AI, the Internet of Things and robotics. The 2020 White Paper says this of regulating AI:

While AI can do much good, including by making products and processes safer, it can also do harm. This harm might be both material (safety and health of individuals, including loss of life, damage to property) and immaterial (loss of privacy, limitations to the right of freedom of expression, human dignity, discrimination for instance in access to employment), and can relate to a wide variety of risks. A regulatory framework should concentrate on how to minimise the various risks of potential harm, in particular the most significant ones.

The “most significant” risks are then named as those relating to fundamental rights and those relating to safety and liability issues. For each, a number of relatively vague examples are given followed by suggested “possible adjustments” in the EU regulatory framework.

Significantly, a distinction between low-risk and high-risk AI activities is introduced, meant to ensure that regulatory intervention is proportionate. The essential idea is that high-risk activities need to conform to safety, fairness and data-protection requirements, while low-risk ones ought to be regulated significantly more leniently. High-risk sectors are those where, “given the characteristics of the activities typically undertaken, significant risks can be expected to occur”. Healthcare, transport, energy and the public sector are given as examples. The second criterion is whether the technology is used in such a manner that significant risks are likely to arise. “Injury, death or significant material or immaterial damage” are given as examples.

In terms of liability, the Commission plays with some familiar concepts, such as shifting the burden of proof to the defendant, strict liability in certain cases and reapportioning liability to the party best placed to bear it.

In terms of enforcement, the White Paper suggests maintaining the current decentralised enforcement structures and sector-specific mechanisms (so that, e.g., pharmaceutical authorities maintain competences relating to pharmaceuticals).

The low/high-risk approach has been criticised, mainly on account of the difficulty of distinguishing between various types of risk. We would suggest, however, that the approach is reasonable. Most activities related to robotics and AI can either be handled through existing legislation or fall in the low-risk domain. No radical legislation is suggested, while a clear contrast is kept between low-risk and high-risk activities.

Some risks in premature regulation exist. Commenting on the White Paper, a Global Digital Foundation paper suggests that the attempt to regulate both in similar measure is based on a misguided notion of human-like AI producing harmful effects. Instead, the paper suggests, AI affecting safety (transport, health, etc.) should be regulated, while AI affecting human rights can rely on the already existing rules on non-discrimination. An attempt to regulate the latter might, it is argued, result in various fairness mechanisms being built in, attempting to achieve a degree of neutrality and minimise discrimination. Such an attempt might achieve the opposite result, as AI is moved further in the direction of the dystopia we know and recognise.

“The essence of technology is by no means anything technological,” says Heidegger. The essence of artificial intelligence and robotics is likewise not technological but lies in our relationship to it. In that sense, we are delivered over to the mercy of AI and robotics only if we regard them as something neutral. While our fascination with AI and robots has for decades been tempered by our fear, our modern views are more confused. The robots of yesterday, and the limited uses to which AI could be put until so recently, have given way to pervasiveness and, with it, more confusion. Such confusion requires a measure of recognition and honesty. At the moment, robots and AI are an opportunity, the threats are limited and the need for direct intervention is confined to the most radical cases. EU policy has taken the right step in formulating a balanced approach. If there is anything able to taint the picture, it will be thinking based on fear.

  1. While robotics is a branch of technology that deals with programming autonomous machines, those that do work “by themselves”, AI is a branch of computer science tasked, in the words of John McCarthy, with “making a machine behave in ways that would be called intelligent if a human were so behaving”. Finally, machine learning is the ability of a machine to learn without being explicitly programmed. ↩︎

New Commission Digital Strategy – What Does it Mean?

A leaked version of the Commission’s new digital strategy was published today on Euractiv. It is worth noting that there has been some pressure on Commissioner Vestager to come forward not only with a timeframe for the reform of the EU’s competition framework but also for its digital laws. The present draft should be seen in light of the EU’s efforts to be more competitive on the global stage.

For all practical purposes, this document is meant as a replacement for the 2015 Digital Single Market Strategy and is, as such, very important. This post is not meant as an analysis of all of its main points; it wants instead to throw light on three potentially questionable ideas.

The first is that “principles that apply to our traditional industry […] also have to apply to digital industries”. Further to that, “existing laws that govern the behavior of traditional industries need to be adapted to the specific circumstances under which new digital business models operate.” I have argued in a recent article that functional equivalence – the desire to apply legacy regulatory models to new problems – lies at the core of the EU regulatory effort. But the “like should be regulated alike” adage is wrong in principle and can be dangerous in practice. Functional equivalence causes innovative and disruptive services to be subject to small and incremental regulatory changes rather than the necessary complete remodeling. In its crudest form, functional equivalence has meant the literal copying of solutions from legacy technologies. Disruption is the motor of the modern economy. It is in its nature to create new realities that demand new solutions. Three questions must be answered before functional equivalence can be applied:

    • is the disruptive service innovative?
    • does the traditional framework make it impossible or significantly hamper it?
    • are there any other reasons (e.g. public policy) for subjecting it to the traditional framework?

If the first two questions are answered positively and no other reasons exist, the lawmaker should refrain from using functional equivalence. In its present form, the demand to apply traditional solutions is out of place.

Possibly the most significant change (and, ironically, the one that is as far from functional equivalence as possible) is the “assessment of options for an ex ante regulatory framework for gate-keeping platforms with significant network effects as part of Digital Services Act Package”. This seemingly innocuous remark hides a potentially revolutionary idea. Ex ante sector-specific regulation is the current regulatory model applied to telecommunications (and telecommunications only). Traditional competition law applies ex post – it identifies a problem that has already occurred and applies a remedy to it. Telecommunications regulation, gradually liberalized in the 80s and regulated from the 90s onwards, required a significantly different regime. It was no longer enough to wait for a failure to occur and then address it. It was necessary to identify potential market failures in advance and apply appropriate remedies in order to prevent future occurrences. A hybrid regime was thus developed: while the guiding principles and market definition came from competition law, the enforcement mechanism was based on the ex ante application of remedies. The ultimate aim – as yet unachieved – was for only the competition laws to apply.

The current proposal would presumably introduce something very similar for gate-keeping platforms. A preliminary assessment of the market power of relevant platforms would be conducted. Based on that assessment, a set of remedies would be applied to those markets or individual platforms identified as having significant market power (SMP).

The approach outlined above has effectively been in use since the early 90s in EU telecoms law. It is, in principle, possible to apply it to platforms. In some respects, these platforms resemble telecoms operators. A number of them are dominant globally or regionally, a fair number compete only with a small number of alternative providers, and a significant number either cannot be replaced or are perceived as irreplaceable by their users. The remedies applied in telecoms are very specific: access to facilities, regulated pricing, etc. Remedies applied to platforms would have to be agreed on separately and would almost certainly be very different from those existing in the telecoms world. No indication is given in the strategy document of what they might look like. On the other hand, opinion on whether ex ante sector-specific regulation has really been effective is divided. While there is some basis for claiming that access to existing facilities has been improved, it also seems that the framework has not been equally good at spurring innovation. Applying the model to platforms would be something hitherto untested, with most of the knowledge from the telecoms world being inapplicable.

The third point of interest is the diversity of the instruments, approaches and enforcement mechanisms offered. The paper contains four focus areas: technology that works for people, a fair and competitive digital economy, a digital and sustainable society and an international dimension. In each, a set of diverse key actions is proposed (not all are listed in this post).

The first, technology that works for people, contains the Digital Services Act, announced in President von der Leyen’s program. Unsurprisingly, the act, which is meant to replace the central E-Commerce Directive, is supposed to increase the responsibility of online platforms – a task which will undoubtedly create as much political tension as the DSM Copyright Directive. At the same time, artificial intelligence, which features prominently in the Commission’s program with the promise of “legislation for a coordinated European approach on the human and ethical implications of Artificial Intelligence”, has only been addressed through a promised White Paper. Furthermore, media and democracy action plans are promised, as are a digital education plan and “initiatives” on platform workers.

In its efforts to achieve a fair and competitive digital economy, the Commission promises a Communication on an EU data strategy, a framework for data governance and a Data Act on B2G data sharing. Furthermore, initiatives on digital capacities, Gigabit connectivity and taxation are offered. The most prominent part of this section is the “possible adaptation” of EU competition law mentioned above.

The sustainability part, practically non-existent in previous initiatives, contains a number of interesting proposals, including carbon-neutral data centers, a circular device initiative, improved EU health records and 5G corridors for automated mobility and railways.

The overwhelming conclusion is that this is a document less focused on rigid legal solutions and more exploratory in seeking innovative approaches to governance. While its predecessor targeted the three EU regulatory silos (telecoms, e-commerce and AVMS), suggesting changes in each, the leaked draft is problem-centered and horizontal in its approach. Its insistence on “transparency, accountability, empowerment and inclusion” is also to be welcomed. At least two of the many measures, if achieved, would have a significant impact. The first is the Digital Services Act. The second is the ex ante regulatory model for platforms.

At the same time, the Commission seems to underestimate the degree to which it is falling behind in 5G and next-generation technologies. Only two measures, 5G corridors and 5G cost reduction, have some substance to them. Little is said about deployment and take-up challenges or the many and diverse regulatory obstacles. Even less is said about regional differences.

Is the paper visionary? It does not appear to be. Is the paper significantly different from the 2015 DSM Strategy? Possibly. It is more global in focus, its aims are less clichéd, its goals are stated more clearly. Are the measures proposed potentially achievable? This is difficult to say. Two of its most important contributions, the Digital Services Act and the possible new competition regime, are highly politically charged and technically difficult. The rest dissipates into a sea of white papers, action plans and initiatives. It is unlikely that even a majority will have an effect, but some might. This is where the problems arise. While it is true that the E-Commerce Directive dates to 2000 and that a rethinking of the approach might be needed, an achievement here would possibly be significantly less important in the long term than improving 5G deployment, creating a good basis for an AI-based economy or understanding the link between technology and sustainability (one that goes beyond recycling and carbon-neutral data centers). The paper is a good outline for rethinking present challenges but presents a hazy and confused vision of Europe in 10, 15 or 20 years.

The CJEU AirBnB Judgment: Another Look at Composite Services in the EU

On December 19, the CJEU decided the highly anticipated AirBnB case. The case arose out of a reference for a preliminary ruling from the Tribunal de grande instance de Paris, which wanted to know whether a service “consisting in connecting hosts with accommodation to rent with persons seeking that type of accommodation” (such as AirBnB) constitutes an information society service and is thus entitled to all the benefits that the E-Commerce Directive (ECD) provides.1

The case raises a much deeper and more important question: should platforms, as providers of composite services (those consisting of electronic and non-electronic parts), be subject to e-commerce rules, sector-specific rules, or both?
The consequences can be dramatic. A transport platform such as Uber would typically claim that it acts as an e-commerce service only and plays little to no part in the provision of the actual transport services, which are carried out by independent contractors. Such platforms claim that they should therefore not be subject to local transport, labour or other laws in relation to matters such as licensing, working conditions, insurance, etc. If, on the other hand, the argument is accepted that platforms are akin to transport, accommodation and other services, they become subject to a variety of sector-specific laws, making them potentially less competitive. Incumbent service providers (such as traditional taxi associations, hotels, etc.) have lobbied for the latter interpretation with varying degrees of success.

The first significant take on the problem came in the Uber cases (C-434/15, see here, and C-320/16, see here). AG Szpunar’s main argument in those cases concentrated, first, on determining whether the service is composite or not and, second, on finding the essential element of the composite service. If the service is not composite but obviously non-electronic, the ECD does not apply at all. Likewise, if the service has only an electronic element, the problem disappears. For true composite services, on the other hand, where the part not covered by the ECD (e.g. transport laws) affects the composite service, the relationship between the two parts has to be analysed to determine the extent to which the different regimes apply. Any other conclusion would render the EU liberalising efforts completely meaningless, as the non-electronic part would always trump the electronic one.

The key tool in determining which part is dominant, in the AG’s opinion, is finding whether the electronic activities have self-standing economic value or not. If they do, the full liberalising effect of the ECD must be applied, at least to the electronic part. Where this is not the case, a further analysis is needed to determine which part is dominant in a relationship of dependence. To determine whether such influence exists, elements such as price determination, safety, working conditions and the ability to work for other companies would be looked at. In Uber, not only was the platform dependent on the transport element (and therefore not economically self-standing), but Uber had decisive influence over that transport element: the platform part, the AG opined, would have no value without the transport service. In a way, it is Uber itself that is the provider of transport services.

The court followed the AG in its decision.

In his AirBnB Opinion, AG Szpunar reiterated the main arguments of the Uber cases but also refined them further. The two decisive criteria that need to be looked at in order to determine whether a service is an information society service are a) whether the platform offers services having a material content and b) whether the service provider exercises decisive influence on the conditions under which such services are provided. The first criterion looks, essentially, at whether the service is composite or not. If it is not, the ECD applies. Unlike Uber, which does not exist without the non-material part, the link between AirBnB and the services provided through it is more tenuous. Accommodation providers are not tied to the AirBnB platform but are free to provide their services elsewhere or offer them on several platforms at the same time. The picture is similar in respect of the second criterion. Unlike Uber, AirBnB has significantly more limited control over the non-electronic part of the composite service.

The main dilemma that has permeated the debate about platforms as providers of composite services has been and remains: how can the innovation that disruptive services bring be protected while maintaining fair competition in the market? The elements of the AG’s answer are found in paragraphs 61-68 of the Opinion. On one hand, it would be wrong for the suppliers of innovative services to be excluded from the benefits of the ECD simply because they have created a composite service that is otherwise also subject to sector-specific laws. On the other, it would be wrong for the providers of such services to be privileged solely on account of the ECD’s applicability, from which others do not benefit. The answer lies not in the fact that non-electronic services are connected to electronic ones but in the degree of control the platform has over the non-electronic part. The more significant that control is, the less likely the provider is to benefit from the ECD regime and the more it is drawn within the reach of sector-specific laws.

The Court in its judgment followed the AG’s main points. AirBnB does not provide accommodation but helps the providers and the seekers find each other. Furthermore, AirBnB is not essential in this respect: other websites or, indeed, offline services can be used.

There is little to be surprised about here. The judgment does not contradict Uber but clarifies its main points. At present, the decision tree is as follows:

  • Is the service purely electronic or purely non-electronic? If either, the respective regime applies.
  • If the service is composite, does the electronic part have self-standing value? If yes, apply the ECD.
  • If not, find the dominant part by applying a variety of criteria to determine the level of influence of one over the other.
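The decision tree above can be sketched as a small classification function. This is an illustrative sketch only: the field names and the reduction of the Court’s criteria to booleans are my own simplifications, not anything found in the judgments, and real cases turn on a graded assessment of control rather than a yes/no flag.

```python
from dataclasses import dataclass


@dataclass
class Service:
    """Illustrative model of a (possibly composite) service."""
    has_electronic_part: bool
    has_non_electronic_part: bool
    electronic_self_standing: bool   # self-standing economic value of the electronic part
    decisive_influence: bool         # platform's decisive influence over the non-electronic part


def applicable_regime(s: Service) -> str:
    """Sketch of the Uber/AirBnB decision tree."""
    # Step 1: purely electronic or purely non-electronic services
    if s.has_electronic_part and not s.has_non_electronic_part:
        return "ECD"
    if s.has_non_electronic_part and not s.has_electronic_part:
        return "sector-specific law"
    # Step 2: composite service whose electronic part has self-standing value
    if s.electronic_self_standing:
        return "ECD"
    # Step 3: otherwise, the level of influence over the non-electronic part decides
    if s.decisive_influence:
        return "sector-specific law"  # the Uber outcome
    return "ECD"                      # the AirBnB outcome
```

On this sketch, an Uber-like service (`Service(True, True, False, True)`) falls under sector-specific law, while an AirBnB-like one (`Service(True, True, True, False)`) benefits from the ECD.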

This seems to be a reasonable approach. Anything else would fall into one of two extremes: automatically subjecting disruptive services to sector-specific laws, or an unchecked ability to circumvent sector-specific laws simply by claiming the status of an electronic platform.

There is no doubt that a variety of cases will arise in the future in which it will be difficult to apply the Court’s criteria. This is a result of the complexities that reality presents. Different parts of composite services are often intermingled to an extent that calls legal simplifications into question. National courts will then have to look into the level of control that platforms exercise over the non-electronic parts, and that exercise is not necessarily a simple one.

The conclusion must be that modern services simply do not lend themselves to “legacy” solutions designed for a different world. The present solution, as applied by the CJEU, is temporary. If it truly finds it necessary, the legislator at either national or EU level will have to agree on new rules specific to different platforms. We do not, at present, know that this is the case, and this is not necessarily an invitation to go in that direction but, at best, a warning that a time may come when patching up the present regime will no longer be sufficient.

  1. For a more detailed analysis see my article: Savin, A. (2019). Electronic Services with a Non-electronic Component and their Regulation in EU Law. Journal of Internet Law, 23(3), 1, 14-27. ↩︎

The CJEU Facebook Judgment on Filtering with Global Effect: Clarifying Some Misunderstandings

On October 3 the CJEU delivered the judgment (text of the judgment and AG Szpunar’s opinion) in C-18/18 Eva Glawischnig-Piesczek v Facebook Ireland Ltd. The case concerned a request by an Austrian politician for an interim measure removing a defamatory post already declared as such in separate court proceedings in Austria. The reference for a preliminary ruling requested the interpretation of Article 15 of the Electronic Commerce Directive (ECD, text here). Specifically, the question was whether the article prohibits

ordering a host provider to remove information which it stores, the content of which is identical to the content of information which was previously declared to be illegal, or to block access to that information, irrespective of who requested the storage of that information;

ordering a host provider to remove information which it stores, the content of which is equivalent to the content of information which was previously declared to be illegal, or to block access to that information, and

extending the effects of that injunction worldwide.

The most important part of the reference is the territorial scope of the injunctive relief issued by a national (Austrian) court (worldwide, EU-wide or local). Also important is the nature of the removal that can be requested (“identical” vs “equivalent” content). A number of dramatic interpretations have been seen in the media (see also here and here) and a basic clarification is in order (see my earlier post on AG Szpunar’s Opinion here).

1) The ECD insulates bona fide intermediaries from liability when they expeditiously remove the problematic material. Although Facebook is a hosting provider in terms of Article 14 ECD, under Article 14(3) and Recital 45 national courts may issue interim measures requesting that illegal material be removed. While Article 14 controls the liability regime and sets its boundaries, it does not control the possibility for the material to be removed through various judicial and administrative measures. A non-liable intermediary can be forced to remove material through injunctive relief requested in national courts. This is not a new position in EU law and is based not only on the ECD (dating to 2000) but is also recognised in CJEU case law (see Husovec’s study on injunctions against intermediaries in EU law here). Furthermore, Facebook’s refusal to remove the material presumably also removed its insulation (which was not the subject of this case).

2) Article 15 ECD prohibits content monitoring, the idea being that only prior knowledge or subsequent reluctance can bring liability. Intermediaries are, therefore, not expected to take active steps to filter content. On the contrary, the CJEU has been clear in prohibiting general filtering, limiting any such measures to specific content. The question in the present case is whether Article 15 might interfere with the request to remove the defamatory content. The Court holds that it does not: the explicit purpose of Article 15 is to prohibit general monitoring but to allow specific monitoring, which may be necessary for law enforcement purposes. Specific monitoring is, for this case, defined as

“a particular piece of information stored by the host provider concerned at the request of a certain user of its social network, the content of which was examined and assessed by a court having jurisdiction in the Member State, which, following its assessment, declared it to be illegal.”

In that sense, it is permissible to request the blocking of “identical” content in the future, which is here content “essentially conveying the same message”. The Court is specific in reiterating that such monitoring cannot be general in nature. An injunction requesting that all posts of a certain nature be filtered (e.g. by type of content, region, poster, etc.) would be generic and thus contrary to Article 15.

3) Much has been made in the media of the real or potential extraterritorial effect of the injunction in question. EU law does not itself provide any injunctive relief, extraterritorial or otherwise. Article 35 of the Brussels I Recast Regulation is explicit in stating that provisional measures depend on the laws of the Member States, even where litigation is ongoing in a different state. The Court in this case is simply stating that Article 18 ECD, which says that “Member States shall ensure that court actions available under national law concerning information society services’ activities allow for the rapid adoption of measures, including interim measures, designed to terminate any alleged infringement and to prevent any further impairment of the interests involved”, does not prevent the worldwide effect of injunctions. It says nothing about the desirability of such injunctions or their potential effect on worldwide digital trade. Put simply, if the Austrian court itself had no basis in its national law to issue a worldwide injunction, EU law could not provide it with such a basis. Equally important is the Court’s refusal to enter into debating the merits of such worldwide injunctions. “It is up to Member States to ensure that the measures which they adopt and which produce effects worldwide take due account of [international law].” This is the right approach, as the CJEU manifestly lacks jurisdiction on this issue.

4) The main difference between Advocate General Szpunar’s Opinion and the final judgment is in the treatment of “identical” versus “equivalent” content. The AG’s Opinion allows the monitoring of all the information of all the users on the platform for “identical” information, but only of the disseminator’s account for “equivalent” information. This is both justified and reasonable. No such distinction exists in the Court’s judgment, which allows monitoring for both identical and equivalent content. Furthermore, the AG insists that monitoring of “equivalent” information be “clear, precise and foreseeable”, proportionate and respectful of fundamental rights. Again, the Court’s judgment mentions none of these limitations. Instead, it opts for the more formalistic approach, stating that “equivalent” information must be “essentially unchanged compared with the content which gave rise to the finding”. As long as the content is “essentially” the same, the manner of monitoring is not relevant. The Court’s opting for the narrower and less balanced view might conceivably lead to problems.

5) One of the most important reservations voiced concerns the Court’s insistence on the balancing role the filtering is supposed to play. The worry is that filtering mechanisms are inherently unable to strike the right balance between different fundamental rights (such as reputation or freedom of expression). The danger does not arise from the Court’s interpretation, and likely not from national law’s use of extraterritorial measures, but from EU legislation and soft law. The new EU law on copyright, for example, fundamentally misunderstands its own relationship with the ECD and effectively requires monitoring in open violation of Article 15 and the CJEU’s case law on filtering. Furthermore, various forms of soft law (see my earlier post here) are directed at platforms, which would need to engage in various forms of monitoring and filtering. It is true that the Court itself refers to Facebook’s “recourse to automated search tools and technologies”, but it does not endorse them. The Court does not insist on filtering, does not propose filtering techniques of a particular kind or form, and does not explicitly offer any balancing guidelines. In our opinion, it is right to be silent on these issues, as anything else would be second-guessing the lawmaker.

There are plenty of reasons to worry about the EU’s muddled approach to platforms and filtering (see my article here) but the Court’s constitutionally-limited role means it does not have the mystical powers that the general public ascribes to it.

6) Some confusion may arise with the CJEU’s recent case on a somewhat connected (although substantially different) issue. In C-507/17 Google v CNIL, the Court ruled that the operator of a search engine is not required (on the meaning of this, see here) to carry out a de-referencing on all versions of its search engine but only on EU-based ones. This case is based on privacy laws (the old Data Protection Directive) and is in no way connected to the present situation.

More important, perhaps, are the possible differences between the Court’s approach to intermediaries in general and its approach to injunctions in copyright cases. While the former is rudimentary, the latter is significantly more detailed. In any case, it is doubtful whether injunctions arising out of EU data protection, copyright and e-commerce laws ought to be subject to the same treatment. Such an approach would make little sense and would be practically messy and difficult to justify.

* * *

In summary, all the Court did in the present case was to say that injunctive relief based on an already existing court decision (which, in turn, is based on national law) is not contrary to EU law. It did not create this relief, nor did it argue for its extraterritorial effect (or otherwise). Attempts to extrapolate this approach to all filtering cases are misguided and based on a fundamental misunderstanding of how EU law operates. The public’s anger should be directed at the Commission, its muddled and incoherent approach to platforms, and its inability to produce a coherent law for the future Digital Single Market.

The EU Digital Services Act: What it is and Why it Shouldn’t Happen

Ursula von der Leyen, the president-elect of the European Commission, has recently published political guidelines for 2019-2024. Those who have been careful enough to read the document would have noticed that “a Europe fit for the digital age” is one of the six political goals the president-elect wants to achieve. Among various statements populating the section on digital Europe, the following is found:

A new Digital Services Act will upgrade our liability and safety rules for digital platforms, services and products, and complete our Digital Single Market.

These words should have caught the attention of professionals and businesses alike. They are remarkable not only for their terseness but also for naming the act, thus indicating that preparations are well underway.

Just a few days later, a document leak confirmed that the DSM Steering Group is engaged in drafting the EU Digital Services Act that would serve as a basis for:

The two ideas signalled here are interesting each in their own right.

The E-Commerce Directive, dating to 2000 and based on ideas from the late 90s, has served remarkably well. Similar to the Clinton/Magaziner approach in the US, the Directive is based on the ‘no-regulation-for-regulation’s-sake’ principle and on the laissez-faire approach of regulating only where there was a specific need. The two main ideas it is based on are home country control (the idea that information society services (ISSs) should be regulated in the home country only) and the heavy insulation of bona fide ISSs from liability. The reasons for the Directive’s relative longevity can be found both in its flexible character and in the political difficulties which its potential revision would initiate. As a framework instrument for the entire e-commerce regulatory ‘silo’, the Directive had been designed to last.

But it became all too apparent in 2015, when the Digital Single Market Strategy was published, that some of the fundamental principles the Directive is based on would eventually have to be revisited. There, the Commission indicated that

It is not always easy to define the limits on what intermediaries can do with the content that they transmit, store or host before losing the possibility to benefit from the exemptions from liability set out in the e-Commerce Directive.

Crucially, the Commission shifted the focus from ISSs to platforms.1 Soon thereafter, the language in the many policy documents on platforms changed. Platforms, the Commission claimed, need to act “responsibly” if they are to continue to benefit from insulation. In its highly controversial Copyright in the DSM Directive the Commission suggests that even ISSs falling under Article 14 ECD need to have effective protective technologies and that they cannot rely on the article if they do not. ‘Active’ providers cannot rely on the protection as they are not responsible enough in the Commission’s mind.

When the two ideas are joined, the picture begins to emerge: the Commission would like the ECD REFIT exercise – which seems to be overdue – to result in a more nuanced approach, recognising that only responsible platforms can be protected and revising the insulation regime.

What does the preparatory document reveal about the Commission’s ambition and the scope of the potential intervention?

Five problems are listed:

  • a) divergent rules for online services in Member States. This item signals the existence of divergent rules in Member States, some of which have already engaged in regulating issues as diverse as hate speech, advertising or social networks.
  • b) outdated rules and regulatory gaps. The second item indicates that the ECD rules no longer “adequately reflect the technical, social and economic reality of today’s services”. In particular, the concepts of active and passive providers are labelled as out of date. Furthermore, the document claims that some online intermediaries simply do not know what regime they are under.
  • c) insufficient incentives to tackle online harms and protect legal content. Here the claim is that platforms are disincentivized to act proactively and that small and medium platforms face regulatory risk as a result.
  • d) ineffective public oversight. This item indicates that there is no dedicated “platform” regulator which would exercise oversight in “content moderation or advertising transparency”.
  • e) high entry barriers for innovative services. The last item talks of “no legally binding, controlled way for regulatory experimentation with innovative services” currently in existence.

The document is clear in proposing the scope of application to include “all digital services, and in particular online platforms.” For each of the crucial ECD components, something new is proposed.

  1. It is proposed that home country control be kept and its scope extended. This would now include “consumer protection, commercial communications and contract laws” but also services established in third countries. Finally, it is also proposed that any exceptions be narrowly interpreted. This is a problem, as consumer and contract laws are largely outside the scope of the “coordinated field”. It is not clear whether Member States would accept such a dramatic expansion of the operation of the article. It is even less clear why the extension is suggested, as home country control has generated little to no case law and even fewer problems in practice.
  2. The document names ISSs as still relevant. It suggests, however, that there are “grey areas” and names them as “ISPs, cloud services, content delivery networks, domain name services, social media services, search engines, collaborative economy platforms, online advertising services, and digital services built on electronic contracts and distributed ledgers.” This is a remarkable claim, as the list includes almost all intermediaries in operation today, which amounts to a claim that the concept of information society services is inadequate. This claim is not substantiated. The mention of the European Electronic Communications Code (EECC) is a nod to convergence. It is suggested that future ISS services may be defined “on the basis of a large or significant market status, complementing the competition threshold of dominance”. This effectively brings in the ex ante sector-specific regulation of the kind applied to telecommunications services. This approach would require that digital service providers be classified as having the relevant market status or power before regulation is applied to them. Ex ante regulation is only imposed on those with the required market power. There are numerous problems with this idea, but two are particularly significant. First, the ex ante regime in the telecoms sector has always been a temporary measure moving toward the full application of competition law. In e-commerce law, competition rules are already fully functional and little to nothing would be gained by this exercise. Second, the market analysis process would inevitably have to be conducted by various national authorities designated for the purpose, which would, in turn, lead to insurmountable practical problems and divergence, thus eliminating any positive effects achieved.
  3. The liability provisions of the ECD would be updated. It is suggested that the “harmonised graduated and conditional exemption” approach be kept, but with additions. First, the case law would be used to update the present issues in Articles 12-15. Second, new rules or clarifications of the principles applying to “collaborative economy services, cloud services, content delivery networks, domain name services, etc.” would be needed. The notions of “active” and “passive” hosts would be replaced with the notions of “editorial functions, actual knowledge and the degree of control”. Finally, an exemption for proactive measures would be introduced. The changes suggested here essentially fall into two categories. The non-problematic ones result from the CJEU’s case law on intermediaries. While that case law is not without problems in itself,2 it has largely followed the contours of Articles 12-15. More problematic are the specific rules on platforms. It is not clear which of these “special” categories would need special rules and what those rules would aim to achieve. It is even less clear what liability regime would be imposed on them and whether the disastrous Copyright in the DSM proactive filtering would find its way here too. It seems that it would, as it is not clear how it would be possible to proactively and “responsibly” catch alleged illegalities without expensive (and potentially unreliable) AI solutions. Even more worryingly, no suggestion is made here (as it was in the DSM Directive) that smaller platforms would be exempt.

    It seems that the drafters of the document operate on the false assumption that the active/passive dichotomy is the basis of EU case law on intermediaries. It is not. While there are cases where this approach (otherwise originating in the USA) is used, the CJEU cases are more nuanced and speak of levels and types of engagement, precisely in line with what the document otherwise demands.

  4. The document pays lip service to the prohibition of general monitoring in Article 15. However, it suggests that “algorithms for automated filtering technologies” should be considered for better “transparency and accountability”. Filtering, in principle, may be specific or general. The CJEU case law suggests that general filtering is prohibited while specific filtering is allowed. The problem is that the document goes beyond specific filtering and suggests that AI technologies essentially playing the role of general monitoring are acceptable. One cannot have both. Either the prohibition on general monitoring is maintained or AI and filtering solutions are allowed. They cannot coexist.
  5. Tailored and EU-wide notice-and-action rules are suggested. These have already been introduced in the Illegal Content Communication. Binding transparency obligations are suggested, as are options for “algorithmic recommendation systems of public relevance”.
  6. A new regulatory structure is suggested, with “a central regulator, a decentralised system, or an extension of powers of existing regulatory authorities” all being considered. Any of the three solutions would be problematic. Centralised regulators are difficult or impossible to achieve in any area of shared competence. The decades of experience the EU gained in the telecoms sector are testimony to this. A decentralised system is possible but would require a prior harmonisation of competences, which is politically only marginally easier to achieve than a central authority. Finally, extending the powers of the existing authorities may be viable but would not serve the Single Market purposes proclaimed in this document and elsewhere.

While there may be a number of problems with various suggestions made in the document, the main criticism can be summarised as follows:

  • no convincing reasons are given for abandoning the approach based on information society services (ISSs) and moving to platforms. While it is certainly true that confusion exists (both in terms of fully digital and composite services) as to what is or is not an ISS, any move needs to be justified. Platforms are ill-defined and fluid (both in the EU and elsewhere) and vary from one-man blogs to multi-billion-dollar global conglomerates. There are no convincing reasons to use them as a replacement for ISSs. The confusion is compounded by the insistence on keeping ISSs as regulatory units while insisting that almost everything on the Web today is a “grey area” in need of different treatment.
  • the liability regime in Articles 12-15 has proven adequate, as have various kinds of relief (including injunctive). The CJEU case law has adequately dealt with different aspects of ISS liability and managed to apply Articles 12-15 to modern phenomena. Any change to this regime must be based on thoroughly researched and very specific suggestions. While it is good that the drafters seek to incorporate the CJEU cases, their suggestions as to liability in other situations are superficial at best. Equally worrying is their refusal to address the criticism already directed at filtering solutions in the DSM proposal. While few would disagree with the claim that Facebook and others need to act “more responsibly”, this does not extend to the claim that all platforms need to, nor does it equal an obligation to filter. That the drafters know this is confirmed in their problematic suggestion that market status should determine the scope of the regulatory burden.
  • the document demonstrates a lack of understanding of (and even a lack of interest in) modern phenomena such as blockchain technologies or AI. The former are mentioned with a vague suggestion that some regulation may be needed, but without any conviction as to what the policy goals should be.
  • The bundling of such diverse problems as copyright infringement, illegal speech, hate speech, advertising, etc. under one umbrella is a mistake. Experience has taught us that the differences between them justify differences in regulatory approach. While convergence in real life suggests that regulatory convergence may also be necessary, this is neither the declared nor the actual aim of the potential Digital Services Act. On the contrary, the document actively avoids convergence problems and suggests that the current regulatory silos be kept.
  • Finally, the suggestion that a single regulator is possible is politically naive.

The reform of the ECD needs, above all, to address two issues if it is to be successful.

The first is the effect of convergence on regulation. In other words, we need to know how converged services are to be regulated. The present document is as far from solving this problem as possible. The proposal is just a reform of the E-Commerce silo that maintains that very silo. Telecoms, audio-video and e-commerce each have their own regulatory circles, often with separate regulators. No attempt has been made to address this, either in the EECC or here.

The second is: what types of regulatory approaches (including soft law, standardisation, etc.) should be used for governing modern digital services in order to stimulate innovation while protecting the categories of the population that need protection? Again, the present document makes no attempt to answer this question, as it sticks to old-fashioned black-letter law. Modern digital services are inherently disruptive and may require completely different governance structures. The Commission seems confused, mixing soft and hard law, general and subject-specific, new and legacy, often in the same documents, sometimes even in the same sentence.

  1. On why this may be problematic in itself, see my article
  2. See Martin Husovec, Injunctions Against Intermediaries in the European Union, CUP 2017.

Why Advocate General Szpunar is Right to Suggest Facebook can be Ordered to Remove Material Worldwide

In March 2018, the Oberster Gerichtshof of Austria submitted a request for a preliminary ruling in a case arising from a disparaging comment about an Austrian politician published on Facebook. When Facebook refused to remove the comment, a request was submitted to an Austrian court for an injunction essentially demanding that Facebook delete the content. Advocate General Szpunar’s Opinion in the case was published on June 4.

The question originally referred to the CJEU was whether Article 15 of the E-Commerce Directive precludes an injunction requesting the removal of allegedly illegal content, and whether such an injunction can have a worldwide effect. In other words, the question is not only whether Article 15 (prohibition of general monitoring) precludes injunctions such as the one at hand but also, if it does not, whether such injunctions should be issued with Member State-only or worldwide effect.

While it may, at first sight, appear odd that the referring court is asking about Article 15 (monitoring) rather than Article 14 (hosting), the logic behind the request should not be too difficult to follow. Article 14 only provides immunity to bona fide intermediaries, i.e. those who are not aware of the infringing content and who expeditiously remove it once made aware. Those who are made aware of it and subsequently refuse to remove it lose the liability insulation. Since Facebook explicitly refused the request to remove, the question revolves around the legality of an injunction (assuming the post itself is, indeed, illegal).

Since the injunction would impose an obligation to monitor content (in order to identify what needs to be removed), the framing of the question makes sense. The AG does point out that an injunction imposing a general obligation to monitor content of a certain type (in order to identify the offending content) would have the effect of removing the protection provided by Article 14. In other words, a general obligation to monitor is illegal under Article 15. For the sake of clarity, nothing in AG Szpunar’s Opinion suggests that a general obligation to monitor is either desirable or, indeed, lawful.

Moving on to the specific obligation to monitor, the AG points out that specific monitoring is explicitly allowed by Recital 47 of the E-Commerce Directive. Articles 14(3) and 18, furthermore, explicitly recognize that prevention is an important aim of the Directive, and no prevention would be possible without some degree of monitoring. Crucially,

in order not to result in the imposition of a general obligation, a monitoring obligation must, as seems to follow from the judgment in L’Oréal and Others, satisfy additional requirements, namely it must concern infringements of the same nature by the same recipient of the same rights.

An injunction may not require the provider to monitor for infringements that are merely similar to the one at hand, are inspired by it or, indeed, are perpetrated by different users. All of this would be general monitoring. The AG’s reading of the Directive and case law, put simply, is that monitoring targeting a specific infringement is allowed, whereas general monitoring is not. This position is firmly embedded in the E-Commerce Directive.

The referring court, importantly, also asked whether information identical to that being requested should also be removed. In the AG’s words, a social network platform can be ordered to seek and identify, among all the information disseminated by users of that platform, “the information identical to the information that was characterised as illegal by a court that has issued that injunction.” The answer to this is equally clear. When doing so, the social network can only be required to monitor the information disseminated by the user who disseminated the original information.

In respect of the territorial scope of the obligation, the Advocate General makes two crucial observations. The first is that the obligation in question (defamation) is not based on EU law. The second is that Article 15 of the E-Commerce Directive does not regulate the territorial effect of injunctions. As to the first, had the obligation been based on EU law, that law would determine its own territorial scope – extraterritorial or otherwise. As to the second, had Article 15, or indeed the E-Commerce Directive, had anything to say about its scope of application, that could be used to determine the territorial scope of the injunctions. Further, although the Brussels I (Recast) Regulation regulates jurisdiction in cases of defamation, and allows preliminary measures, it says nothing about the territorial scope of those measures. Put simply, since EU law says nothing about the territorial scope of the injunction, it remains for the national (Austrian) law to resolve this issue.

As it stands, it is difficult to argue against the Advocate General’s reasoning. A different conclusion would mean that a national court’s order to remove illegal content could simply be circumvented by invoking Article 15 and claiming that any action to identify the content would amount to “monitoring”. That could not have been the intention of the drafters. The monitoring that Facebook is obliged to engage in is limited to the specific post and equivalent comments from the same user. This is still very different from a general obligation to monitor, which would require that all content be monitored to identify various real and potential infringements of a particular kind.

Article 15 prohibits general monitoring in respect of the information society services covered in Articles 12-14 of the Directive. Where those articles do not apply, neither does the prohibition of general monitoring. As Martin Husovec observes, however,1 the CJEU transplanted the prohibition of general monitoring into copyright enforcement in the Scarlet Extended judgment. But while this may indicate that the CJEU believes general monitoring to be invasive, it says nothing about specific measures. The case law is remarkably clear and consistent in terms of specific monitoring. The Scarlet Extended case is precise about what constitutes illegal general monitoring in relation to filtering but says nothing of specific measures. The only outstanding question can be whether a particular form of action demanded in a court order amounts to general or specific monitoring. That specific measures of monitoring are allowed, on the other hand, has been clearly confirmed in the UPC case. Finally, as AG Szpunar himself argued in the McFadden case, and as he repeats in this Opinion, in order for specific monitoring to be legal, it has to be limited in terms of subject and duration.

If there is something that needs clarification, it is the nature of the “similar” measures and the effort that must be made to ensure that specific monitoring is, indeed, limited in time and scope. In terms of the former, the present Opinion suggests that “equivalent” comments from the “same user” can be covered, but nothing else. This is somewhat in line with the Court’s case law to date. In terms of the latter, Member States already seem to take different approaches to injunctions, with some (notably Germany) being markedly broader in their attempts to impose monitoring obligations. While one could wish for clearer guidelines from the Court, the Facebook judgment introduces nothing new in terms of the existing law. It is true that distinguishing between general and specific monitoring may be difficult to resolve in specific cases. It is also possible to take issue with the EU policy on monitoring and to argue for or against the general/specific method. But until that provision is modified, the Court should follow the AG’s Opinion.

  1. Martin Husovec, Injunctions Against Intermediaries in the European Union (CUP 2017), p. 118