The Department of Justice’s Review of Section 230: Where Does the EU Stand?

The US Department of Justice published today its proposal for a review of Section 230 of the 1996 Communications Decency Act. This proposal is not the only document to have come out today (see Senator Hawley’s somewhat unrealistic idea here) but it is the official US government position. Section 230 – which insulates intermediaries against liability for content posted by third parties – is arguably among the most important provisions for the Internet ever drafted. Its review will not only have worldwide impact (thanks to the global presence of US Internet companies) but will also influence the EU’s ongoing attempt to revise its own regime.

It is worth mentioning that President Trump’s earlier problematic Executive Order, which also targets Section 230, is not the subject of this post (although it detracts from what may be a real need to reform Section 230).1

Section 230

S.230 CDA is the US law giving protection from liability to intermediaries who publish third-party content. Section 230(c)(1) essentially says that no internet intermediary should be treated as the publisher or speaker of content posted by third parties. At the same time, the “Good Samaritan” provision in Section 230(c)(2) gives immunity to providers who voluntarily take action in good faith to remove “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable” content, whether or not such content otherwise enjoys constitutional protection. This provision protects content-moderation decisions irrespective of their motives, as long as they are taken in “good faith”.

It is important to note that the protection of S.230(c)(1) applies irrespective of whether the defendant intermediary knew about the content, acted in good faith, or was neutral. The main idea of the section is to eliminate frivolous lawsuits that would otherwise target intermediaries, who are not only frequently better placed financially to respond but are also a known target, as opposed to often anonymous posters. The system thus created has been fairly robust, with a considerable body of case law to support it.

While S.230 covers regular speech, a different provision – S.512 of the 1998 Digital Millennium Copyright Act (DMCA) – applies to copyright, essentially protecting bona fide intermediaries from copyright infringement lawsuits. Unlike S.230, S.512 requires that intermediaries lack knowledge of the infringements and expeditiously remove the content upon obtaining that knowledge. This provision too is the subject of criticism and calls for a review (on its effectiveness see here).

Somewhat surprisingly, both the Democrats and the Republicans are arguing for modifying or even revoking Section 230, although for opposite reasons. Joe Biden suggested that it should be revoked completely, while Bernie Sanders said that S.230 was written “before the current era of online communities, expression, and technological development” and that large profitable corporations should be held responsible. Meanwhile, Republicans claim that platforms are systematically biased against them (although no evidence of this exists) and have proposed measures to curb what they call censorship. Both political groups show a staggering lack of understanding of the underlying reasons for and the actual operation of Section 230, as evidenced not only in flawed proposals but also in a multitude of contradictory statements in the press.

Department of Justice Review

The DOJ document is not draft legislation but a position paper outlining the Department’s views. It is based on four principles:

  • Large tech platforms are no longer nascent or fragile and are not in need of protection2
  • S.230 has been abused by large platforms to maintain their dominant position3
  • Core immunity for defamation needs to be preserved to foster free speech
  • Hosting defamatory content needs to be distinguished from enabling criminal activity

Having these in mind, the document indicates four areas for reform:

  1. Incentivising Online Platforms to Address Illicit Content. Here the “Bad Samaritans” would lose immunity. This includes actors who purposefully facilitate criminal activity or material, but also those who purposefully “blind themselves and law enforcement to illicit material”. Separate carve-outs exist for child abuse, terrorism and cyber-stalking, as well as for actual knowledge or court judgments. The latter is a clear departure from S.230, which provides immunity even where actual knowledge exists.
  2. Clarifying Federal Government Enforcement Capabilities to Address Unlawful Content: clarifying that federal civil enforcement actions are not covered.
  3. Promoting Competition: clarifying that Federal antitrust claims are not covered.
  4. Promoting Open Discourse and Greater Transparency: this would seek to replace the words “otherwise objectionable” in Section 230(c)(2) with “unlawful” and “promotes terrorism.” Further to that, a statutory definition of “good faith” would be provided.

The document proposes both more and less content moderation at the same time.

On one hand, the DOJ wants the removal of immunity for “Bad Samaritans” in situations involving federal criminal law. The effect of this provision would be to increase liability for sites that do not remove material. The material the DOJ wants moderated is not only terrorist content, child pornography or cyber-stalking but any activity that violates federal criminal law, together with “purposeful blindness” in relation to such material. The main danger here is the proliferation of frivolous cases in which removal is demanded without likelihood of success but with a view to forcing the platform to act upon request.

On the other hand, the proposal would significantly change the “Good Samaritan” provisions. The proposal claims that “the new statutory definition would limit immunity for content moderation decisions to those done in accordance with plain and particular terms of service and consistent with public representations. These measures would encourage platforms to be more transparent and accountable to their users.” By tying the removal to terms and conditions and removing the “otherwise objectionable” wording, the document aims to limit the circumstances under which platforms can moderate content (in line with President Trump’s ideas about Twitter and Facebook). The new setup would mean no protection in cases where platforms remove content that is anything but directly unlawful. In other words, if a platform – exercising its First Amendment rights – removes content which is not directly unlawful but is, in its view, objectionable (for instance because it is untruthful), it would lose the protection of S.230. This dramatically increases the instances in which platforms would expose themselves to liability.

The contradictory directions in which the proposal is going are confusing. The First Amendment implications of removing the “otherwise objectionable” wording are obvious and will be debated extensively. While vague bipartisan support for the reform of S.230 exists, any proposal would first have to be introduced in Congress and then pass both the Democrat-controlled House of Representatives and the Republican-controlled Senate. This is not likely at present.

European Law on Intermediaries Today

European rules on intermediary liability, Articles 12-15 of the 2000 E-Commerce Directive (ECD), are slightly newer than their American counterparts. Unlike the US regime, they draw no separation between copyright and all other cases – everything is covered by the same set of rules. The text is somewhat simpler and more direct.

The main idea is that information society service providers are not liable in cases where they are mere conduits (Article 12 ECD), where they are caching (Article 13) and where they are hosting (Article 14) material, provided certain conditions are met. On top of that, no general obligation to monitor exists (although monitoring to remove specific illegal content is allowed).

Of particular interest is Article 14 which insulates intermediaries from liability in cases where:

  • the provider does not have actual knowledge of illegal activity or information and, as regards claims for damages, is not aware of facts or circumstances from which the illegal activity or information is apparent; or
  • the provider, upon obtaining such knowledge or awareness, acts expeditiously to remove or to disable access to the information.

Unlike S.230, Article 14 on hosting requires the lack of actual knowledge as well as expeditious removal upon obtaining such knowledge. Importantly, however, where US law singles out the “Good Samaritan” platforms, EU law was traditionally based on the distinction between active and passive platforms and says nothing about “Samaritans”, good or otherwise.4 The more active a platform, the less likely it is to enjoy the protection of Article 14. While the CJEU clarified in L’Oréal v eBay that moderation in itself does not automatically bring liability, it did say that “optimising the presentation” of offers might constitute such an active role. The European Court of Human Rights has been even stricter in its Delfi line of cases, holding that a platform’s moderation can itself ground liability.
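
To make the contrast concrete, the two immunity tests can be reduced to a rough sketch in code (a simplification for illustration only; the boolean parameters stand in for fact-intensive legal assessments, not statutory definitions):

```python
# A minimal sketch contrasting S.230(c)(1) CDA with Article 14 ECD.
# The predicates simplify the statutory tests described above.

def s230_immunity(is_third_party_content: bool) -> bool:
    # S.230(c)(1): immunity attaches to third-party content regardless
    # of knowledge, good faith or neutrality.
    return is_third_party_content

def article_14_immunity(active_role: bool,
                        actual_knowledge: bool,
                        expeditious_removal: bool) -> bool:
    # Article 14 ECD: an "active" platform risks losing protection; a
    # passive host is protected if it lacks knowledge or, once it
    # obtains knowledge, removes the material expeditiously.
    if active_role:
        return False
    return (not actual_knowledge) or expeditious_removal
```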

The cases interpreting Articles 12-15 have been relatively numerous and have significantly changed the operation of these articles, in particular in copyright cases. In spite of that, Articles 12-15 remain among the most stable and least controversial of EU digital laws, with the Commission, at least superficially, still arguing for their preservation.

Proposals for Changes in the EU

While sporadic comments on the need to change Articles 12-15 have occasionally been made, the first sign of a more serious reform came in the 2015 Digital Single Market Strategy, which suggested that slow removal of illegal content might necessitate “new measures”.

The next step was made in the highly flawed and controversial Copyright in the DSM Directive, which essentially provides that an online content-sharing service provider performs an act of communication to the public and cannot benefit from Article 14 protection unless it either has a valid agreement with the rightholders or employs “high industry standards of professional diligence” and “best efforts” to ensure the removal of works for which the rightholders have provided information.

In 2020, the new Digital Strategy was published. It directly promises to look into the “responsibilities of online platforms and information service providers” and to “reinforce the oversight over platforms’ content policies”. While no indication exists of what the draft Digital Services Act (coming later in 2020) might contain, the recent Inception Impact Assessment papers give a somewhat clearer picture. They indicate that the EU is considering several policy options.

The first is essentially to maintain the present regime, with the E-Commerce Directive as the main instrument and the Recommendation on Illegal Content, the Copyright in the DSM Directive, the AVMSD and the Terrorist Content Regulation as sector-specific measures. The second would be a relatively limited intervention making the procedural obligations of the 2018 Recommendation on illegal content binding. The third is a more comprehensive change, modernising the E-Commerce Directive. In the Commission’s own words, this would

clarify and upgrade the liability and safety rules for digital services and remove disincentives for their voluntary actions to address illegal content, goods or services they intermediate, in particular in what concerns online platform services. Definitions of what is illegal online would be based on other legal acts at EU and national level.

It would also mean “specific, binding and proportionate obligations, specifying the different responsibilities in particular for online platform services”. Significantly, “further asymmetric obligations” might be needed. The asymmetry referred to here means the difference between types and sizes of platforms. In other words, not all rules would apply to all platforms. At this point a list of specific obligations is introduced:

  • harmonised obligations to maintain ‘notice-and-action’ systems covering all types of illegal goods, content, and services, as well as ‘know your customer’ schemes for commercial users of marketplaces
  • rules ensuring effective cooperation of digital service providers with the relevant authorities and ‘trusted flaggers’ (e.g. the INHOPE hotlines for a swifter removal of child sexual abuse material) and reporting, as appropriate
  • risk assessments could be required from online platforms for issues related to exploitation of their services to disseminate some categories of harmful, but not illegal, content, such as disinformation
  • more effective redress and protection against unjustified removal for legitimate content and goods online
  • a set of transparency and reporting obligations related to these processes
  • transparency, reporting and independent audit obligations with regard to algorithmic systems, to ensure accountability and better oversight.

Of particular interest is the idea that “gatekeeping” platforms might have to be subject to ex ante rules. This is in line with the indication that asymmetric rules might be needed. The ex ante regime is presently applied in EU telecommunications law, where certain concepts from competition law (significant market power) are borrowed to impose remedies on market actors in danger of violating competition rules. Not only are these remedies applied asymmetrically – to some actors and not to others – but they are also imposed before a violation occurs (to prevent it).

Three policy options are considered here. The first is to revise the horizontal framework set in the Platform-to-Business Regulation. The second is to adopt a horizontal framework empowering regulators to collect information from large online platforms acting as gatekeepers. The third and most interesting is the potential introduction of an ex ante regulatory framework. This, in turn, would have two sub-options. The first would be a black list of prohibited practices. The second would be the “adoption of tailor-made remedies addressed to large online platforms acting as gatekeepers on a case-by-case basis where necessary and justified”. “Platform-specific non-personal data access obligations, specific requirements regarding personal data portability, or interoperability requirements” are given as examples of remedies.

None of the options are mutually exclusive.

Concluding Remarks

In the view of this author, the Commission’s attempt to reform the E-Commerce Directive could prove more focused and less problematic than its US counterpart in each of the scenarios outlined above. Whether this is the case depends mainly on the ability to preserve Articles 12-15, which have proved robust. In our view, the attempt in Article 17 of the Copyright in the DSM Directive to water down this protection was misguided and should be reversed, for reasons that have been extensively debated in the literature.

The use of the term “responsibility” in this and a number of other documents might suggest the desire to limit the proliferation of illegal content but is, in the view of this author, vague and problematic. That some platforms (the ‘large’ ones) act illegally may seem superficially obvious and may elicit calls for intervention and more active behaviour, but intermediaries are still predominantly just that – intermediaries. They are usually accused of inertia in removing content, illegal or otherwise, not of political or economic bias. The fact that a whole set of tools from the arsenal of copyright, competition, criminal, administrative and tax laws exists – and is not used – should temper the EU’s desire to add to that arsenal. Nevertheless, we believe that two factors are important and may determine the success of future EU rules.

First, the EU is attempting to achieve the move towards more “responsible” platforms in significantly different ways than the US. Rather than relying on its S.230 equivalent alone, or attempting an omnibus provision to replace the ECD, it has passed a number of laws and soft-law instruments on illegal, terrorist and otherwise problematic content. While this may appear more flexible and avoid big political clashes, it also ushers in a specific form of rule-by-decree, where recommendations (backed by threats of further legislative action) are used to force platforms into more responsible behaviour. If the recommendations are turned into directives and regulations with proper democratic and regulatory oversight (and this is one of the policy options), this problem disappears and the flexibility of the modular solution remains. Put differently, since the reality is complex, the laws need to be complex and specific too.

Second, the suggestion that ex ante sector-specific asymmetric remedies might be applied to gatekeeping platforms is original and potentially capable of solving the problems arising from disparities in platform size, type, purpose and business model. The danger is that rules so drafted have not been tested in anything but the telecoms sector (where the EU has several decades of experience) and would need careful drafting and even more careful monitoring.

In our view, what is needed is evidence-based, sector-specific intervention, with the use of experimental methods in cases where everything else fails. Not only does this preserve the immensely important liability insulation but it also achieves the specific goals when and where needed.


  1. Trump’s order came about after Twitter marked two of his posts with its fact-check stamp. The First Amendment of the US Constitution protects against government attempts to abridge the freedom of speech but also protects private companies’ moderation as a form of speech. Section 230 permits content moderation, making acts such as Twitter’s lawful. ↩︎

  2. This is a vague reference to the fact that platforms differ in size and impact. See below for a possible EU solution to this problem. ↩︎

  3. In the view of this author this is wrong and confuses dominance and the abuse thereof – which may or may not be an issue – with the abuse of Section 230. ↩︎

  4. While the active/passive distinction was also used in the US, the courts there developed it with more sophistication. ↩︎

More on the Rise of Robots: Why Regulators Should Help Spread Robotics and Why We Ought to Embrace Robots

if we do not have a clear idea what problems the ‘robot laws’ are supposed to solve, we should almost certainly not have any robot laws

Last time, I looked at AI and robots in general and concluded that fear – including nonsensical statements about killer robots – has been the dominant paradigm through which humans have seen robotics and artificial intelligence. I also concluded that the EU’s new policy on AI contains some useful approaches but may fall victim to that same fear. In this brief addendum, I will look more closely at robots and argue that the same reasoning applies to them and that a more courageous approach – in which law can play a positive role – can be taken.

Unlike AI, which is perceived as pervasive but is poorly understood, robots form a clearer picture in the popular mind. It should then come as a surprise that the EU has little to nothing to say about them. While large sections of the AI policy papers discussed last time apply to robotics, clear policy statements and visions about robotics are absent. Instead, one gets funding initiatives, a base for knowledge-sharing and cooperation, and a flagship initiative on robotics. None of these amounts to a coherent policy.

Central to the debate about robots and the positive role they are to play is the question of why (not how) regulation should step in. A typical misconception is that lawmakers need to solve ethical questions in order to improve daily life. (Often bundled with that are issues such as legal personality for robots). According to this view, the problem is the lawmakers’ poor understanding of the technology and their lack of ability to make critical decisions about ethics. I would argue that ethics, debates about legal personality or liability for rampaging robots have little to do with the problem and are distracting from the broader picture. Robots are, simply put, not the killer machines of our imagination. Nothing illustrates this more than the latest health crisis.

In the midst of the Corona crisis, a simple fact – that robots do not get sick – was overlooked. Robots perform a vast and ever-increasing number of tasks, conveniently eliminating humans where no humans should stand. In a post-pandemic economy, robots have the ability to fill in the gaps where humans are not allowed or not able to interact. Robots serve in the delivery chains for our many online purchases. Robots help make the goods which we consume. Robots are facilitators in every step in the food production and distribution chain.

Innovation is not the keyword associated with robots. Instead, robots are thought of as facilitators in the value chain. Nevertheless, robots have been used in astonishingly new ways, innovating sectors one would not associate them with. Robots are saving the food supply chain. Robots can help treat Coronavirus patients. Robots innovate transportation. Robots improve industrial safety. Robots help fight climate change.

Looking at new German industrial policy, Lars Klingbeil of the German Social Democratic Party, arguing against the fear mantra, says that with an offensive industrial policy, “good jobs, new technologies and social prosperity result—in that order”. Robots and AI bring jobs and growth in an ageing society.

On average, studies have found real albeit limited negative effects of robotics. An MIT study found that “adding one additional robot per 1,000 workers reduced the national employment-to-population ratio by about 0.2 percent, with some areas of the U.S. affected far more than others.” These effects are not imagined, although they are minor and almost certainly offset by the positive effects that robotics bring. In fact, there is evidence that robots are much less disruptive than believed and that most of the disruptive effects already occurred generations ago. Another study found that robots cause few industrial accidents. Over a 30-year span, 37 robot-related accidents occurred, with 27 resulting in a worker’s death. This is a minuscule number compared to the many thousands of worker fatalities resulting from non-robot-related incidents.

Various models for regulating robots have been proposed. We suggest that any future regulation should take two starting points. First, the existing regulatory regime is largely adequate. Instead of assuming that we need to develop new rules, we should take the functional approach and address problems in an ad hoc manner when and if they arise. Attempting to pass universal rules would risk addressing unrelated phenomena. It is only the robots’ ability to be autonomous that presents a risk, and the technology that would make them truly autonomous is not ready. Second, it is doubtful that ‘robot laws’ can meaningfully be separated from laws applying to AI or technology in general. Put differently, if we do not have a clear idea what problems the ‘robot laws’ are supposed to solve, we should almost certainly not have any robot laws. Robotic governance, as a way of providing a framework for dealing with autonomous devices, may be a more adequate way of thinking about the problem.

Even when we are fascinated by robots, we remain concerned about questions such as dignity, responsibility and liberty. These worries should not be dismissed lightly. I would argue, however, that nothing we have not already faced is made worse by robotics. If we fall victim to killer robots, it will be because we have always already fallen victim to technology.

the bonds that technology imposes upon us will be broken not through the absence of technology but through better understanding of its meaning.

Under the circumstances, robotics has demonstrated that our approach needs at least to be modified and become more courageous. While bias and privacy issues need to be addressed, as does the fear that automation will displace jobs, robotics is inherently no more disruptive than other forms of technology tasked with turning nature into a resource. As Heidegger taught us, the bonds that technology imposes upon us will be broken not through the absence of technology but through better understanding of its meaning.

Robots and AI in EU Law & Policy: A Brief Comment


When Karel Capek coined the term ‘robot’ in his 1920 play R.U.R., the idea of mechanical servants was not new. For thousands of years, mankind had played with the idea of building artificial companions, contraptions that would serve them, fulfilling their wishes and taking upon themselves the tasks their creators thought difficult or demeaning. Capek’s play established an important idea, one that would be dominant in the 20th century and that follows us today – that machines are not to be trusted. From Capek’s own R.U.R. to Fritz Lang’s 1927 Metropolis, to Do Androids Dream of Electric Sheep?, The Terminator series and The Matrix, Western culture is full of images of rogue machines. The robot as an automaton full of potential but ever ready to rebel against its creator has informed the few attempts to understand how such a threat – real or imagined – might be regulated.

To the fear of machines can be added the general fear of artificial intelligence (AI), often confused with robots.1 Here, the threat of autonomous machines or deadly robots has given way to the fear of machines making decisions that affect humans without another human being able to intervene. Fascinated with the ability of algorithms to improve efficiency, we are also fascinated with the threats that algorithm-mediated democracy presents. While dystopian images of control through AI easily produce revulsion, the reality – as is often the case – is more complex and more subtle.

The question we are asking here is simple: how has the European regulator reacted to these two phenomena? The problem has again caught the attention of the public after recent EU efforts to form a more coherent policy (for comments, see here, here and here).

2018 saw the Communication on Artificial Intelligence for Europe. While efforts existed before this, the instrument is the first attempt to provide a coherent response to the challenges of AI and robotics in the EU. Prior to that, the most significant rule was Article 15 of the 1995 Data Protection Directive which, consistent with the ‘fear’ paradigm, provided that no person should be subject to a decision producing significant legal effects “based solely on automated processing of data”.

It is, perhaps, interesting and telling that the 2020 Digital Strategy contains no ideas on regulating AI other than promising a white paper, and does not mention robots at all. The 2018 AI Communication, on the other hand, contains three fundamental pillars:

  • Being ahead of technological developments and encouraging uptake by the public and private sectors
  • Preparing for socio-economic changes brought about by AI
  • Ensuring an appropriate ethical and legal framework

As part of the third pillar, the Commission published in 2020 the promised White Paper as well as a Report on the safety and liability implications of AI, the Internet of Things and robotics. Both documents form part of the 2020 Digital Strategy and the EU’s vision, and together constitute the first coherent EU policy on AI and robotics.

The third pillar combines initiatives from different legal fields, promising, among other things, AI ethics guidelines, a reinterpretation of the Product Liability Directive and liability and safety frameworks for AI, the Internet of Things and robotics. The 2020 White Paper says this of regulating AI:

While AI can do much good, including by making products and processes safer, it can also do harm. This harm might be both material (safety and health of individuals, including loss of life, damage to property) and immaterial (loss of privacy, limitations to the right of freedom of expression, human dignity, discrimination for instance in access to employment), and can relate to a wide variety of risks. A regulatory framework should concentrate on how to minimise the various risks of potential harm, in particular the most significant ones.

The “most significant” risks are then named as those relating to fundamental rights and those relating to safety and liability. For each, a number of relatively vague examples are given, followed by suggested “possible adjustments” to the EU regulatory framework.

Significantly, a low-risk/high-risk approach to AI activities is introduced, meant to ensure that regulatory intervention is proportionate. The essential idea is that high-risk activities need to conform to safety, fairness and data-protection requirements, while low-risk ones ought to be regulated significantly more leniently. The high-risk sectors are those where, “given the characteristics of the activities typically undertaken, significant risks can be expected to occur”. Healthcare, transport, energy and the public sector are given as examples. The second criterion is whether the technology is used in such a manner that significant risks are likely to arise, with “injury, death or significant material or immaterial damage” given as examples.
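
Read together, the two criteria appear cumulative. As a rough illustration only (the sector list and the risk predicate below are placeholders rather than the White Paper’s exhaustive terms, and exceptional cases are not modelled), the test might be sketched as:

```python
# A rough sketch of the White Paper's high-risk test. The sector set is
# illustrative, not an official list.

HIGH_RISK_SECTORS = {"healthcare", "transport", "energy", "public sector"}

def is_high_risk(sector: str, use_creates_significant_risk: bool) -> bool:
    # An AI application is treated as high-risk when it is deployed in a
    # listed sector AND used in a manner likely to cause injury, death
    # or significant material or immaterial damage.
    return sector in HIGH_RISK_SECTORS and use_creates_significant_risk
```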

In terms of liability, the Commission plays with some familiar concepts, such as shifting the burden of proof to the defendant, strict liability in certain cases and allocating liability to the actor best placed to bear it.

In terms of enforcement, the White Paper suggests maintaining the current decentralised enforcement structures and sector-specific mechanisms (so that e.g. pharmaceutical authorities maintain competences relating to pharmaceuticals, etc.).

The low/high-risk approach has been criticised, mainly on account of the difficulty of distinguishing between various types of risk. We would suggest, however, that the approach is reasonable. Most activities related to robotics and AI can either be handled through existing legislation or fall in the low-risk domain. No radical legislation is suggested, while a clear contrast is kept between low-risk and high-risk activities.

Some risks of premature regulation exist. Commenting on the White Paper, the Global Digital Foundation paper suggests that the attempt to regulate both in similar measure is based on a misguided notion of human-like AI producing harmful effects. Instead, the paper suggests, AI affecting safety (transport, health, etc.) should be regulated, while AI affecting human rights can rely on the already existing rules on non-discrimination. An attempt to regulate the latter might, it is argued, result in various fairness mechanisms being built in, attempting to achieve a degree of neutrality and minimise discrimination. Such an attempt might achieve the opposite result, as AI is moved further in the direction of the dystopias we know and recognise.

“The essence of technology is by no means anything technological,” says Heidegger. The essence of artificial intelligence and robotics is also not technological but lies in our relationship to it. In that sense, we are delivered to the mercy of AI and robotics only if we regard them as something neutral. While our fascination with AI and robots has for decades been tempered by our fear, our modern views are more confused. The robots of yesterday and the limited uses to which AI could until recently be put have given way to pervasiveness and, with it, more confusion. Such confusion requires a measure of recognition and honesty. At the moment, robots and AI are an opportunity: the threats are limited and the need for direct intervention confined to the most radical cases. EU policy has taken the right step in formulating a balanced approach. If anything is able to taint the picture, it will be thinking based on fear.


  1. While robotics is a branch of technology that deals with programming autonomous machines, those that do work “by themselves”, AI is a branch of computer science tasked, in the words of John McCarthy, with “making a machine behave in ways that would be called intelligent if a human were so behaving”. Finally, machine learning is the ability of a machine to learn without being explicitly programmed. ↩︎

New Commission Digital Strategy – What Does it Mean?

A leaked version of the Commission’s new digital strategy has been published today on Euractiv. It is worth noting that there has been some pressure on Commissioner Vestager to come forward with a timeframe for the reform not only of the EU’s competition framework but also of its digital laws. The present draft should be seen in light of the EU’s efforts to be more competitive on the global stage.

For all practical purposes, this document is meant as a replacement for the 2015 Digital Single Market Strategy and is, as such, very important. This post is not meant as an analysis of all of its main points but wants instead to throw light on three potentially questionable ideas.

The first is that “principles that apply to our traditional industry […] also have to apply to digital industries”. Further to that, “existing laws that govern the behavior of traditional industries need to be adapted to the specific circumstances under which new digital business models operate.” I have argued in a recent article that functional equivalence – the desire to apply legacy regulatory models to new problems – lies at the core of the EU’s regulatory effort. But the “like should be regulated alike” adage is wrong in principle and can be dangerous in practice. Functional equivalence causes innovative and disruptive services to be subject to small and incremental regulatory changes rather than the necessary complete remodeling. In its crudest form, functional equivalence has meant the literal copying of solutions from legacy technologies. Disruption is the motor of the modern economy. It is in its nature to create new realities that demand new solutions. Three questions must be answered before functional equivalence can be applied:

    • is the disruptive service innovative?
    • does the traditional framework make the service impossible or significantly hamper it?
    • are there any other reasons (e.g. public policy) for subjecting it to the traditional framework?

If the answer to all three questions is positive, the lawmaker should refrain from using functional equivalence. In its present form, the demand to apply traditional solutions is out of place.

Possibly the most significant change (and, ironically, the one that is as far from functional equivalence as possible) is the “assessment of options for an ex ante regulatory framework for gate-keeping platforms with significant network effects as part of Digital Services Act Package”. This seemingly innocuous remark hides a potentially revolutionary idea. Ex ante sector-specific regulation is the current regulatory model applied to telecommunications (and telecommunications only). Traditional competition law applies ex post – it identifies a problem that has already occurred and applies a remedy to it. Telecommunications, gradually liberalized in the 80s and regulated from the 90s onwards, required a significantly different regime. It was no longer enough to wait for a failure to occur and then address it. It was necessary to identify potential market failures in advance and apply appropriate remedies in order to prevent future occurrences. A hybrid regime was thus developed: while the guiding principles came from traditional telecoms law and the market-definition methodology from competition law, the enforcement mechanism was based on the ex ante application of remedies. The ultimate aim – as yet unachieved – was for only the competition laws to apply.

The current proposal would presumably introduce something very similar for gate-keeping platforms. A preliminary assessment of the market power of relevant platforms would be conducted. Based on that assessment, a set of remedies would be applied to those markets or individual platforms identified as having significant market power (SMP).

The approach outlined above has effectively been in use since the early 90s in EU telecoms law. It is, in principle, possible to apply it to platforms. In some respects, these platforms resemble telecoms operators: a number of them are dominant globally or regionally, a fair number compete only with a small number of alternative providers, and a significant number either cannot be replaced or are perceived as irreplaceable by their users. The remedies applied to telecoms problems are very specific: access to facilities, regulated pricing, etc. Remedies that would be applied to platforms would have to be agreed on separately and would almost certainly be very different from those existing in the telecoms world. No indication is given in the strategy document of what they might look like. On the other hand, opinion on whether ex ante sector-specific regulation has really been effective is divided. While there is some basis for claiming that access to existing facilities has been improved, it also seems that the framework has not been equally good at spurring innovation. Applying the model to platforms would be something hitherto untested, with most of the knowledge from the telecoms world being inapplicable.

The third point of interest is the diversity of the instruments, approaches and enforcement mechanisms offered. The paper contains four focus areas: technology that works for people, a fair and competitive digital economy, a digital and sustainable society and an international dimension. In each, a set of diverse key actions is proposed (not all are listed in this post).

The first, technology that works for people, contains the Digital Services Act, announced in President von der Leyen’s program. Unsurprisingly, the act, which is meant to replace the central E-Commerce Directive, is supposed to increase the responsibility of online platforms – a task which will undoubtedly create as much political tension as the DSM Copyright Directive. At the same time, artificial intelligence, which features prominently in the Commission’s program with its promise of “legislation for a coordinated European approach on the human and ethical implications of Artificial Intelligence”, is addressed only through a promised White Paper. Furthermore, media and democracy action plans are promised, as are a digital education plan and “initiatives” on platform workers.

In its effort to achieve a fair and competitive digital economy, the Commission promises a Communication on an EU data strategy, a framework for data governance and a Data Act on B2G data sharing. Furthermore, initiatives on digital capacities, Gigabit connectivity and taxation are offered. The most prominent part of this section is the “possible adaptation” of EU competition law mentioned above.

The sustainability part, practically non-existent in previous strategies, contains a number of interesting measures, including carbon-neutral data centers, a Circular device initiative, improved EU health records and 5G corridors for automated mobility and railways.

The overwhelming conclusion is that this is a document less focused on rigid legal solutions and more exploratory in seeking innovative approaches to governance. While its predecessor targeted the three EU regulatory siloes (telecoms, e-commerce and AVMS), suggesting changes in each, the leaked draft is problem-centered and horizontal in its approach. Its insistence on “transparency, accountability, empowerment and inclusion” is also to be welcomed. Two of the many measures at least, if achieved, would have a significant impact. The first is the Digital Services Act. The second is the ex ante regulatory model for platforms.

At the same time, the Commission seems to underestimate the degree to which it is falling behind in 5G and next-generation technologies. Only two measures, 5G corridors and 5G cost reduction, have some substance to them. Little is said about deployment and take-up challenges or the many and diverse regulatory obstacles. Even less is said about regional differences.

Is the paper visionary? It does not appear to be. Is the paper significantly different from the 2015 DSM Strategy? Possibly. It is more global in focus, its aims are less clichéd, its goals are stated more clearly. Are the measures proposed potentially achievable? This is difficult to say. Two of its most important contributions, the Digital Services Act and the possible new competition regime, are highly politically charged and technically difficult. The rest dissipates into a sea of white papers, action plans and initiatives. It is unlikely that even a majority would have effect, but some might. This is where the problems arise. While it is true that the E-Commerce Directive dates to 2000 and that a rethinking of the approach might be needed, an achievement here would possibly be significantly less important in the long term than improving 5G deployment, creating a good basis for an AI-based economy or understanding the link between technology and sustainability (one that goes beyond recycling and carbon-neutral data centers). The paper is a good outline for rethinking the present challenges but presents a hazy and confused vision of Europe in 10, 15 or 20 years.

The CJEU AirBnB Judgment: Another Look at Composite Services in the EU

On December 19, the CJEU decided the highly anticipated AirBnB case. The case arose out of a reference for a preliminary ruling from the Tribunal de grande instance de Paris, which wanted to know whether a service “consisting in connecting hosts with accommodation to rent with persons seeking that type of accommodation” (such as AirBnB) constitutes an information society service and is thus subject to all the benefits that the E-Commerce Directive (ECD) provides.1

The case underlines a much deeper and more important question: should platforms, as providers of composite services (those consisting of electronic and non-electronic parts), be subject to e-commerce rules, sector-specific rules, or both?
The consequences can be dramatic. A transport platform such as Uber would typically claim that it acts as an e-commerce service only and plays little to no part in the provision of actual transport services, which are carried out by independent contractors. As such, the argument goes, it should not be subject to local transport, labour or other laws in relation to matters such as licensing, working conditions, insurance, etc. If, on the other hand, the argument is accepted that platforms are akin to transport, accommodation and other services, they become subject to a variety of sector-specific laws, making them potentially less competitive. Incumbent service providers (such as traditional taxi associations, hotels, etc.) have lobbied for this latter view with varying degrees of success.

The first significant take on the problem came in the Uber cases (C-434/15, see here, and C-320/16, see here). AG Szpunar’s main argument in those cases concentrated on, first, determining whether the service is composite and, second, finding the essential element of the composite service. If the service is not composite but obviously non-electronic, the ECD does not apply at all. Likewise, if the service has only an electronic element, the problem disappears. For true composite services, on the other hand, where the part not covered by the ECD (e.g. transport laws) affects the composite service, the relationship between the two parts has to be analysed to determine the extent to which the different regimes apply. Any other conclusion would render the EU’s liberalising efforts completely meaningless, as the non-electronic part would always trump the electronic one.

The key tool in determining which part is dominant, in the AG’s opinion, is whether the electronic activities have self-standing economic value. If they do, the full liberalising effect of the ECD must be applied, at least to the electronic part. Where this is not the case, further analysis is needed to determine which part is dominant in a relationship of dependence. To determine whether this influence exists, elements such as price determination, safety, working conditions, the ability to work for other companies, etc. are looked at. In Uber, the AG opined, not only was the platform part entirely dependent on the transport element (and therefore not economically self-standing) – Uber would have no value without it – but Uber also exercised decisive influence over that element. In a way, it is Uber itself that is the provider of transport services.

The Court followed the AG in its decision.

In his AirBnB opinion, AG Szpunar reiterated the main arguments of the Uber cases but also refined them further. The two decisive criteria for determining whether a service is an information society service are a) whether the platform offers services having a material content and b) whether the service provider exercises decisive influence on the conditions under which such services are provided. The first criterion looks, essentially, at whether the service is composite or not. If it is not, the ECD applies. Unlike Uber, which does not exist without the material part, the link between AirBnB and the services it intermediates is more tenuous. Accommodation providers are not tied to the AirBnB platform but are free to provide their services elsewhere or offer them on several platforms at the same time. The picture is similar in respect of the second criterion: unlike Uber, AirBnB has significantly more limited control over the non-electronic part of the composite service.

The main dilemma that has permeated the debate about platforms as providers of composite services has been and remains: how can the innovation that disruptive services bring be protected while maintaining fair competition in the market? The elements of the AG’s answer are found in paragraphs 61-68 of the Opinion. On one hand, it would be wrong for the suppliers of innovative services to be excluded from the benefits of the ECD simply because they have created a composite service otherwise also subject to sector-specific laws. On the other, it would be wrong for the providers of such services to be privileged solely on account of the ECD’s applicability, from which the others do not benefit. The answer lies not in the fact that non-electronic services are connected to electronic ones but in the degree of control the platform has over the non-electronic part. The more significant that control is, the less likely the provider is to benefit from the ECD regime and the more it is drawn within the reach of sector-specific laws.

The Court in its judgment followed the AG’s main points. AirBnB does not provide accommodation but helps the providers and the seekers find each other. Furthermore, AirBnB is not essential in this respect: other websites or, indeed, offline services can be used.

There is little to be surprised about here. The judgment does not contradict Uber but clarifies its main points. At present, the decision tree is as follows, with a sketch in code after the list:

  • Is the service only electronic or only non-electronic? If either, the respective regime applies.
  • If the service is composite, does the electronic part have self-standing value? If yes, apply the ECD.
  • If not, find the dominant part by applying a variety of criteria to determine the level of influence of one part over the other.
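
A minimal sketch of this decision tree, with each fact-intensive legal test reduced to a boolean placeholder (the names are illustrative, not the Court’s terminology):

```python
# A simplified model of the Uber/AirBnB decision tree described above.

def applicable_regime(electronic_only: bool,
                      non_electronic_only: bool,
                      electronic_part_self_standing: bool,
                      decisive_influence_over_material_part: bool) -> str:
    if electronic_only:
        return "ECD applies"
    if non_electronic_only:
        return "sector-specific rules apply"
    # The service is composite: ask first whether the electronic part
    # has self-standing economic value.
    if electronic_part_self_standing:
        return "ECD applies, at least to the electronic part"
    # Otherwise the dominant part decides, via the decisive-influence
    # criteria (price determination, safety, working conditions, etc.).
    if decisive_influence_over_material_part:
        return "sector-specific rules apply (the Uber outcome)"
    return "ECD applies (the AirBnB outcome)"
```

On this model, Uber fails at the decisive-influence step, while AirBnB, whose control over the underlying accommodation service is more limited, does not.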

This seems to be a reasonable approach. Anything else would fall in either of the two extremes: the automatic subjecting of disruptive services to sector-specific laws or the unchecked ability to circumvent sector-specific laws simply by claiming the status of an electronic platform.

There is no doubt that a variety of cases will arise in the future where it will be difficult to apply the Court’s criteria. This is a result of the complexities that reality presents. Different parts of composite services are often intermingled to an extent that calls legal simplifications into question. National courts will then have to look into the level of control that platforms exercise over the non-electronic parts, and that exercise is not necessarily a simple one.

The conclusion must be that modern services simply do not lend themselves to “legacy” solutions designed for a different world. The present solution, as applied by the CJEU, is temporary. If it truly finds it necessary, the legislator at either national or EU level will have to agree on new rules specific to different platforms. We do not, at present, know that this is the case. This is not necessarily an invitation to go in that direction but, at best, a warning that a time may come when patching up the present regime will no longer be sufficient.


  1. For a more detailed analysis see my article: Savin, A. (2019). Electronic Services with a Non-electronic Component and their Regulation in EU Law. Journal of Internet Law, 23(3), 1, 14-27. ↩︎

The CJEU Facebook Judgment on Filtering with Global Effect: Clarifying Some Misunderstandings

On October 3 the CJEU delivered the judgment (text of the judgment and AG Szpunar’s opinion) in C-18/18 Eva Glawischnig-Piesczek v Facebook Ireland Ltd. The case concerned a request by an Austrian politician for an interim measure removing a defamatory post already declared as such in separate court proceedings in Austria. The reference for a preliminary ruling requested the interpretation of Article 15 of the Electronic Commerce Directive (ECD, text here). Specifically, the question was whether the article prohibits:

ordering a host provider to remove information which it stores, the content of which is identical to the content of information which was previously declared to be illegal, or to block access to that information, irrespective of who requested the storage of that information;

ordering a host provider to remove information which it stores, the content of which is equivalent to the content of information which was previously declared to be illegal, or to block access to that information, and

extending the effects of that injunction worldwide.

The most important part of the reference concerns the territorial scope of injunctive relief issued by a national (Austrian) court (worldwide, EU-wide or local). Also important is the nature of the elimination that can be requested (“identical” vs “equivalent” content). A number of dramatic interpretations have been seen in the media (see also here and here) and a basic clarification is in order (see my earlier post on AG Szpunar’s Opinion here).

1) The ECD insulates bona fide intermediaries from liability when they expeditiously remove problematic material. Although Facebook is a hosting provider in terms of Article 14 ECD, Article 14(3) and Recital 45 make clear that national courts may issue interim measures requesting that illegal material be removed. Although Article 14 controls the liability regime and sets its boundaries, it does not control the possibility for the material to be removed through various judicial and administrative measures. A non-liable intermediary can still be forced to remove material through injunctive relief sought in national courts. This is not a new position in EU law: it is based not only in the ECD (dating to 2000) but is also recognised in CJEU case law (see Husovec’s study on injunctions against intermediaries in EU law here). Furthermore, Facebook’s refusal to remove the material presumably also removed its insulation (which was not the subject of this case).

2) Article 15 ECD prohibits general content monitoring, the idea being that only prior knowledge or subsequent reluctance can bring liability. Intermediaries are, therefore, not expected to take active steps to filter content. Indeed, the CJEU has been clear in prohibiting general filtering, limiting any such measures to specific content. The question in the present case is whether Article 15 might interfere with the request to remove the defamatory content. The Court says that it does not: the explicit purpose of Article 15 is to prohibit general but allow specific monitoring, which may be necessary for law-enforcement purposes. “Specific” is, for this case, defined as

“a particular piece of information stored by the host provider concerned at the request of a certain user of its social network, the content of which was examined and assessed by a court having jurisdiction in the Member State, which, following its assessment, declared it to be illegal.”

In that sense, it is permissible to request the blocking of “identical” content in the future, which is here the content “essentially conveying the same message”. The Court is specific in reiterating that such monitoring cannot be general in nature. An injunction requesting that all posts of a certain nature be filtered (e.g. by type of content, region, poster, etc.) would be generic and thus contrary to Article 15.

3) Much has been made in the media of the real or potential extraterritorial effect of the injunction in question. EU law does not itself provide any injunctive relief, extraterritorial or otherwise. Article 35 of the Brussels I Recast Regulation is explicit in stating that provisional measures depend on the laws of Member States, even where litigation is ongoing in a different state. The Court in this case is simply stating that Article 18 ECD, which says that “Member States shall ensure that court actions available under national law concerning information society services’ activities allow for the rapid adoption of measures, including interim measures, designed to terminate any alleged infringement and to prevent any further impairment of the interests involved”, does not prevent the worldwide effect of injunctions. It says nothing about the desirability of such injunctions or their potential effect on worldwide digital trade. Put simply, if the Austrian court had no basis in its national law to issue a worldwide injunction, EU law could not provide it with such a basis. Equally important is the Court’s refusal to debate the merits of such worldwide injunctions: “It is up to Member States to ensure that the measures which they adopt and which produce effects worldwide take due account of [international law].” This is the right approach, as the CJEU manifestly lacks jurisdiction on this issue.

4) The main difference between Advocate General Szpunar’s opinion and the final judgment is in the treatment of “identical” versus “equivalent” content. The AG’s opinion allows the monitoring of all information of all users on the platform for “identical” information, but only of the disseminator’s account for “equivalent” information. This is both justified and reasonable. No such distinction exists in the Court’s judgment, which allows monitoring for both identical and equivalent content. Furthermore, the AG insists that monitoring of “equivalent” information be “clear, precise and foreseeable” and that it be proportionate and respectful of fundamental rights. Again, the Court’s judgment mentions none of these limitations. Instead, it opts for the more formalistic approach, stating that “equivalent” information must be “essentially unchanged compared with the content which gave rise to the finding”. As long as the content is “essentially” the same, the manner of monitoring is not relevant. The Court’s opting for the narrower and less balanced view might conceivably lead to problems.
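
The difference in monitoring scope can be summarised in a short sketch (the string values are shorthand for the positions described above, not terms used by the Court):

```python
# Monitoring scope for matched content: AG Szpunar's opinion versus the
# Court's judgment, as described in the paragraph above.

def monitoring_scope(match_type: str, per_ag_opinion: bool) -> str:
    if match_type == "identical":
        # Both the AG and the Court allow platform-wide monitoring.
        return "all users"
    if match_type == "equivalent":
        # The AG would confine monitoring to the original disseminator's
        # account; the Court draws no such distinction.
        return "disseminator's account only" if per_ag_opinion else "all users"
    return "no monitoring obligation"
```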

5) One of the most important reservations concerns the Court’s reliance on the balancing role that filtering is supposed to play. The worry is that filtering mechanisms are inherently unable to strike the right balance between different fundamental rights (such as reputation or freedom of expression). The danger, however, does not arise from the Court’s interpretation, and likely not from national law’s use of extraterritorial measures, but from EU legislation and soft law. The new EU law on copyright, for example, fundamentally misunderstands its own relationship with the ECD and effectively requires monitoring in open violation of Article 15 and the CJEU’s case law on filtering. Furthermore, various forms of soft law (see my earlier post here) are directed at platforms, which would need to engage in various forms of monitoring and filtering. It is true that the Court itself refers to Facebook’s “recourse to automated search tools and technologies”, but it does not endorse them. The Court does not insist on filtering, does not propose filtering techniques of a particular kind or form and does not explicitly offer any balancing guidelines. In our opinion, it is right to be silent on these issues, as anything else would be second-guessing the lawmaker.

There are plenty of reasons to worry about the EU’s muddled approach to platforms and filtering (see my article here), but the Court’s constitutionally limited role means it does not have the mystical powers that the general public ascribes to it.

6) Some confusion may arise from the CJEU’s recent case on a somewhat connected (although substantially different) issue. In C-507/17 Google v CNIL, the Court ruled that the operator of a search engine is not required (on the meaning of this, see here) to carry out a de-referencing on all versions of its search engine, but only on EU-based ones. That case is based on data protection law (the old Data Protection Directive) and does not govern the present situation.

More important, perhaps, are the possible differences between the Court’s approach to intermediaries in general and its approach to injunctions in copyright cases. While the former is rudimentary, the latter is significantly more detailed. In any case, it is doubtful whether injunctions arising out of EU data protection, copyright and e-commerce laws ought to be subject to the same treatment. Such an approach would make little sense and would be practically messy and difficult to justify.

* * *

In summary, all the Court did in the present case was to say that injunctive relief based on an already existing court decision (which, in turn, is based on national law) is not contrary to EU law. It did not create this relief, nor did it argue for its extraterritorial effect (or otherwise). Attempts to extrapolate this approach to all filtering cases are misguided and based on a fundamental misunderstanding of how EU law operates. The public’s anger should be directed at the Commission and its muddled and incoherent approach to platforms and its inability to produce a coherent law for the future Digital Single Market.

The EU Digital Services Act: What it is and Why it Shouldn’t Happen

Ursula von der Leyen, the president-elect of the European Commission, has recently published political guidelines for 2019-2024. Those who have been careful enough to read the document would have noticed that “a Europe fit for the digital age” is one of the six political goals the president-elect wants to achieve. Among various statements populating the section on digital Europe, the following is found:

A new Digital Services Act will upgrade our liability and safety rules for digital platforms, services and products, and complete our Digital Single Market.

The words should have caught the attention of professionals and businesses alike. They are remarkable not only for their terseness but also for naming the act, thus indicating that preparations are well underway.

Just a few days later, a document leak confirmed that the DSM Steering Group is engaged in drafting the EU Digital Services Act, which would serve as a basis for revising the E-Commerce Directive and for introducing new rules on online platforms.

The two ideas signalled here are each interesting in their own right.

The E-Commerce Directive, dating from 2000 and based on ideas from the late 1990s, has served remarkably well. Similar to the Clinton/Magaziner approach in the US, the Directive rests on the ‘no regulation for regulation’s sake’ principle and on a laissez-faire approach of regulating only where there is a specific need. The two main ideas it is based on are home country control (the idea that information society services (ISSs) should be regulated in the home country only) and the heavy insulation of bona fide ISSs from liability. The reasons for the Directive’s relative longevity can be found both in its flexible character and in the political difficulties which any potential revision would invite. As a framework instrument for the entire e-commerce regulatory ‘silo’, the Directive was designed to last.

But it became all too apparent already in 2015, when the Digital Single Market Strategy was published, that some of the fundamental principles the Directive is based on would eventually have to be revisited. There, the Commission indicated that

It is not always easy to define the limits on what intermediaries can do with the content that they transmit, store or host before losing the possibility to benefit from the exemptions from liability set out in the e-Commerce Directive.

Crucially, the Commission shifted the focus from ISSs to platforms.1 Soon thereafter, the language in the many policy documents on platforms changed. Platforms, the Commission claimed, need to act “responsibly” if they are to continue to benefit from insulation. In its highly controversial Copyright in the DSM Directive, the Commission suggests that even ISSs falling under Article 14 ECD need to have effective protective technologies and cannot rely on the article if they do not. ‘Active’ providers cannot rely on the protection because, in the Commission’s mind, they are not responsible enough.

When the two ideas are joined, the picture begins to emerge: the Commission would like the ECD REFIT exercise – which seems to be overdue – to result in a more nuanced approach, recognising that only responsible platforms can be protected and revising the insulation regime.

What does the preparatory document reveal about the Commission’s ambition and the scope of the potential intervention?

Five problems are listed:

  • a) divergent rules for online services in Member States. This item signals the existence of divergent rules in Member States, some of which have already regulated issues as diverse as hate speech, advertising or social networks.
  • b) outdated rules and regulatory gaps. The second item indicates that the ECD rules no longer “adequately reflect the technical, social and economic reality of today’s services”. In particular, the concepts of active and passive providers are labelled as being out of date. Furthermore, the document claims that some online intermediaries simply do not know which regime they are under.
  • c) insufficient incentives to tackle online harms and protect legal content. Here the claim is that platforms are disincentivised from acting proactively and that small and medium platforms face regulatory risk as a result.
  • d) ineffective public oversight. This item indicates that there is no dedicated “platform” regulator which would exercise oversight in “content moderation or advertising transparency”.
  • e) high entry barriers for innovative services. The last item talks of “no legally binding, controlled way for regulatory experimentation with innovative services” currently in existence.

The document is clear in proposing the scope of application to include “all digital services, and in particular online platforms.” For each of the crucial ECD components, something new is proposed.

  1. It is proposed that home country control be kept and its scope extended. This would now include “consumer protection, commercial communications and contract laws” but also services established in third countries. Finally, it is also proposed that any exceptions be narrowly interpreted. This is a problem, as consumer and contract laws are largely outside the scope of the “coordinated field”. It is not clear whether Member States would accept such a dramatic expansion of the operation of the article. It is even less clear why the extension is suggested, as home country control has generated little to no case law and even fewer problems in practice.
  2. The document names ISSs as still relevant. It suggests, however, that there are “grey areas” and names them as “ISPs, cloud services, content delivery networks, domain name services, social media services, search engines, collaborative economy platforms, online advertising services, and digital services built on electronic contracts and distributed ledgers.” This is a remarkable claim, as the list includes almost all intermediaries in operation today, which amounts to a claim that the concept of information society services is inadequate. This claim is not substantiated. The mention of the European Electronic Communications Code (EECC) is a nod to convergence. It is suggested that future ISS services may be defined “on the basis of a large or significant market status, complementing the competition threshold of dominance”. This effectively brings in the ad hoc sector-specific regulation of the kind applied to telecommunications services: digital service providers would first have to be classified as having the requisite market status or power, and ex ante regulation would be imposed only on those with the required market power. There are numerous problems with this idea, but two are particularly significant. First, the ad hoc regime in the telecoms sector has always been a temporary measure on the way towards full application of competition law. In e-commerce, competition rules are already fully functional and little to nothing would be gained by this exercise. Second, the market analysis process would inevitably have to be conducted by various national authorities designated for the purpose, which would, in turn, lead to insurmountable practical problems and divergence, thus eliminating any positive effects achieved.
  3. The liability provisions of the ECD would be updated. The “harmonised graduated and conditional exemption” approach would be kept, but with additions. First, the case law would be used to update the present issues in Articles 12-15. Second, new rules or clarifications of the principles would be needed for “collaborative economy services, cloud services, content delivery networks, domain name services, etc.” The notions of “active” and “passive” hosts would be replaced with notions of “editorial functions, actual knowledge and the degree of control”. Finally, an exemption for proactive measures would be introduced. The changes suggested here essentially fall into two categories. The non-problematic ones result from the CJEU’s case law on intermediaries. While that case law is not without problems in itself,2 it has largely followed the contours of Articles 12-15. More problematic are the specific rules on platforms. It is not clear which of these “special” categories would need special rules and what those rules would aim to achieve. It is even less clear what liability regime would be imposed on them and whether the disastrous proactive filtering of the Copyright in the DSM Directive would find its way here too. It seems that it would, as it is not clear how alleged illegalities could be caught proactively and “responsibly” without expensive (and potentially unreliable) AI solutions. Even more worryingly, no suggestion is made here (as it was in the DSM Directive) that smaller platforms would be exempt.

    It seems that the drafters of the document operate on the false assumption that the active/passive dichotomy is the basis of EU case law on intermediaries. It is not. While there are cases where this approach (otherwise originating in the USA) is used, the CJEU cases are more nuanced and speak of levels and types of engagement, precisely in line with what the document otherwise demands.

  4. The document pays lip service to the prohibition of general monitoring in Article 15. However, it suggests that “algorithms for automated filtering technologies” should be considered for better “transparency and accountability”. Filtering, in principle, may be specific or general. The CJEU case law suggests that general filtering is prohibited while specific filtering is allowed. The problem is that the document goes beyond specific filtering and suggests that AI technologies essentially playing the role of general monitoring are acceptable. One cannot have both: either the prohibition on general monitoring is maintained or AI filtering solutions are allowed. They cannot coexist (a minimal sketch contrasting the two modes follows after this list).
  5. Tailored and EU-wide notice-and-action rules are suggested. These have already been introduced in the Illegal Content Communication. Binding transparency obligations are suggested, as are options for “algorithmic recommendation systems of public relevance”.
  6. A new regulatory structure is suggested, with “a central regulator, a decentralised system, or an extension of powers of existing regulatory authorities” all being considered. Any of the three solutions would be problematic. Centralised regulators are difficult or impossible to achieve in any area of shared competence; the decades of experience the EU has gained in the telecoms sector are a testimony to this. A decentralised system is possible but would require a prior harmonisation of competences, which is politically only marginally easier to achieve than a central authority. Finally, extending the powers of the existing authorities may be viable but would not serve the Single Market purposes proclaimed in this document and elsewhere.
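
To make the incompatibility flagged in point 4 concrete, here is a sketch of my own construction; no EU instrument defines monitoring in these terms, and all names are hypothetical.

```python
# A sketch of my own construction; no EU instrument defines monitoring
# in these terms, and the names here are hypothetical.
from typing import Callable, Iterable, List

# Specific monitoring: a closed set of items a court has already ruled
# on; the provider watches only for their re-appearance.
ADJUDICATED_IDS = {"post-123"}

def specific_monitor(item_id: str) -> bool:
    return item_id in ADJUDICATED_IDS

# General monitoring: every item on the platform is scanned against an
# open-ended notion of illegality -- the very thing Article 15 ECD
# prohibits, however "transparent" the classifier may be.
def general_monitor(items: Iterable[str],
                    classifier: Callable[[str], bool]) -> List[str]:
    return [item for item in items if classifier(item)]
```

An “AI filtering technology” is, by construction, the second function: it must scan all content to find new instances, which is why transparency obligations alone cannot reconcile it with Article 15.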

While there may be a number of problems with various suggestions made in the document, the main criticism can be summarised as follows:

  • no convincing reasons are given for abandoning the approach based on information society services (ISSs) and moving to platforms. While it is certainly true that confusion exists (both in terms of fully digital and composite services) as to what is or is not an ISS, any move needs to be justified. Platforms are ill-defined and fluid (both in the EU and elsewhere) and vary from one-man blogs to multi-billion dollar global conglomerates. There are no convincing reasons to use them as a replacement for ISSs. The confusion is compounded by keeping ISSs as regulatory units while insisting that almost everything on the Web today is a “grey area” in need of different treatment.
  • the liability regime in Articles 12-15 has proven adequate, as have the various kinds of relief (including injunctive). The CJEU case law has adequately dealt with different aspects of ISS liability and managed to apply Articles 12-15 to modern phenomena. Any change to this regime must be based on thoroughly researched and very specific suggestions. While it is good that the drafters seek to incorporate the CJEU cases, their suggestions as to liability in other situations are superficial at best. Equally worrying is their refusal to address the criticism already directed at the filtering solutions in the DSM proposal. While few would disagree with the claim that Facebook and others need to act “more responsibly”, this does not extend to the claim that all platforms need to, nor does it entail an obligation to filter. That the drafters know this is confirmed in their problematic suggestion that market status should determine the scope of the regulatory burden.
  • the document demonstrates a lack of understanding of (and even a lack of interest in) modern phenomena such as blockchain technologies or AI. The former are mentioned with a vague suggestion that some regulation may be needed, but without any conviction as to what the policy goals should be.
  • the bundling of such diverse problems as copyright infringement, illegal speech, hate speech, advertising, etc. under one umbrella is a mistake. Experience has taught us that the differences between them justify differences in regulatory approach. While convergence in real life suggests that regulatory convergence may also be necessary, this is neither the declared nor the actual aim of the potential Digital Services Act. On the contrary, the document is actively averse to convergence problems and suggests that the current regulatory silos be kept.
  • finally, the suggestion that a single regulator is possible is politically naive.

The reform of the ECD needs, above all, to address two issues if it is to be successful.

The first is the effect of convergence on regulation. In other words, we need to know how converged services are to be regulated. The present document is about as far from solving this problem as it is possible to be: the proposal is just a reform of the e-commerce silo that maintains that very silo. Telecoms, audio-video and e-commerce each have their own regulatory circles, often with separate regulators. No attempt has been made to address this, either in the EECC or here.

The second is the question of what types of regulatory approaches (including soft law, standardisation, etc.) should be used to govern modern digital services in order to stimulate innovation while protecting those who need protection. Again, the present document makes no attempt to answer this question, as it sticks to old-fashioned black-letter law. Modern digital services are inherently disruptive and may require completely different governance structures. The Commission seems confused, mixing soft and hard law, general and subject-specific, new and legacy, often in the same documents, sometimes even in the same sentence.

  1. On why this may be problematic in itself, see my article https://www.sciencedirect.com/science/article/pii/S0267364918303145?via%3Dihub
  2. See Martin Husovec, Injunctions Against Intermediaries in the European Union (CUP 2017).

Why Advocate General Szpunar is Right to Suggest Facebook can be Ordered to Remove Material Worldwide

In March 2018, the Oberster Gerichtshof of Austria submitted a request for a preliminary ruling in a case arising from a disparaging comment about an Austrian politician published on Facebook. When Facebook refused to remove the comment, a request was submitted to an Austrian court for an injunction essentially demanding that Facebook delete the content. Advocate General Szpunar’s Opinion in the case was published on June 4.

The question originally referred to the CJEU was whether Article 15 of the E-Commerce Directive precludes an injunction requesting the removal of allegedly illegal content and whether such an injunction can have a worldwide effect. In other words, the question is not only whether Article 15 (prohibition of general monitoring) precludes injunctions such as the one at hand but also, if it does not, whether such injunctions should be issued with Member State-only or worldwide effect.

While it may, at first sight, appear odd that the referring court is asking about Article 15 (monitoring) rather than Article 14 (hosting), the logic behind the request should not be too difficult to follow. Article 14 only provides immunity to bona fide intermediaries, i.e. those who are not aware of the infringing content and who expeditiously remove it upon becoming aware. Those who are made aware of the content and subsequently refuse to remove it lose the liability insulation. Since Facebook explicitly refused the request to remove, the question revolves around the legality of an injunction (assuming the post itself is, indeed, illegal).

Since the injunction would impose an obligation to monitor content (in order to identify what needs to be removed), the framing of the question makes sense. On the other hand, the AG does point out that an injunction imposing a general obligation to monitor content of a certain type (in order to identify the offending content) would have the effect of removing the protection provided by Article 14. In other words, a general obligation to monitor is illegal under Article 15. For the sake of clarity, nothing in AG Szpunar’s Opinion suggests that a general obligation to monitor is either desirable or, indeed, lawful.

Moving on to the specific obligation to monitor, the AG points out that specific monitoring is explicitly allowed by Recital 47 of the E-Commerce Directive. Articles 14(3) and 18, furthermore, explicitly recognise that prevention is an important aim of the Directive, and no prevention would be possible without some degree of monitoring. Crucially,

in order not to result in the imposition of a general obligation, a monitoring obligation must, as seems to follow from the judgment in L’Oréal and Others, satisfy additional requirements, namely it must concern infringements of the same nature by the same recipient of the same rights.

An injunction may not require the provider to monitor for infringements that are merely similar to the one at hand, are inspired by it or, indeed, are perpetrated by different users. All of this would be general monitoring. The AG’s reading of the Directive and the case law, put simply, is that monitoring targeting a specific infringement is allowed, whereas general monitoring is not. This position is firmly embedded in the E-Commerce Directive.

The referring court, importantly, also asked whether information identical to that found illegal should also be removed. In the AG’s words, a social network platform can be ordered to seek and identify, among all the information disseminated by users of that platform, “the information identical to the information that was characterised as illegal by a court that has issued that injunction.” The answer to this is equally clear. For equivalent information, by contrast, the social network can only be required to monitor the information disseminated by the user who disseminated the original information.

In respect of the territorial scope of the obligation, the Advocate General makes two crucial observations. The first is that the obligation in question (defamation) is not based on EU law. The second is that Article 15 of the E-Commerce Directive does not regulate the territorial effect of injunctions. As to the first, had the obligation been based on EU law, that law would determine its own territorial scope – extraterritorial or otherwise. As to the second, had Article 15, or indeed the E-Commerce Directive, anything to say about its scope of application, that could be used to determine the territorial scope of injunctions. Further, although the Brussels I (Recast) Regulation regulates jurisdiction in cases of defamation, and allows preliminary measures, it says nothing about the territorial scope of those measures. Put simply, since EU law says nothing about the territorial scope of the injunction, it remains for national (Austrian) law to resolve the issue.

As it stands, it is difficult to argue against the Advocate General’s reasoning. A different conclusion would mean that a national court’s order to remove illegal content could simply be circumvented by invoking Article 15 and claiming that any action to identify the content would amount to “monitoring”. That could not have been the intention of the drafters. The monitoring that Facebook is obliged to engage in is limited to the specific post and equivalent comments from the same user. This is still very different from a general obligation to monitor, which would require that all content be monitored to identify various real and potential infringements of a particular kind.

Article 15 prohibits general monitoring in respect of the information society services covered by Articles 12-14 of the Directive. Where these articles do not apply, neither does the prohibition of general monitoring. As Martin Husovec observes, however,1 the CJEU transplanted the prohibition of general monitoring into copyright enforcement in the Scarlet Extended judgment. But while this may indicate that the CJEU believes general monitoring to be invasive, it says nothing about specific measures. The case law is remarkably clear and consistent in terms of specific monitoring. The Scarlet Extended case is precise about what constitutes illegal general monitoring in relation to filtering but says nothing of specific measures. The only outstanding question can be whether a particular form of action demanded in a court order amounts to general or specific monitoring. That specific measures of monitoring are allowed has, on the other hand, been clearly confirmed in the UPC case. Finally, as AG Szpunar himself argued in the McFadden case, and as he repeats in this Opinion, for specific monitoring to be legal, it has to be limited in terms of subject and duration.

If there is something that needs clarification, it is the nature of the “similar” measures and the effort that must be made to ensure that specific monitoring is, indeed, limited in time and scope. In terms of the former, the present Opinion suggests that “equivalent” comments from the “same user” can be covered but nothing else. This is somewhat in line with the Court’s case law to date. In terms of the latter, Member States already seem to take different approaches to injunctions, with some (notably Germany) being markedly broader in their attempts to impose monitoring obligations. While one could wish for clearer guidelines from the Court, a Facebook judgment along the lines of the Opinion would introduce nothing new in terms of the existing law. It is true that distinguishing between general and specific monitoring may be a difficult issue to resolve in specific cases. It is also possible to take issue with the EU policy on monitoring and to argue for or against the general/specific method. But until that provision is modified, the Court should follow the AG’s Opinion.

  1. Martin Husovec, Injunctions Against Intermediaries in the European Union (CUP 2017), p. 118

The European Electronic Communications Code – Where are We Now, Where Are We Going?

After a lengthy process lasting over two years, the new European Electronic Communications Code (EECC) was adopted in December 2018. The Directive proposing the EECC was first published in 2016 as part of wider attempts, promised in the 2015 Digital Single Market Strategy, to reform EU digital laws on the content and carrier layers. The EECC is now in force and will need to be implemented by the end of 2020. This short post is an attempt to summarise the biggest changes that the new regulatory framework brings and to highlight some of its weaknesses.

The 2015 DSM Strategy, analysing the need for improvements in telecoms, points out that the sector suffers from “isolated national markets, a lack of regulatory consistency and predictability across the EU, particularly for radio spectrum, and lack of sufficient investment notably in rural areas”. In order to remedy the situation, and in particular to “deliver access to high-performance fixed and wireless broadband infrastructure”, reform was needed. A casual reader of the less than two pages of the Strategy dedicated to telecoms would be left confused. Other than a call for a better spectrum policy and more investment in high-speed networks (both of which had been repeatedly called for in earlier papers and are, therefore, not particularly new), such a reader would not be able to tell whether the present EU regulatory framework is functional or what the EU’s position is in comparison to other developed economies. To that reader, the fact that the EU lags behind its rivals in high-speed broadband deployment and take-up, and is also behind on 5G development, would not be apparent.

In order to understand the importance of the EECC and see whether it can answer these challenges, it is necessary to give an overview of the most important features of the (still applicable) 2009 regulatory framework. Three elements, in particular, are worth noting:

  • First, the EU regulatory framework is based on competition law principles (significant market power, potential abuse, remedies, etc.) but is, in reality, a separate (sector-specific) system of rules which applies in parallel with regular competition rules. While the ultimate (declared) aim is for competition law alone to apply, the EU is not at that stage yet. The main reason why sector-specific rules are needed is that competition law remedies problems ex post, after they arise, while intervention is needed ex ante, before distortions appear.
  • Second, the main regulatory method is the ex ante application of remedies to market actors with significant market power (SMP). This type of regulation is asymmetric by definition, as it applies only to SMP undertakings. In practice, the regulatory effort has concentrated on the incumbents – former state-owned telecoms companies which were opened up to competition in the 1980s.
  • Third, the EU approach is primarily a service-based competition model, in which the regulator encourages entrants to offer competitive services through access imposed on the incumbents by ex ante regulation. By contrast, in an infrastructure-based competition model, competitors are incentivised to build their own infrastructure.

The 2009 regulatory framework is largely based on the 2002 one, with significant elements dating from even earlier laws. Since it is important to understand the extent to which the EECC brings novelties, the following can be stated:

  • The EECC is a codification measure which puts the four main directives making up the 2009 framework (the Framework, Authorisation, Access and Universal Service Directives)1 from the 2002 and 2009 frameworks into one package.2 While this potentially removes the confusion arising from the many amendments which have accumulated over the years, it does not bring an entirely new text.
  • The EECC is fundamentally based on the same ideas and principles as the 2002 and 2009 frameworks. The changes are frequently minor and often only cosmetic, and the articles have largely been replicated. More importantly, the fundamental regulatory ideas are the same as those in the previous frameworks. The EECC is still a sector-specific, ex ante and asymmetric system which targets enterprises with significant market power. Although the number of regulated markets has gradually been reduced over the years, to the effect that only the wholesale side is now properly regulated, full competition does not exist.

Although one may wish to debate the prudence of extending the present regulatory model to future telecoms, it may be worth stating that not all changes are purely cosmetic. Other than the codification/simplification, the EECC introduces some changes which are geared towards improving investment.

  • In particular, attempts have been made to encourage investment in next-generation networks. The measures essentially attempt to exempt enterprises from the SMP regulatory regime where they commit to building new networks. The co-investment provision of Article 76 allows NRAs to accept commitments from undertakings wishing to allow co-investment (through co-ownership, risk sharing or purchase agreements).
  • Significant attempts have been made to harmonise and manage radio spectrum (Articles 28, 35-37, 45-55). Market entry for new players and shared use of the radio spectrum, in particular, should become easier.
  • A modest attempt to regulate OTT services has been made. Rather than subsuming all such services under the full scope of telecoms rules, only some of them are included, and only in a limited number of cases. This is, in principle, a good and measured approach.3

We find that the modestly framed EECC fails to address the primary challenges EU telecoms faces today:

  • The EU’s telecommunications capital investments are relatively low and falling further. Regulatory burden is not the only factor to consider, but it certainly seems to play a role. Telecoms expenditure is lower for the EU27 than for either the USA or some Asian countries. The modest changes that the EECC brings are not structural and not overtly pro-investment.
  • There are reasons to believe that the access- and price-control model the EU has chosen may deliver better broadband products in the short to medium term but does not deliver next-generation networks (there is good empirical evidence for this). The problem of how to reach acceptable NGA deployment and take-up rates remains unaddressed.
  • Overall, the industry still seems over-regulated.
  • The value of EU telecoms companies halved from 2012 to 2018, while that of US and Asian companies increased. The cost of rolling out full fibre and 5G is estimated at €500 billion and is likely to be significantly more. The EECC gives EU telecoms companies few incentives to make this investment. The EU is investing less in telecoms than its rivals and, unless more fundamental changes are made, EU companies will move further into retail.
  1. The ePrivacy Directive is subject to a separate proposal, see here.
  2. For the codified version of the 2009 framework see here.
  3. It should be noted, however, that the proposed ePrivacy Regulation calls for much more comprehensive coverage of OTTs.

EU Proposal for a Regulation Preventing the Dissemination of Terrorist Content Online: An Overview

The Commission announced yesterday a proposal for the Regulation Preventing the Dissemination of Terrorist Content Online. Somebody not following the developments closely might get the impression that this is a stand-alone initiative. In reality, the Proposal follows a broader drive to regulate platforms, announced in the 2015 DSM Strategy, followed up in the 2016 Communication on platforms, and further elaborated in the ‘soft law’ 2017 Communication and 2018 Recommendation on illegal content online.

The desire to regulate “platforms” is, in itself, problematic. Platforms are neither natural subjects for IT regulators (unlike telecoms networks and services or information society services) nor sufficiently clearly defined to lend themselves to straightforward regulation. They differ in size, scope, type and impact and use a vast plethora of business models. The EU’s continuous drive to regulate them without exploring the deeper implications of such an approach is worrying.

The present Proposal aims to “prevent the misuse of hosting services for the dissemination of terrorist content online”. It imposes duties of care on hosting services to prevent the dissemination of terrorist content and requires Member States to adopt measures to identify and remove such content. The Proposal has the same wide scope as the GDPR, applying to all hosting providers offering services in the Union, irrespective of their place of establishment.

The definition of terrorist offences is taken from the 2017 Directive on terrorism. Terrorist content, the Proposal’s main target, is defined as inciting, advocating, encouraging, promoting or instructing on terrorist offences as defined in the 2017 Directive. Hosting service providers are obliged to take action against dissemination and to include provisions to that effect in their terms and conditions.

The interesting (and controversial) part of the Proposal appears in the form of the “removal orders” of Article 4. These are non-voluntary demands issued by competent authorities and directed at hosting service providers. National authorities are allowed to require providers to remove content or disable access to it, and the latter would be required to do so “within one hour from receipt of the removal order”. This obligation seems to apply irrespective of the size of the site or the hours during which it is staffed. A statement of reasons is available, but only upon request from the provider, and cannot, in any case, delay the removal order. If the hosting provider disagrees with the order because of “manifest errors” or because it needs “clarification”, the removal is postponed until such clarification is provided. In addition to removal orders, the authorities may send voluntary requests called “referrals” (Article 5), which providers are free to assess against their own terms and conditions and need not act upon.
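
Read as a workflow, Article 4 comes down to something like the following sketch. This is my own reading; the Proposal prescribes obligations, not an implementation, and all names here are invented.

```python
# My own reading of the Article 4 flow; the Proposal prescribes
# obligations, not an implementation. All names are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class RemovalOrder:
    content_id: str
    received_at: datetime
    manifestly_erroneous: bool = False
    needs_clarification: bool = False

def handle(order: RemovalOrder) -> str:
    if order.manifestly_erroneous or order.needs_clarification:
        # Removal is postponed until the issuing authority clarifies.
        return "postponed pending clarification"
    # Otherwise the one-hour clock runs from receipt, regardless of the
    # provider's size or staffing.
    deadline = order.received_at + timedelta(hours=1)
    return f"remove {order.content_id} before {deadline.isoformat()}"
```

The sketch makes the practical objection visible: the deadline is a property of the order, not of the provider, so a two-person forum faces the same clock as Facebook.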

Another controversial feature is the “proactive measures” that Article 6 demands hosting providers take. Providers need to take “effective and proportionate” measures, “where appropriate”, while assessing risks and taking fundamental rights into consideration. Once a removal order under Article 4 has been issued, however, special proactive measures kick in for the hosting provider which was the subject of the order, demanding that it submit annual reports on the prevention of re-uploads and on the detection, removal and disabling of content. Where these are deemed insufficient, further proactive measures may be required and imposed compulsorily.

Article 8 demands transparency from hosting providers, including in their terms and conditions, while Article 9 requires human oversight of automated removal measures. The rest of the Proposal contains detailed measures on complaints, cooperation, implementation and enforcement.

Particularly interesting is the status of hosting providers located outside the EU. They are, under Article 16, required to designate a legal representative for the purposes of compliance. Under Article 15, where a provider does not have a place of establishment in the EU, it is the place of residence or establishment of the legal representative that is the place relevant for enforcement purposes.

The penalties set out in Article 18 are for Member States to determine, but Member States are obliged to punish systematic failures with penalties of up to 4% of the hosting provider’s global turnover.
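
The ceiling is easy to put into numbers; a back-of-the-envelope example with an invented turnover figure:

```python
# Worked example with an invented turnover figure.
global_turnover_eur = 2_000_000_000           # hypothetical provider
max_penalty_eur = 0.04 * global_turnover_eur  # the Article 18 ceiling
print(f"EUR {max_penalty_eur:,.0f}")          # prints: EUR 80,000,000
```
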
There are, in my view, three main problems with the proposal:

  • first, the Proposal specifically derogates (Recital 19) from the prohibition of general monitoring set out in Article 15 of the E-Commerce Directive. This is a dangerous and poorly justified precedent. Although the Recital does speak of “balancing” in such cases, it seems as if the mere fact that the label “terrorist” has been attached to particular content, at a particular provider, would trigger a derogation whose effect and duration are unknown and which Article 15 neither justifies nor contemplates.
  • second, complying with the obligations in the Proposal, and in particular Article 6, will require the use of filters, since relying on human moderators would be disproportionately expensive. Filtering technology works sporadically and tends to ‘catch’ legitimate content, often opening more problems than it solves (a toy example follows after this list).
  • third, the cost-benefit side of the question is obscure in the Proposal. Terrorist content tends to appear and disappear quickly and is rarely simply placed on platforms for them conveniently to remove; it tends to shift from one account to another and from one platform to another. Burdening hosts with 24/7 removal and monitoring obligations may look justified but is unlikely to give the desired results and may, in the worst cases, lead to governmental abuse. It is further unclear whether non-EU providers would simply withdraw from the EU (as some did as a result of the GDPR’s extraterritorial reach).
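
On the second point, a toy example of my own shows how naive keyword filtering over-blocks; the blocklist is invented and real filters are more sophisticated, but the structural problem is the same: the terms that mark propaganda also appear in reporting about it.

```python
# A toy illustration of my own: naive keyword filtering over-blocks
# because the same terms appear in propaganda and in reporting.
BLOCKLIST = {"attack", "bomb"}

def naive_filter(text: str) -> bool:
    return any(word in text.lower() for word in BLOCKLIST)

print(naive_filter("join the attack tomorrow"))             # True
print(naive_filter("news: police defused a bomb in time"))  # True -- legitimate reporting caught
```
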

In summary, the problem is more complex than the regulator would like to believe and may require creative thinking, the use of co-regulation and next-generation technical solutions rather than a heavy-handed removal-and-penalty system.