The Commission yesterday announced a proposal for a Regulation Preventing the Dissemination of Terrorist Content Online. Somebody not following the developments closely might get the impression that this is a stand-alone initiative. In reality, the Proposal follows a broader drive to regulate platforms, announced in the 2015 DSM Strategy, followed up in the 2016 Communication on platforms, and further elaborated in the ‘soft law’ 2017 Communication and 2018 Recommendation on illegal content online.
The desire to regulate “platforms” is, in itself, problematic. Platforms are neither natural subjects for IT regulators (unlike telecoms networks and services or information society services), nor sufficiently clearly defined to lend themselves to straightforward regulation. They differ in size, scope, type and impact, and rely on a wide variety of business models. The EU’s continuing drive to regulate them without exploring the deeper implications of such an approach is worrying.
The present Proposal aims to “prevent the misuse of hosting services for the dissemination of terrorist content online”. It imposes duties of care on hosting service providers to prevent the dissemination of terrorist content, and requires Member States to put in place measures to identify such content and have it removed. The Proposal has the same wide territorial scope as the GDPR, applying to all hosting providers offering services in the Union, irrespective of their place of establishment.
The definition of terrorist offences is taken from the 2017 Directive on combating terrorism. Terrorist content, the Proposal’s main target, is defined as content inciting, advocating, encouraging, promoting or instructing on terrorist offences as defined in that Directive. Hosting service providers are obliged to take action against its dissemination and to include provisions to that effect in their terms and conditions.
The interesting (and controversial) part of the Proposal appears in the form of the “removal orders” of Article 4. These are non-voluntary demands issued by competent authorities and directed at hosting service providers. National authorities are allowed to require providers to remove content or disable access to it, and the latter are required to do so “within one hour from receipt of the removal order”. This obligation seems to apply irrespective of the provider’s size or whether it is staffed around the clock. A statement of reasons is available, but only upon request from the provider, and such a request cannot, in any case, delay the removal. If the hosting provider objects to the order on the ground of “manifest errors” or because it needs “clarification”, removal is postponed until such clarification is provided. In addition to removal orders, the authorities may send voluntary requests called “referrals” (Article 5), which providers are free to assess against their own terms and conditions and are under no obligation to act upon.
Another controversial feature is the “proactive measures” that Article 6 demands of hosting providers. Providers need to take “effective and proportionate” measures, “where appropriate”, while assessing risks and taking fundamental rights into consideration. Once a removal order under Article 4 has been issued, however, special proactive measures kick in for the provider that was the subject of the order: it must submit annual reports on the prevention of re-uploads and on the detection, removal and disabling of content. Where these are deemed insufficient, further proactive measures may be requested and, ultimately, imposed.
Article 8 demands transparency from hosting providers, including about the use of their terms and conditions, while Article 9 requires human oversight of automated removal measures. The rest of the Proposal contains detailed measures on complaints, cooperation, implementation and enforcement.
Particularly interesting is the status of hosting providers located outside the EU. Under Article 16, they are required to designate a legal representative in the Union for the purposes of compliance. Under Article 15, where a provider has no establishment in the EU, the place of residence or establishment of its legal representative is the one relevant for enforcement purposes.
The penalties set out in Article 18 are for Member States to determine, but Member States are obliged to punish systematic failures to comply with removal obligations with financial penalties of up to 4% of the hosting provider’s global turnover.
There are, in my view, three main problems with the Proposal:
- first, the Proposal derogates specifically (Recital 19) from the prohibition on general monitoring obligations set out in Article 15 of the E-Commerce Directive. This is a dangerous and poorly justified precedent. Although the recital does speak of “balancing” in such cases, it seems as if the mere fact that the label “terrorist” has been attached to particular content, at a particular provider, would trigger a derogation whose effect and duration are unknown, and which Article 15 neither justifies nor contemplates.
- second, complying with the obligations in the Proposal, and in particular Article 6, will require the use of filters, since relying on human moderators alone would be disproportionately expensive. Filtering technology works erratically and tends to ‘catch’ legitimate content, often creating more problems than it solves (the sketch after this list illustrates both failure modes).
- third, the cost-benefit side of the question is obscure in the Proposal. Terrorist content tends to appear and disappear quickly and is rarely simply placed on platforms for them conveniently to remove; it tends to shift from one account to another and from one platform to another. Burdening hosts with 24/7 removal and monitoring obligations may look justified, but it is unlikely to produce the desired results and may, in the worst cases, invite governmental abuse. It is also unclear whether non-EU providers would simply withdraw from the EU (as some did in response to the GDPR’s extraterritorial reach).
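To make the filtering point concrete, here is a minimal sketch in Python of the two techniques such filters most commonly rely on: exact-hash matching (the basis of re-upload filters) and keyword matching. Both examples are hypothetical illustrations, not drawn from the Proposal or from any deployed system; they show how a trivially modified re-upload evades the first, and how legitimate journalism is caught by the second.

```python
import hashlib

# Hypothetical illustration of two common filtering techniques and their
# failure modes; nothing here comes from the Proposal or a real system.

# 1) Exact-hash matching, the basis of typical re-upload filters.
KNOWN_HASHES = {hashlib.sha256(b"bytes of a known propaganda video").hexdigest()}

def hash_filter(upload: bytes) -> bool:
    """Flag an upload whose SHA-256 digest matches a known item."""
    return hashlib.sha256(upload).hexdigest() in KNOWN_HASHES

print(hash_filter(b"bytes of a known propaganda video"))          # True: exact copy caught
print(hash_filter(b"bytes of a known propaganda video" + b"\0"))  # False: one changed byte evades the filter

# 2) Keyword matching, a crude proxy for "terrorist content".
BLOCKED_TERMS = {"attack", "bomb", "martyr"}

def keyword_filter(text: str) -> bool:
    """Flag text containing any blocked term, regardless of context."""
    return any(term in text.lower() for term in BLOCKED_TERMS)

# Context-blind matching flags reporting and counter-speech alongside propaganda:
print(keyword_filter("News wire: police foil planned bomb attack on embassy"))  # True: journalism blocked
```

More sophisticated perceptual-hashing and machine-learning filters soften these failure modes but do not eliminate them, which is part of the reason the human-oversight requirement of Article 9 matters.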
In summary, the problem is more complex than the regulator would like to believe, and may require creative thinking, the use of co-regulation and next-generation technical solutions, rather than a heavy-handed system of removals and penalties.