The US Department of Justice published today its proposal for a review of Section 230 of the 1996 Communications Decency Act. This proposal is not the only document to have come out today (see Senator Hawley’s somewhat unrealistic idea here) but it is the official US government position. Section 230 – which insulates intermediaries against claims for illegality for content posted by third parties – is arguably among the most important provisions for the Internet ever drafted. Its review will not only have worldwide impact (thanks to the global presence of US Internet companies) but will also influence the EU’s attempt to revise its own regime, which is currently under way.
It is worth mentioning that President Trump’s earlier problematic Executive Order, which also targets Section 230, is not the subject of this post (although it detracts from the possible real need to reform Section 230).1
S.230 CDA is the US law giving protection from liability to intermediaries who publish third-party content. Its subsection (c)(1) essentially says that no internet intermediary should be treated as a publisher of content posted by third parties. At the same time, its “Good Samaritan” subsection (c)(2) gives immunity to providers who voluntarily take action in good faith to remove “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable” content, whether or not such content otherwise enjoys constitutional protection. This provision protects content-moderation decisions, irrespective of their motives, as long as they are taken in “good faith”.
It is important to note that the protection of S.230(c)(1) applies irrespective of whether the defendant intermediary knew about the content, whether it acted in good faith and whether it was neutral. The main idea of the section is to eliminate frivolous lawsuits that would otherwise target intermediaries, who are not only frequently better placed financially to respond but also a known target, as opposed to often anonymous posters. The system thus created has been fairly robust, with a considerable body of case law to support it.
While S.230 covers regular speech, a different provision – S.512 of the 1998 Digital Millennium Copyright Act (DMCA) – applies to copyright, essentially protecting bona fide intermediaries from copyright infringement lawsuits. Unlike S.230, S.512 requires that intermediaries lack knowledge of the infringements and expeditiously remove the content upon obtaining that knowledge. This provision too is the subject of criticism and calls for a review (on its effectiveness see here).
Somewhat surprisingly, both the Democrats and the Republicans are arguing for modifying or even revoking Section 230, albeit for opposite reasons. Joe Biden suggested that it should be revoked completely, while Bernie Sanders said that S.230 was written “before the current era of online communities, expression, and technological development” and that large profitable corporations should be held responsible. Meanwhile, Republicans claim that platforms are systematically biased against them (although no evidence for this exists) and have suggested measures to curb what they call censorship. Both political groups show a staggering lack of understanding of the underlying reasons for, and the actual operation of, Section 230, as evidenced not only in flawed proposals but also in a multitude of contradictory statements in the press.
Department of Justice Review
The DOJ document is not a legislative proposal in itself but a draft outlining the Department’s position. It is based on four principles:
- large tech platforms are no longer nascent or fragile and are not in need of protection2
- S.230 has been abused by large platforms to maintain their dominant position3
- Core immunity for defamation needs to be preserved to foster free speech
- Hosting defamatory content needs to be distinguished from enabling criminal activity
With these principles in mind, the document indicates four areas for reform:
- Incentivising Online Platforms to Address Illicit Content. Here the “Bad Samaritans” would lose immunity. This includes actors who purposefully facilitate criminal activity or material, but also those who purposefully “blind themselves and law enforcement to illicit material”. Separate carve-outs exist for child abuse, terrorism and cyber-stalking, as well as for cases of actual knowledge or court judgments. The latter is a clear departure from S.230, which provides immunity even where actual knowledge exists.
- Clarifying Federal Government Enforcement Capabilities to Address Unlawful Content: clarifying that federal civil enforcement actions are not covered.
- Promoting Competition: clarifying that Federal antitrust claims are not covered.
- Promoting Open Discourse and Greater Transparency: this would seek to replace the words “otherwise objectionable” in Section 230(c)(2) with “unlawful” and “promotes terrorism.” Further to that, a statutory definition of “good faith” would be provided.
The document proposes both more and less content moderation at the same time.
On the one hand, the DOJ wants the removal of immunity for “Bad Samaritans” in situations involving federal criminal law. The effect of this provision would be to increase liability for sites that do not remove material. The material the DOJ wants moderated includes not only terrorist content, child pornography or cyber-stalking, but also any activity that violates federal criminal law, as well as “purposeful blindness” in relation to such material. The main danger here is the proliferation of frivolous cases in which removal is demanded without any likelihood of success but with a view to forcing the platform to act upon request.
On the other hand, the proposal would significantly change the “Good Samaritan” provisions. The proposal claims that “the new statutory definition would limit immunity for content moderation decisions to those done in accordance with plain and particular terms of service and consistent with public representations. These measures would encourage platforms to be more transparent and accountable to their users.” The document, in tying removal to terms and conditions and in removing the “otherwise objectionable” wording, aims to limit the circumstances under which platforms can moderate content (in line with President Trump’s ideas about Twitter and Facebook). The new setup would mean no protection in cases where platforms remove content that is not directly unlawful. In other words, if a platform – exercising its First Amendment rights – removes content which is not directly unlawful but is, in its view, objectionable (for instance because it is untruthful), it would lose the protection of S.230. This dramatically increases the instances in which platforms would expose themselves to liability.
The contradictory directions in which the Proposal is going are confusing. The First Amendment implications of removing the “otherwise objectionable” wording are obvious and will be debated extensively. While vague bipartisan support for the reform of S.230 exists, any proposal would first have to be introduced in Congress and then pass both the Democrat-controlled House of Representatives and the Republican-controlled Senate. This is not likely at present.
European Law on Intermediaries Today
European rules on intermediary liability, Articles 12-15 of the 2000 E-Commerce Directive (ECD), are slightly newer than their American counterparts. Unlike in the US, there is no separation between copyright and all other cases – everything is covered by the same set of rules. The text is somewhat simpler and more direct.
The main idea is that information society service providers are not liable in cases where they are mere conduits (Article 12 ECD), where they are caching (Article 13) and where they are hosting (Article 14) material, provided certain conditions are met. On top of that, no general obligation to monitor exists (although monitoring to remove specific illegal content is allowed).
Of particular interest is Article 14 which insulates intermediaries from liability in cases where:
- the provider does not have actual knowledge of illegal activity or information and, as regards claims for damages, is not aware of facts or circumstances from which the illegal activity or information is apparent; or
- the provider, upon obtaining such knowledge or awareness, acts expeditiously to remove or to disable access to the information.
Unlike S.230, Article 14 on hosting requires the absence of actual knowledge as well as expeditious removal upon obtaining such knowledge. Importantly, however, where US law singles out the “Good Samaritan” platforms, EU law has traditionally been based on the distinction between active and passive platforms and says nothing about “Samaritans”, good or otherwise.4 The more active a platform, the less likely it is to enjoy the protection of Article 14. While the CJEU clarified in eBay that moderation in itself does not automatically bring liability, it did say that “optimising the presentation” of offers might constitute such an active role. The European Court of Human Rights has been even stricter in its Delfi line of cases, saying that a platform’s moderation does mean liability.
The cases interpreting Articles 12-15 have been relatively numerous and have significantly changed the operation of these articles, in particular in copyright cases. In spite of that, Articles 12-15 are among the most stable and least controversial of EU digital laws, with the Commission, at least superficially, still arguing for their preservation.
Proposals for Changes in the EU
While sporadic comments on the need to change Articles 12-15 have been made, the first indication that a more serious reform was in view came in the 2015 Digital Single Market Strategy, which indicated that the slow removal of illegal content might necessitate “new measures”.
The next step was made in the highly flawed and controversial Copyright in the DSM Directive, which essentially provides that an online content-sharing service provider performs an act of communication to the public and cannot benefit from Article 14 protection unless it either has a valid agreement with the rightholders or employs “high industry standards of professional diligence” and “best efforts” to ensure the removal of works for which the rightholders have provided information.
In 2020, the new Digital Strategy was adopted. It directly promises to look into the “responsibilities of online platforms and information service providers” and to “reinforce the oversight over platforms’ content policies”. While there is no indication yet of what the draft Digital Services Act (expected later in 2020) might contain, the recent Inception Impact Assessment papers give a somewhat clearer picture. They indicate that the EU is considering several policy options.
The first is essentially to maintain the present regime, with the E-Commerce Directive as the main instrument and the Recommendation on Illegal Content, the Copyright in the DSM Directive, the AVMSD and the Terrorist Content Regulation as sector-specific measures. The second would be a relatively limited intervention making the procedural obligations of the 2018 Recommendation on illegal content binding. The third is a more comprehensive change, modernising the E-Commerce Directive. In the Commission’s own words, this would
clarify and upgrade the liability and safety rules for digital services and remove disincentives for their voluntary actions to address illegal content, goods or services they intermediate, in particular in what concerns online platform services. Definitions of what is illegal online would be based on other legal acts at EU and national level.
It would also mean “specific, binding and proportionate obligations, specifying the different responsibilities in particular for online platform services”. Significantly, “further asymmetric obligations” might be needed. The asymmetry referred to here means the difference between types and sizes of platforms. In other words, not all rules would apply to all platforms. At this point a list of specific obligations is introduced:
- harmonised obligations to maintain ‘notice-and-action’ systems covering all types of illegal goods, content, and services, as well as ‘know your customer’ schemes for commercial users of marketplaces
- rules ensuring effective cooperation of digital service providers with the relevant authorities and ‘trusted flaggers’ (e.g. the INHOPE hotlines for a swifter removal of child sexual abuse material) and reporting, as appropriate
- risk assessments could be required from online platforms for issues related to exploitation of their services to disseminate some categories of harmful, but not illegal, content, such as disinformation
- more effective redress and protection against unjustified removal for legitimate content and goods online
- a set of transparency and reporting obligations related to these processes
- transparency, reporting and independent audit obligations to ensure accountability with regard to algorithmic systems and to allow better oversight.
Of particular interest is the idea that “gatekeeping” platforms might have to be subject to ex ante rules. This is in line with the indication that asymmetric rules might be needed. The ex ante regime is presently applied in EU telecommunications law, where certain ideas from competition law (significant market power) are taken and applied to impose remedies on market actors in danger of violating competition rules. Not only are these applied asymmetrically – to some actors and not to others – but they are also imposed before a violation occurs (to prevent it).
Three policy options are considered here. The first is to revise the horizontal framework set in the Platform-to-Business Regulation. The second is to adopt a horizontal framework empowering regulators to collect information from large online platforms acting as gatekeepers. The third and most interesting is the potential introduction of an ex ante regulatory framework. This, in turn, would have two sub-options. The first would be a blacklist of prohibited practices. The second would be the “adoption of tailor-made remedies addressed to large online platforms acting as gatekeepers on a case-by-case basis where necessary and justified”. “Platform-specific non-personal data access obligations, specific requirements regarding personal data portability, or interoperability requirements” are given as examples of remedies.
None of the options are mutually exclusive.
In the view of this author, the Commission’s attempt to reform the E-Commerce Directive could prove more focused and less problematic than its US counterpart in each of the scenarios outlined above. Whether this is the case depends mainly on the ability to preserve Articles 12-15, which have proved robust. In our view, the attempt in Article 17 of the Copyright in the DSM Directive to water down this protection was misguided and should be reversed, for reasons that have been extensively debated in the literature.
The use of the term “responsibility” in this and a number of other documents might suggest the desire to limit the proliferation of illegal content, but it is, in the view of this author, vague and problematic. That some platforms (the ‘large’ ones) act illegally may seem superficially obvious and may elicit calls for intervention and more active behaviour, but intermediaries are still predominantly just that – intermediaries. They are usually accused of inertia in removing content, illegal or otherwise, not of political or economic bias. The fact that a whole set of tools from the arsenal of copyright, competition, criminal, administrative and tax laws exists – and is not used – should limit the EU’s desire to add to that arsenal. Nevertheless, we believe that two factors are important and may determine the success of future EU rules.
First, the EU is attempting to achieve the move towards more “responsible” platforms in significantly different ways from the US. Rather than relying on its S.230 equivalent itself, or attempting an omnibus provision to replace the ECD, it has passed a number of laws and soft-law instruments on illegal, terrorist and otherwise problematic content. While this may appear more flexible and avoid the big political clashes, it also ushers in a specific form of rule-by-decree, where recommendations (with threats of further legislative action) are used to force platforms into more responsible behaviour. If the recommendations are turned into directives and regulations with proper democratic and regulatory oversight (and this is one of the policy options), this problem disappears and the flexibility of the modular solution remains. Put differently, since the reality is complex, the laws need to be complex and specific too.
Second, the suggestion that ex ante sector-specific asymmetric remedies might be applied to gatekeeping platforms is original and potentially capable of solving the problems arising from disparities in platform size, type, purpose and business model. The danger is that rules so drafted have not been tested anywhere but in the telecoms sector (where the EU has several decades of experience) and would need careful drafting and even more careful monitoring.
In our view, what is needed is evidence-supported, fact-based, sector-specific intervention, with the use of experimental methods in cases where everything else fails. Not only does this preserve the immensely important liability insulation, but it also achieves the specific goals when and where needed.
Trump’s order came about after Twitter marked two of his posts with its fact-check stamp. The First Amendment of the US Constitution protects against government attempts to abridge freedom of speech but also protects private companies’ moderation as a form of speech. Section 230 allows content moderation, making acts such as Twitter’s lawful. ↩︎
This is a vague reference to the fact that platforms are of different size and impact. See below for possible EU solution to this problem. ↩︎
In the view of this author, this is wrong and confuses dominance and the abuse thereof – which may or may not be an issue – with the abuse of Section 230. ↩︎
While this distinction was also used in the US, it was developed in a more sophisticated way by the courts. ↩︎