
Content Moderation by US Online Platforms

Dealing with content that is harmful but not illegal

J. Scott Marcus1
Senior Fellow, Bruegel

The US has experienced an intense debate in recent years over free speech on the internet. Are
online digital platforms doing enough to filter out content that is illegal? Are they doing enough to
filter out content that is harmful to society, or harmful to disadvantaged groups within society, even
if not specifically illegal? Are they perhaps doing not too little filtering, but too much? Who decides
what content is harmful, and on what basis? And how does all of this relate to freedom of
expression?

We believe that the need for better, more timely and more effective monitoring of content and of
participation in online digital platforms in the USA is manifest and urgent.

The legal arguments over these points have been voluminous. Far more could be said than we will say
here. In the interest of clarity and brevity, this research note seeks to focus as much as possible on
key policy aspects rather than on detailed legal analysis; however, the legal aspects cannot be
ignored.

When an American says that it “would take an act of Congress” to solve a problem, he or she means
that it is practically impossible. In today’s politically divisive climate, agreement on new legislation
would be even more difficult than usual.2 For that reason, this research note seeks solutions that
appear to be realistically implementable under current law.

We will also be exploring how the European Union (EU) seeks to address similar issues, and will
attempt to explain US regulatory practice in terms that are familiar to an international audience. This
not only provides a comparative perspective, but also provides inspiration for methodologies used
elsewhere that might possibly be adapted for productive use in the United States.

In this brief research note, we provide background on the tension between monitoring of content
versus freedom of expression, and how it expresses itself in law and regulation in the United States.
We proceed to discuss the authority and powers of the US Federal Communications Commission
(FCC), and some of the tools potentially at its disposal. We then articulate a possible co-regulatory
approach3 that is implementable, does not rely on new legislation, and potentially should help to
address (but not necessarily fully solve) the growing problem of false or misleading information in
the USA.

1 The author gratefully acknowledges extensive comments by Christopher Savage, and helpful review by John
R. Levine and Adam Cassady.
2 American legislators regularly introduce bills (prospective laws) for consideration, but only a tiny fraction of
bills are actually enacted. For example, in the context of this article, several legislators re-introduced the EARN
IT bill in 2022 (Rodrigo, 2022). We doubt that this or any similar bill will be enacted in the foreseeable future.
3 A co-regulatory approach is one where the parties themselves put forward the obligations to which they
propose to commit themselves, but government retains some ability to review the proposed obligations, to
evaluate their effectiveness over time, and to enforce the regulated entity’s compliance with its commitments.



1. The essence of the problem
The fundamental issue is clear enough. As the use of online digital platforms such as social networks
becomes increasingly widespread, they play a growing role in the broader society. The risk of
misleading information distorting the public discourse has rapidly grown.

This is not an entirely new issue – the degree to which speech in general, and journalism in
particular, is and should be free from censorship has been a perennial topic in US law and
jurisprudence. It was already an important issue for print media, radio and television. But the
growing power of internet platforms, together with current themes such as alleged election
interference by foreign governments, denial of facts about the COVID virus, and more, together with
an increasingly divisive and combative political environment, has raised the importance and visibility
of the issue enormously.

The key tension that has been exposed is between on the one hand seeking to limit the
dissemination of content that is harmful and patently false, but not illegal; and on the other,
continuing to maintain freedom of expression in all of its forms.

Harmful but dubious content might be published with criminal intent, or with intent to obtain
commercial or political gain. It might be published by foreign entities who seek to harm the United
States. But content that some might view as being harmful might just as well be published without
malign intent.

Freedom of expression has deep roots in US law, as it should (see Section 2); however, this freedom
has never been absolute (see Section 5).

The tension between freedom of expression and the mitigation of false, misleading and/or harmful
content is exceedingly difficult to resolve. What is truth? Who gets to decide?

The current debate largely centres on questions relating to moderation of content by digital
platforms, and on participation in online platforms. Both major US political parties could be said to
agree that modernisation of US law or regulation in this area is necessary, but their understanding of
how to do so is nearly opposite. Some reform proposals seek to strengthen the ability of online
digital platforms to moderate digital content, or to strengthen their obligation to do so; others seek
to weaken the ability of online digital platforms to moderate digital content.

The debate takes on a special intensity because the years 2017 – 2020 witnessed an explosion of
statements from the highest levels of the US government that appear to conflict with objective
reality. All US Presidents have made misleading or inaccurate statements from time to time, but the
practice reached an altogether new level in recent years (Washington Post, 2021).

2. Freedom of expression and intermediary liability under US law


The basic principles of freedom of expression, including both freedom of speech and freedom of the
press, are embodied in the First Amendment to the US Constitution. These rights thus go back to the
earliest days of the American republic. The First Amendment states that “Congress shall make no law
… abridging the freedom of speech, or of the press …” Courts subsequently broadened this language
to apply it to states as well. A critical feature of the First Amendment is that it constrains what the
government can do. It seeks to prevent, in strong terms, a government censorship regime. It does not
specifically address interactions among private actors, such as newspapers, broadcast media, online
platforms, and citizens.

The key statutory provision that seeks to ensure freedom of speech over the internet is Section 230,
which was added to the Communications Act by the Telecommunications Act of 1996. Section 230 is



often referred to as the Communications Decency Act.4 Section 230(c) was enacted to try to reconcile
mutually contradictory court rulings as to whether online digital platforms (as we might call them
today) could be held liable for content posted by their users, and in an effort to update the concept
of free speech so as to bring it in line with the nature and operation of the internet:

(1) No provider or user of an interactive computer service shall be treated as the publisher or
speaker of any information provided by another information content provider.
(2) No provider or user of an interactive computer service shall be held liable on account of

(A) any action voluntarily taken in good faith to restrict access to or availability of
material that the provider or user considers to be obscene, lewd, lascivious, filthy,
excessively violent, harassing, or otherwise objectionable, whether or not such material
is constitutionally protected; or

(B) any action taken to enable or make available to information content providers or
others the technical means to restrict access to material described in paragraph (1).

In a simple reading of the legislative text, Section 230(c)(1) can be understood as exempting online
digital platforms from legal liability for content posted by their users, while Section 230(c)(2) exempts
the same platforms from liability for moderation “taken in good faith” and applied to certain forms of
arguably inappropriate content. As we explain in Section 4, however, the exact interpretation of
these two main clauses of Section 230(c) is a key element of the ongoing debate.

3. Statutory authority
As noted at the outset, our primary interest here is on measures that could be undertaken under
present law, because enactment of new law in the USA at present is nearly impossible, even in
instances where both major political parties consider new law to be called for.

The starting point for discussion is thus, what entity has legal authority to interpret existing law in
this space (notably Section 230) so as to bring clarity and to resolve the current debate?

The legal authority of America’s National Regulatory Authority (NRA), the Federal Communications
Commission (FCC), is defined in the Communications Act of 1934 as amended (the most notable set
of amendments having been the Telecommunications Act of 1996). The FCC’s enabling statute is thus
one and the same as the legislative act that contains Section 230.

US courts have been highly sceptical of any actions by the FCC that sought to assert authority in areas
that do not reflect a specific statutory purpose expressed in the Communications Act. Section 230
does not assign any specific powers to the FCC, but it unquestionably constitutes a statutory
purpose. The FCC therefore arguably has the ability to undertake limited actions in this space;
however, in the absence of a grant of specific powers, what might those actions be?

In October 2020, the FCC’s General Counsel posted a quick summary (Johnson, 2020) of his analysis
of the FCC’s authority to interpret Section 230. He concluded that “the FCC has the authority to
interpret all provisions of the Communications Act, including amendments such as Section 230. …

4 In fact, the Communications Decency Act was a larger enactment of which Section 230 was a part. The
Communications Decency Act was intended to prevent adult-oriented sexual materials from being made
available online to those not yet of adult age; in American law, efforts to control access to such materials, both
online and offline, have generated a great deal of legal doctrine under the First Amendment. Large portions of
the Communications Decency Act were struck down by the Supreme Court on First Amendment grounds in
Reno v. American Civil Liberties Union (1997). Section 230, however, was found to be severable from the
portions of the Communications Decency Act that were invalidated. It continues to be in effect.



[This] authority flows from the plain meaning of Section 201(b) of the Communications Act of 1934,
which confers on the FCC the power to issue rules necessary to carry out the provisions of the Act. By
expressly directing that Section 230 be placed into the Communications Act, Congress made clear
that the FCC’s rulemaking authority extended to the provisions of that section. Two seminal U.S.
Supreme Court cases … confirm this conclusion. Based on this authority, the Commission can feel
confident proceeding with a rulemaking to clarify the scope of the Section 230 immunity shield.”

This analysis appears to be legally sound, at least on the surface, but has never been tested in court.
For purposes of this note, we assume that the FCC has considerable ability to interpret Section 230.
As a general matter, as long as the FCC does not attempt to claim powers that it does not explicitly
possess, or to put forward rulings that are flawed either in their logic or in their underlying
administrative process, courts will normally confirm its rulings.5 Indeed, one might well ask, if the FCC
as an independent regulatory agency does not have the authority to interpret its own enabling
statute, who does?

4. Freedom of expression versus authority of online platforms to moderate content and participation
Many of the court cases that have dealt with content moderation have concluded that the
exemption from liability under Section 230(c)(1) provides sufficient grounds to grant free rein to the
online digital platform, without considering the more detailed provisions that Section 230(c)(2)
provides as regards “restrict access to or availability of material”.

In July 2020, the National Telecommunications and Information Administration (NTIA) (a branch of
the US Department of Commerce) lodged a petition for rulemaking with the FCC (NTIA, 2020) in
which they argue that Section 230(c)(2) should have legal force independent of Section 230(c)(1)
inasmuch as it goes beyond 230(c)(1) by protecting content moderation from liability only when it is
“taken in good faith”, and by itemising certain forms of content for which moderation is appropriate (which we go
on to address in Section 5).6

As the NTIA petition notes, the American legal principle of surplusage holds that a statute should
be interpreted such that no part is rendered superfluous. In plain English, if the Congress had
not intended Section 230(c)(2) to mean something beyond what was already expressed in Section
230(c)(1), they would not have enacted it. This implies that Section 230 should not be read in such
a way as to ignore Section 230(c)(2).7
Although the language of Sections 230(c)(1) and (c)(2) is facially independent (very broadly, one
protects platforms from what they leave up, and the other protects them from decisions about what

5 Courts in the US tend to defer to the judgment of an expert agency (so-called Chevron Deference) unless the
agency appears to have acted outside the scope of its statutory authority or in an “arbitrary and capricious”
manner. The issues in litigation over an FCC effort to impose liability or obligations on platforms in reliance on
Section 230 would likely include the following: (1) because online platforms presumably do not provide
“telecommunications services” under the Communications Act; the FCC does not have regulatory authority
over providers of “information services” (the statutory category into which platforms would appear to fall); and
Section 230 itself does not confer any regulatory authority, the agency may possibly lack statutory power to
impose any obligations on platforms; (2) the Supreme Court has recognized the central role of online platforms
to modern political discourse (Packingham v. North Carolina (2017)), so any FCC action that had the effect of
limiting platforms’ discretion regarding what they publish may possibly violate the First Amendment.
6 The NTIA petition needs to be understood as a product of the period in which it was written.
7 If it were relevant here, the jurisprudence of the EU would tend to reach the same conclusion in a slightly
different way. Section 230(c)(2) deals with a more specific case than 230(c)(1). As lex specialis, it would take
precedence over the more general 230(c)(1) (the latter constituting lex generalis).



to take down), the NTIA argues that the two sections should be understood to be interrelated, as
follows. “First, the FCC should make clear that section 230(c)(1) applies to liability directly stemming
from the information provided by third-party users. Section 230(c)(1) does not immunize a platform’s
own speech, its own editorial decisions or comments, or its decisions to restrict access to content or
to bar users from a platform. Second, section 230(c)(2) covers decisions to restrict content or remove
users.”

As far as it goes, this statement broadly conforms to existing US law, which holds that a platform
becomes liable for content posted by someone else (such as a user or an advertiser) only if it
“materially contributes” to whatever it is that makes the content problematic.8 No case holds that a
platform is not liable for its “own speech” or even its own “editorial comments” if those comments
are independently problematic. And most cases considering the issue recognize that a platform’s
decisions about restricting “access to content” are to be evaluated under Section 230(c)(2), not
230(c)(1).9 Particularly with regard to content moderation, we think that this is a logical reading of
Section 230 as it exists today, and that it also makes good sense from a policy perspective. We also
believe that this kind of clarification of Section 230 may well be within the FCC’s statutory authority.

5. Topics amenable to content moderation


Section 230 anticipates possible restriction of access to content that the platform considers to be
“obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable”. There
are thus six explicit categories, four of which are closely interrelated. Moreover, the phrase “otherwise
objectionable” means that the list is somewhat open-ended.

One should bear in mind that the named categories do not require online digital platforms to
perform any content moderation at all; however, they help protect platforms that choose to conduct
content moderation. For a category to be explicitly named probably provides the platform with
somewhat better protection from liability than merely leaving it to a court’s interpretation of what is
“otherwise objectionable”.

These categories would indeed appear to be good candidates for content moderation, but it is
natural to ask whether they cover all of the categories where some limitations on freedom of
expression have historically been recognised in the United States.

In US jurisprudence, the rights to freedom of speech and freedom of the press have never been
absolute. As Supreme Court Justice Kennedy wrote in United States v. Alvarez, the US
Supreme Court has permitted “content-based restrictions on speech … for limited traditional
categories including incitement of imminent lawless action; obscenity; defamation; speech integral
to criminal conduct; so-called ‘fighting words’; child pornography; fraud; true threats; and speech
presenting some grave and imminent threat the government has the power to prevent …” Few of

8 Obviously, courts have to assess whether any platform interfaces, actions, or practices amount to making a
“material contribution” in any particular case.
9 No court decision of which we are aware relies on either provision of Section 230 to protect platforms from
liability for banning or restricting users. Platforms’ rights in that regard arise from their status as private actors,
with no legal obligation to deal with the public at large or any member of it. This distinction is illustrated by an
opinion by conservative Justice Clarence Thomas, agreeing with all other justices in vacating, as moot,
President Trump’s appeal of a lower court ruling holding that the @RealDonaldTrump Twitter feed was a public
forum, subject to First Amendment obligations. Justice Thomas mused that it might be appropriate to treat
online platforms such as Twitter, Facebook or Google as “public utilities” under the common law, subject to an
obligation to serve everyone without discrimination. Biden v. Knight First Amendment Institute (2021).



these are explicitly named in Section 230(c)(2), but most would also appear to be good candidates for
content moderation on the part of online digital platforms.

This does not necessarily imply that there is a problem with the language of the statute. Any or all of
these could arguably fall within the scope of “otherwise objectionable” content. But it would not be
inappropriate, if the FCC were to conduct a rulemaking on Section 230, to include some of these in
an expanded list.

Recent experience has raised any number of new categories of speech that are arguably
objectionable. Candidates include false or misleading health information, for instance about
vaccines; and false or misleading information about elections, or efforts to manipulate their results
(including efforts by foreign governments or their agents). Again, these arguably are within the reach
of Section 230 as written due to the open-ended “otherwise objectionable” language, but if the FCC
were to undertake a rulemaking, it would be logical to include some of the most pressing issues that
confront American society today.

The NTIA argued in its petition (NTIA, 2020) that “otherwise objectionable” should be interpreted as
merely referring to any material similar to the previously enumerated six categories (NTIA, 2020, p.
38). They seek to justify this by means of the US legal principle of eiusdem generis (Latin for “of the
same kind”). Under this principle, where a general residual clause in a statute follows a set of more
specific terms, the general phrase needs to be interpreted in the context of the more specific
language that came before.

We take issue with the NTIA’s proposal on legal grounds, on policy grounds, and also based on the
clear wording of the statute. On the legal side, the NTIA petition’s reading is not foreclosed by the
language of the statute, but it is only one reading out of many. The petition itself makes clear that
court rulings on this very point have been quite diverse. Moreover, to assume that Congress did not
mean “otherwise objectionable” to have meaning beyond that of the previously enumerated six
categories would run counter to the very principle of surplusage that the NTIA cites earlier in its own
petition – courts should avoid interpreting a statute in such a way as to render part of the text as
irrelevant. More directly expressed, if the Congress had not intended the phrase “otherwise
objectionable” to have meaning, they would not have enacted it.

On the policy side, we would argue strongly that the statute should not be interpreted in such a way
as to interfere with the ability of online platforms to moderate aspects of content where courts have
routinely considered limits to freedom of expression to be appropriate. We would further argue
based on policy considerations that online platforms should not be legally blocked from performing
good faith moderation of content in new areas of legitimate concern, such as misinformation about
the pandemic, or about vaccines.

Finally, the Congress itself chose the words “otherwise objectionable”. The presence of “otherwise”
clearly implies that they were thinking of topics beyond the six enumerated categories. Had Congress
wanted to limit the relevant content to materials similar to the enumerated categories, it could have
said that platforms were permitted to block “material that the provider … considers to be obscene,
lewd, lascivious, filthy, excessively violent, harassing, or similar material.” Giving meaning to
Congress’ use of the term “otherwise” would seem to require expanding, not limiting, the categories
of material that platforms can block without fear of liability.

Given that new threats are sure to emerge over time as regards false or misleading information, we
would argue that the “otherwise objectionable” language should continue to have legal force.



A number of courts have suggested that online platforms should not have completely unlimited
discretion under Section 230(c)(2) as to which topics they are permitted to moderate. This is a fair
point. We would argue that an FCC rulemaking following the principles of the Administrative
Procedure Act (APA) is a suitable means by which to arrive at an appropriate expanded list, or
more likely to arrive at a set of criteria for determining whether any given topic is fair game for good
faith moderation on the part of online digital platforms.10

6. A co-regulatory approach to moderation of content and participation
A key question as regards moderation of content and participation is, who gets to decide, and how?

Section 230(c)(2) has an unambiguous answer. It protects providers and users of an interactive
computer service from liability for “any action voluntarily taken in good faith to restrict access to or
availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy,
excessively violent, harassing, or otherwise objectionable … [emphasis added]”

It is thus the provider’s judgment that is relevant here. No role is envisioned for government –
indeed the First Amendment would appear to explicitly rule out any role for the government in
moderation of content (other than content in a category that can be banned under the First
Amendment) or user participation.

In broad regulatory theory, it is common to distinguish among regulation, self-regulation, and
co-regulation. With regulation, the regulatory authority imposes rules. With self-regulation, the
regulated entities themselves determine the rules and procedures that they commit to follow,
typically using means such as standards or codes of conduct. Co-regulation is a hybrid structure
that is finding increasing application in the EU, but that appears to be rare in the USA. With co-
regulation, the regulated entities themselves determine the rules and procedures to which they will
commit, but some public or governmental authority retains a degree of enforcement power.

Self-regulation or co-regulation can be particularly appropriate where the firms know their business
vastly better than the regulator, and where different firms have different business models. These
conditions are largely fulfilled for large online digital platforms. In this kind of environment, the firms
are likely to be far better placed than any regulatory authority to understand what kind of measures
are needed; however, the firms may not want to implement the best rules.

Classic regulation can thus be impractical in cases where the rules that are needed are complex,
difficult to formulate in advance, highly diverse from one regulated entity to another, and/or beyond
the statutory authority of the regulatory authority to impose. Self-regulation is far more flexible, but
often limited in effect because the regulated entities have no incentive to subject themselves to
stringent rules. Co-regulation attempts to combine the best features of both – the flexibility of self-
regulation, but coupled with some enforcement powers.

Classic regulation is clearly inappropriate here because (1) the subject matter is exceedingly complex,
(2) the regulatory authority is inherently subject to multiple perceived or actual conflicts of interest,
and (3) even if these were not the case, a classic regulatory approach is clearly incompatible with the
First Amendment.

10 As discussed below, a separate question is whether the platform reserves to itself broad rights to moderate
content under its terms of service, which constitutes the contract between the platform and its users. That is,
even if Section 230(c)(2) does not immunize a platform from liability for content moderation, its terms of
service might.



Self-regulation is approximately what we have today. As is often the case, it is proving to be
unsatisfactory to large segments of society. Many feel that the digital online platforms are not doing
enough, or are self-serving, or are themselves subject to conflicts of interest. In fairness to the
platform firms, one should also note that this is an exceedingly difficult problem.

Taking all of this into account, we think that there is a very strong argument for putting a co-
regulatory structure in place. We believe that doing so is compatible with Section 230 as currently
written, and is within the FCC’s authority to implement.

The basic notion is that the FCC would establish a process whereby firms could lodge with the FCC
(and with the US Federal Trade Commission (FTC) for reasons that we explain in Section 8) a full
explanation of the process that they use to moderate content, and to prevent users from posting
content or to eject them from the service altogether.

Through a formal rulemaking procedure (accompanied by a proper public consultation as required
under the Administrative Procedure Act (APA)), the FCC would identify a list of requirements that
such a plan should have. It is possible to identify some elements that such a plan should presumably
include. The plan should for instance identify the process that the firm will follow in deciding to
remove content, or to revoke a user’s ability to post content. If the firm will sometimes flag
apparently misleading content without removing it, that should be reflected. If the firm uses
automated tools (e.g. artificial intelligence /machine learning systems) to identify harmful content,
the plan should reflect this. If large numbers of human content reviewers are employed (possibly
residing in a third country), the plan should reflect this as well, and should document in general
terms the procedures that they will follow. An appeals process should be a requirement, and should
be clearly documented. For an appeals process to be meaningful, the platform must have a process
for explaining its rationale in removing content or ejecting or muzzling an individual. If a panel of
independent experts for the appeals process is envisioned, this should once again be reflected.
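Purely by way of illustration, the kind of plan described above could also be lodged in a machine-readable form, which would make it easier for the agencies (and for the private raters discussed in Section 7) to compare plans across platforms. The Python sketch below is ours alone; the field names are hypothetical and simply mirror the elements enumerated in the preceding paragraph, not any actual FCC or FTC filing requirement.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class AppealsProcess:
        # Hypothetical fields mirroring the elements suggested above.
        rationale_provided_to_user: bool      # the platform explains why content was removed
        independent_expert_panel: bool        # whether an external panel reviews appeals
        target_response_days: int             # service-level target for resolving an appeal

    @dataclass
    class ModerationPlan:
        platform_name: str
        removal_decision_process: str         # how decisions to remove content are reached
        flags_without_removal: bool           # whether misleading content may be labelled rather than removed
        automated_tools_used: bool            # e.g. AI/ML classifiers used to identify harmful content
        human_reviewers: int                  # approximate number of human content reviewers
        reviewer_locations: List[str]         # countries where reviewers are based, in general terms
        reviewer_procedures: str              # general description of the procedures reviewers follow
        appeals: AppealsProcess
        moderated_categories: List[str] = field(default_factory=list)  # e.g. "harassment"

    # A fictional example of the kind of plan a platform might lodge.
    example_plan = ModerationPlan(
        platform_name="ExampleSocial",
        removal_decision_process="Automated triage followed by human confirmation.",
        flags_without_removal=True,
        automated_tools_used=True,
        human_reviewers=1500,
        reviewer_locations=["US", "Ireland", "Philippines"],
        reviewer_procedures="Reviewers apply the published community guidelines.",
        appeals=AppealsProcess(rationale_provided_to_user=True,
                               independent_expert_panel=True,
                               target_response_days=14),
        moderated_categories=["harassment", "excessive violence", "health misinformation"],
    )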

The FCC, or more likely the FCC and FTC jointly, could review plans and could publicly post plans that
have been accepted. Any review should be limited to procedural matters, and should avoid any
judgments as to the ideology or philosophy of the platform firm. This is in keeping with the First
Amendment, and with the text of Section 230 which leaves judgments to the discretion of the online
digital platform. As long as there are multiple platforms and media pluralism is maintained,
this is in order. There is no problem for the broader society if some users prefer to get their
information from Fox News, and others from MSNBC.

The FCC rulemaking should establish that any firm that has had its content moderation and user
participation plan approved by FCC/FTC, and that has acted in accordance with its plan in a particular
instance of content moderation, should be deemed as having acted in “good faith” in the sense
meant by Section 230(c)(2). This presumably goes a long way toward reducing the platform’s legal
risk in the event of any litigation.

Is it necessary to oblige all online digital platforms to submit a plan? This is a question that could
perhaps best be addressed through the rulemaking procedure. It is quite possible that the threat of
legal liability that would be associated with not explicitly being identified as operating in “good faith”
is more than adequate to motivate most platforms to submit a plan.

It might also be appropriate to require platforms above a certain size (measured by the average
number of distinct monthly or daily users) to submit a plan (see Section 9), while permitting, but not
requiring, smaller platforms to do so.
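By way of illustration only, a size-based trigger of this kind could be computed as in the sketch below. The threshold figure and the function name are hypothetical placeholders of our own; no particular figure is specified in the statute or proposed here.

    from statistics import mean
    from typing import Dict

    def plan_required(monthly_distinct_users: Dict[str, int],
                      threshold: int = 30_000_000) -> bool:
        """Return True if the average number of distinct monthly users over the
        reporting period exceeds the (hypothetical) threshold, in which case
        submitting a moderation plan would be mandatory rather than optional."""
        return mean(monthly_distinct_users.values()) > threshold

    # Fictional reporting data: distinct users per month.
    usage = {"2022-01": 28_000_000, "2022-02": 31_000_000, "2022-03": 33_500_000}
    print(plan_required(usage))  # prints True: the average (about 30.8 million) exceeds the threshold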



7. A possible role for private rating of the monitoring process
We argue here that neither the FCC nor the FTC should comment on whatever ideological or philosophical
leaning a particular platform might have, but that does not prevent private rating agencies from
doing so and from providing (or withholding) a trustmark to or from an online platform. A non-profit
that specialises in privacy and quality of online digital services, or perhaps several of them jointly,
might for instance choose to issue a trustmark to firms that consistently follow good practice in their
fact-checking. There is nothing to prevent such a trustmark initiative from also taking into account
whether the online platform in question seems to be aligned with their organisation’s own ideology –
this is in line with the guarantee of free expression that we find in the First Amendment.

If the FCC undertakes the rulemaking that we suggest here, it should also consider measures that
might facilitate the work of private rating organisations. For example, standards for expressing the
grounds on which content was removed would facilitate the ability of raters to determine whether
the removal of the content was in line with the platform’s stated policies. Similar considerations
apply to decisions to eject someone from the platform.
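Purely as an illustration of what such a standard might look like, the sketch below describes a minimal structured record for a single moderation action; the schema and field names are hypothetical and are not drawn from any existing standard or platform practice.

    from dataclasses import dataclass
    from datetime import datetime
    from typing import Optional, Set

    @dataclass
    class ModerationAction:
        content_id: str                  # opaque identifier of the affected post or account
        action: str                      # e.g. "removed", "flagged", "user_suspended", "user_banned"
        policy_clause: str               # the platform policy relied upon, e.g. "Community Guidelines 4.2"
        category: str                    # e.g. "harassment", "health misinformation", "otherwise objectionable"
        decided_at: datetime
        automated: bool                  # whether the decision was made by an automated system
        appeal_available: bool
        rationale: Optional[str] = None  # short human-readable explanation provided to the user

    # A private rating organisation could then check whether each action cites a clause
    # that actually appears in the platform's filed plan or published terms of service.
    def consistent_with_plan(action: ModerationAction, plan_clauses: Set[str]) -> bool:
        return action.policy_clause in plan_clauses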

In time, the raters themselves might come to be rated.

8. Enforcement
In principle, there is no reason why the FCC could not be the enforcement agency for the
co-regulatory scheme proposed in Section 6. Our sense is, however, that the Federal Trade
Commission is already well-positioned to play a crucial role in enforcement.

The first reason for this suggestion has to do with staff competence. The FCC has broad jurisdiction,
but nearly all of its explicit powers relate to providers of telecommunications services. These are
firms that move data over wires. The FCC has only limited experience in dealing with online digital
platforms such as search engines or social networks, and minimal experience with the issues that
arise in modern online content moderation. The FTC, by contrast, has considerable expertise in
dealing with these firms in the context of privacy, consumer protection, and competition law.

The second reason is that the FTC has a very extensive track record in enforcing obligations where
firms have failed to comply with undertakings that they have made. This is very much in line with
processes that they routinely implement.

Working from our assumption that Section 230 empowers the FCC to take action in this area, the FCC
would presumably have the authority to declare that a firm is failing to moderate content or
participation in “good faith” because it systematically (and not just occasionally) fails to live up to the
commitments that it has made in its plan. It is not clear, however, what enforcement authority the
FCC would have.11 However, in the area of online privacy, the FTC has long taken the position that
when an online entity fails to follow its own stated privacy policy, that constitutes a “deceptive
practice” that violates the FTC’s enabling statute, 15 U.S.C. § 45. From this perspective, once a
platform has publicly stated that it will abide by a particular content moderation policy – whether or
not it formally files that policy with the FCC – failure to do so would constitute an independent
violation of Section 45, subject to investigation and enforcement by the FTC. A mechanism could be
put in place whereby the FTC makes such a recommendation to the FCC in the extreme case where it

11 The FCC’s enforcement authority under 47 U.S.C. §§ 501 et seq. is reasonably robust, but broadly speaking is
limited to violations of the Communications Act or the FCC’s own regulations. Under the regulatory model
proposed here, the FCC requirement would be that a platform file a content moderation policy, not that the
platform literally take actions that conform to the policy.



is needed. In most cases, the threat should be more than adequate to bring a digital platform firm
into line, since it would put into question the firm’s immunity from lawsuits under Section 230.
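At the risk of oversimplification, the escalation logic just described can be summarised as a simple decision rule, sketched below. The numerical threshold for what counts as “systematic” (rather than occasional) deviation is a hypothetical placeholder of our own; nothing of the kind appears in the statute or in FTC practice.

    def enforcement_outcome(total_reviewed_actions: int,
                            actions_deviating_from_plan: int,
                            systematic_threshold: float = 0.05) -> str:
        # Hypothetical escalation logic: occasional deviations are tolerated;
        # systematic deviation triggers FTC action and, in the extreme case,
        # an FTC recommendation to the FCC regarding the "good faith" presumption.
        if total_reviewed_actions == 0:
            return "no basis for action"
        deviation_rate = actions_deviating_from_plan / total_reviewed_actions
        if deviation_rate <= systematic_threshold:
            return "no enforcement action (occasional deviation only)"
        return ("FTC deceptive-practice investigation; possible recommendation "
                "to the FCC that the platform's good-faith presumption be withdrawn")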

9. Transatlantic cooperation
The European Union has been making progressively greater use of co-regulation over time. The
proposed Digital Services Act (DSA) that is currently making its way through the European Parliament
and the Council (European Commission, 2020) will be taking a somewhat similar co-regulatory
approach to any threats that the largest online digital platforms might pose to rights that are viewed
as fundamental in the EU.

The DSA incorporates provisions similar to those of Section 230, replacing similar provisions in the
e-Commerce Directive (Directive 2000/31/EC). It deals explicitly with illegal content; however,
content that is harmful but not illegal can be dealt with (for the largest online digital platforms)
through a co-regulatory structure. These portions of DSA apply only to very large online platforms,
which are defined as online platforms that deliver their services to more than 45 million EU
recipients on average per month.

These platforms are obliged to provide an annual assessment of “any significant systemic risks
stemming from the functioning and use made of their services in the Union”, to include “any
negative effects for the exercise of the fundamental rights to respect for private and family life,
freedom of expression and information, the prohibition of discrimination and the rights of the child”
(Art. 26 DSA). They are required to put in place measures to mitigate those risks (Art. 27 DSA), and to
annually audit their compliance with a range of obligations (Art. 28 DSA). Many forms of harmful but
not illegal expression12 are likely to fall within the scope of these provisions.

All of the largest US-based online digital platforms do substantial business in the EU, and most are
likely to be subject to these provisions.

If the US implements any revisions to Section 230 with due care, and assuming that the EU and US
continue to be engaged on these issues, it seems likely that platforms would be able to use
substantially the same plan to comply both with the new EU obligations, and with the Section 230
obligations that we are putting forward here. In other words, if intelligently implemented, a single
process could potentially achieve compliance with the emerging regulation in the EU and with the
approach that we are proposing for the USA. This has obvious advantages for the firms inasmuch as it
avoids the need for duplication of effort, and is likely to generate a wide range of positive outcomes.

If there is interest in pursuing this, careful coordination between the European Commission and the
US government would be in order. The EU-US Trade and Technology Council (EU-US TTC) could
provide a suitable forum for dealing with these issues.

10. Summary
The need for better, more timely and more effective monitoring of content and of participation in
online digital platforms in the USA is manifest and urgent.

The approach suggested here could represent a positive step to enable and empower online
platforms to moderate content and participation. It is fully in line with both the text and the spirit of
Section 230 as written – no new legislation is required.

12 Expression that is illegal, as distinct from being harmful but legal, is dealt with explicitly in other articles of
the DSA.



The FCC already has sufficient authority to implement what is proposed. As usual, the proposals
brought forward here can and should be clarified through an FCC rulemaking and associated public
consultation process in line with the APA.

Under the proposed co-regulatory approach, the online digital platform firms themselves would
propose how they intend to moderate content and participation; however, it would not give the
firms a “blank check”. The firms’ commitments, their effectiveness over time, and the firms’
compliance with their commitments would still be subject to review by the FCC and/or FTC.

If done intelligently, it could not only provide a coherent approach for the US, but could also provide
a rulebook that would be broadly consistent with that of the EU. This would have the additional
benefit that it would incur little increase in net burden on the online platform firms, since the EU’s
emerging DSA will require the same firms to undertake roughly the same obligations.

The need for better monitoring of content and participation is an exceedingly difficult problem. It is
unlikely that the measures put forward here can fully solve it; however, we think that the suggestions
put forward here could provide the basis for an actionable plan that could represent a significant
improvement over the current situation.



References
European Commission. (2020). Proposal for a Regulation ... on a Single Market For Digital Services
(Digital Services Act) and amending Directive 2000/31/EC. Retrieved 13 February 2022.

Johnson, T. M. (2020). The FCC's Authority to Interpret Section 230 of the Communications Act.
Retrieved 12 February 2022 from https://www.fcc.gov/news-events/blog/2020/10/21/fccs-authority-interpret-section-230-communications-act

NTIA. (2020). In the Matter of Section 230 of the Communications Act of 1934. Petition for
rulemaking of the National Telecommunications and Information Administration.

Rodrigo, C. M. (1 February 2022). Graham, Blumenthal reintroduce controversial Section 230 bill.
The Hill. Retrieved 13 February 2022 from https://thehill.com/policy/technology/592301-graham-blumenthal-reintroduce-controversial-section-230-bill

Washington Post. (23 January 2021). A term of untruths. Washington Post. Retrieved from
https://www.washingtonpost.com/politics/interactive/2021/timeline-trump-claims-as-president/
