Submission - Online Harms - G Smith - Further Reference

ONLINE HARMS WHITE PAPER - RESPONSE TO CONSULTATION

28 June 2019

SUBMISSION BY GRAHAM SMITH

Graham Smith is a solicitor in private practice in London. He is the editor and main
author of the legal textbook Internet Law and Regulation (Sweet & Maxwell, 5th edition
forthcoming). He also writes the Cyberleagle blog.

This submission is made in Mr Smith’s personal capacity. The views expressed are not
attributable to the law firm at which he works or to any of its clients.

Contents

Introduction ................................................................................................................................. 2
1. The proper role and limits of a duty of care ........................................................................ 2
2. Speech and personal safety .................................................................................................. 6
3. Parallel legal regime ............................................................................................................. 7
4. Breadth of powers ................................................................................................................ 9
5. Most easily offended reader ............................................................................................... 10
6. Manifest illegality ................................................................................................................ 12
7. Certainty of law ...................................................................................................................14
8. Regulatory models...............................................................................................................16
9. An alternative approach ...................................................................................................... 17
ANNEX ....................................................................................................................................... 20
Commentaries incorporated by reference............................................................................. 20

INTRODUCTION

This submission addresses the central question raised by the White Paper (but on which the
government has not solicited responses), namely whether the government is on the right
path with its proposal of a duty of care1 and online regulator.

The author has written previously about the White Paper and the debate leading up to it.
Rather than repeat all the points covered in those pieces, they are incorporated by reference
and listed in the Annex.

1. THE PROPER ROLE AND LIMITS OF A DUTY OF CARE

“…private individuals and bodies, are generally under no duty of care to
prevent the occurrence of harm”

“…private individuals and bodies, generally owe no duty of care
towards individuals to prevent them from being harmed by the conduct
of a third party”

UK Supreme Court, Robinson v Chief Constable of West Yorkshire Police [2018] UKSC 4

1.1 The notion of a duty of care is as common in everyday parlance as it is misunderstood.
In order to illustrate the extent to which the White Paper abandons the principles
underpinning existing duties of care, and the serious problems to which that would
inevitably give rise, this submission begins with a summary of the role and ambit of
safety-related2 duties of care as they currently exist in law.

1.2 A generic duty of care is the exception, not the norm Imposition of a generic
duty of care is an atypical3 approach to incentivising or discouraging behaviour.

1 The White Paper does not propose to introduce an individual right of action for breach of the
proposed statutory duty of care and would not extend the existing common law tortious duty of care.
It proposes a statutory duty enforced by a regulator rather than a duty of care properly so called.
However, since the government’s own consistent framing is that this is a duty of care – indeed the
Culture Secretary in his letter to the Society of Editors on 10 April 2019 refers to ‘a duty of care
between companies and their users’ – it is apposite to draw comparisons with offline duties of care.
2 The White Paper’s starting point is that the proposed duty of care is about online safety. Safety-related duties of care such as occupier’s liability or health and safety are in any case the appropriate
comparable for the purpose of the duty contemplated by the White Paper (cf the work of Professor
Lorna Woods and William Perrin for the Carnegie UK Trust). Negligence liability based on a duty of
care also exists for reliance on professional advice, negligent misstatement and so on, for which
economic loss is recoverable. However these reliance-based kinds of liability (a) require a special
relationship or assumption of responsibility between the person providing the information and the
person relying on it and (b) have no application where the duty contemplated is, as proposed by the
White Paper, a duty to prevent one third party injuring another third party. The operator of a theme
park owes no duty of care to prevent visitors making inaccurate statements to each other.
3 Negligence causing personal injury accounts for by far the greatest volume of civil claims.
Nevertheless, as a conceptual approach to liability a generic duty of care is the exception rather than
the norm.
Typically the legislature enacts subject-specific legislation, tailoring the prohibited
wrongdoing to the deprecated activity and setting out clearly and precisely which
actors are liable, in what circumstances and for what kind of behaviour.

1.3 Exceptionally, the common law or the legislature has developed a more generic basis
of liability based on failure to take reasonable care to avoid injuring someone else.
Failure to take reasonable care can result in legal consequences only where a duty of
care is imposed in the first place, whether by statute (e.g. Health and Safety Act,
Occupiers' Liability Act) or by the common law (negligence).

1.4 Limits on duty of care exist for fundamental policy reasons A generic basis
for liability has the potential to spread like spilt milk, with negative impacts on society
at large as well as unjust consequences for the person subject to the liability. The
circumstances in which a duty of care can come into existence, and its scope, are
therefore deliberately limited. The limits relate to kinds of damage, proximity,
causation, remoteness, acts versus omissions, acts of third parties, the class of persons
to whom a duty is owed, and others.

1.5 The statements by the UK Supreme Court in Robinson quoted above thus reflect
carefully considered limits on the existence and extent of existing duties of care,
whether offline or online. The purely preventive, omission-based kind of duty of care
in respect of third party conduct contemplated by the White Paper is exactly that which
generally does not exist offline, even for physical injury. The ordinary duty is to avoid
inflicting injury, not to prevent someone else from inflicting it.

1.6 Duties of care apply to safety properly so called Safety properly so-called – risk
of physical injury – is the subject matter of existing comparable4 duties of care. The
more nebulous and subjective kinds of harm to which the White Paper would extend
its proposed online duty have no place in these.

1.7 The White Paper does not acknowledge its radical departure from existing
principles The limits on duties of care exist for policy reasons that have been
explored, debated and developed over many years. Those reasons have not evaporated
in a puff of ones and zeros simply because we are discussing the internet and social
media. The government’s proposed online duty of care would disregard these and other
limits, but the White Paper does not acknowledge its radical departure from existing
principles.

1.8 Own conduct versus third party conduct is a crucial distinction The
distinction affirmed in Robinson between a person’s own conduct and prevention of
harm caused by that of a third party is crucial to an understanding of how far the White
Paper proposes to depart from the principles underpinning the existing duty of care
regime.

4Comparable duties of care are those owed by occupiers to visitors under the Occupier's Liability Act
and in negligence. Psychological damage is encompassed only in limited circumstances for tightly
drawn classes of persons, and dependent on actual or immediately apprehended physical injury. A
duty extending to welfare may be owed where there is a special relationship with the potential victim,
such as someone in loco parentis to a child or an employer to an employee (but not an employer to a
visitor). An online intermediary is not in such a relationship with its users.
1.9 The proposed preventative duty has been analogised to requiring an employer to fix a
loose floorboard before someone trips over it5. That is the stuff of everyday safety-
related duties of care. It is a duty owed in respect of the employer’s own activity, which
may itself cause injury to a visitor.

1.10 No duty to prevent injury from third party conduct in the absence of
specific risk creation Prevention of harm caused by a third party’s conduct is a
different matter. The analogy would be with a duty on an employer to prevent a visitor
to the employer’s premises from attacking another visitor.
1.11 No such duty exists, unless the employer has by some specific action of its own created
or increased the risk of that occurring, or has assumed responsibility for it5a - in
which case a limited duty of care may (but will not necessarily) be imposed. Where
such a duty does exist, it concerns safety in the well understood sense of the word. It
would not go beyond risk of physical injury6.
1.12 Serving alcohol at a bar7, inviting people to play golf at a golf course8, inviting to a
stadium football fans with a known likelihood of violence9, are examples of risk-
creating activities that have been held to justify imposing a duty of care in respect of
harm caused by (respectively), one patron attacking another with a knife, a visitor
being struck by an errant golf ball, or fans prising up loose lumps of concrete and using
them as missiles.
1.13 The proposed duty of care inevitably concerns third party conduct The
online duty proposed by the government falls squarely in the territory of preventing
harm by the conduct of a third party. It concerns harm caused by the conduct of users,
not the conduct of the intermediaries themselves. The trolls, cyberbullies, harassers,
misogynists, groomers, disinformation merchants, spreaders of CSAM and terrorist
material are all online users. The proposed duty of care would impose a duty on
intermediaries to prevent these third party activities.

1.14 Risk creation online? It may be suggested that some intermediaries in some
respects create or increase the risk of such conduct10. To the extent that that could be
shown to be so, then by analogy with offline duties of care that might justify the
imposition of an appropriately limited duty of care in respect of a specific risk of a
particular kind of injury created by a certain feature of the intermediary’s activities.

5 Woods/Perrin/Walsh: Summary Response to the Online Harms White Paper from the Carnegie
UK Trust. www.carnegieuktrust.org.uk/blog/online-harms-response-cukt/. See quotation below.
5a See e.g. Al-Najar and Others v Cumberland Hotel (London) Ltd [2019] EWHC 1593 (QB).

6 Although safety also encompasses physical damage to property, it is difficult to see how that would be relevant to an online duty of care. It might be argued that existing duties of care should extend to
risk of standalone psychological harm. However that does not represent the existing law. The White
Paper goes even further than that.
7 Everett & Anor v Comojo (UK) Ltd (t/a the Metropolitan) & Ors [2011] EWCA Civ 13.

8 Phee v Gordon [2011] CSOH 181.

9 Cunningham and Others v Reading Football Club Limited [1991] Lexis Citation 2754.

10 See, for instance, D. Tambini "Reducing Online Harms through a Differentiated Duty of Care",
Foundation for Law, Justice and Society, page 2 (2nd bullet).


1.15 Even so limited, application to the broad range of behaviour described in the White
Paper would entail extending the notion of a preventative duty of care far beyond the
risk of physical injury that applies in the offline world.

1.16 Risk creation versus general duty of care An approach focused on risk creation
would refrain from imposing a general duty of care on all kinds of intermediary for all
kinds of third party user conduct considered to be harmful, without considering
whether the intermediary in question was engaging in some specific activity that
created or increased risk of a relevant kind of harm.

1.17 Thus even if a duty of care were to extend beyond risk of physical injury to some other
specific kinds of harm, on that approach before a duty of care could be said to exist in
a particular case it would still be necessary to identify the feature said to give rise to
the risk and the kind of harm that could be directly and foreseeably connected to the
existence of that feature.

1.18 Facilitation of speech is not a risk-creating activity In that regard, it is
pertinent to ask whether, for instance, a user who abuses an MP on a social media
platform does so from their own choice, or as a result of some feature of the social
media platform? Trivially, it might be said that the mere existence of the platform
enables a user to do so. That, however, is not enough. It does not identify a specific
feature that creates or increases the risk of the user speaking in that way, or of someone
suffering harm (however defined) as a result.

1.19 If it be said that mere facilitation of users’ individual public speech is sufficient to
justify control via a preventive duty of care placed on intermediaries, that proposition
should be squarely confronted. It would be tantamount to asserting that individual
speech is to be regarded by default as a harm to be mitigated, rather than as the
fundamental right of human beings in a free society. As such the proposition would
represent an existential challenge to the right of individual freedom of speech.

1.20 The White Paper does not address these principles when proposing a
general duty of care These are the kinds of issues that one might expect the White
Paper to consider carefully before proposing a wide-ranging online duty of care that
departs so radically from established principles. However, such discussion is absent.

1.21 Instead, the White Paper assumes that the various kinds of user behaviour which it
catalogues ought, by the platform’s mere existence, to be attributed to any platform on
which it takes place. The proposed duty would automatically apply to any operator that
fits the description of allowing users to share or discover user-generated content, or
interact with each other online: from Facebook and Twitter to Mumsnet and beyond,
by way of World of Warcraft and the John Lewis customer review section11.

1.22 A general duty of care in relation to third party activities combined with
subjective harms causes the very problems that the limits on offline duties
of care are designed to avoid As a matter of formulating duties of care according
to established principles, this approach is deeply flawed. The White Paper compounds

11 White Paper, paragraph 4.1. The operators in scope are further expanded in paragraph 4.2 and
apparently (paragraph 4.5) would include those who provide ancillary services.
the sin by extending its concept of harm far beyond the notion of physical harm
suffered by individual people.

1.23 By including both subjective individual harms and nebulous harms to society, the
government has brought upon itself the very problems, notably damage to legitimate
online freedom of expression, that are avoided by the deliberately crafted limits to
offline duties of care.

2. SPEECH AND PERSONAL SAFETY

“It is difficult to envisage any circumstances in which speech which is
not deceptive, threatening or possibly abusive, could give rise to
liability in tort for wilful infringement of another’s right to personal
safety. The right to report the truth is justification in itself. That is not
to say that the right of disclosure is absolute … . But there is no general
law prohibiting the publication of facts which will cause distress to
another, even if that is the person’s intention.”

UK Supreme Court, Rhodes v OPO [2015] UKSC 32.

2.1 Caution must be exercised in applying physical world concepts of injury to
speech The Rhodes case aptly illustrates the caution that has to be exercised in
applying physical world concepts of harm, injury and safety to communication and
speech. That is even before considering the further step of imposing a duty of care on
a platform to take steps to reduce the risk of their occurrence as between third parties,
or the yet further step of appointing a regulator to superintend the platform’s systems
for doing so.

2.2 The White Paper proposals would likely suppress material that the UK
Supreme Court has held should be permitted The White Paper would place a
duty on intermediaries that would most likely result in the suppression, or at least
restriction12, of material of the kind discussed in Rhodes. Yet the Supreme Court held
in the strongest terms that suppression could not be compelled. Indeed the author had
the right to communicate his experiences using brutal language, not subject to
bowdlerisation by the court:

“His writing contains dark descriptions of emotional hell, self-hatred and
rage, … . The reader gains an insight into his pain but also his resilience and
achievements. To lighten the darkness would reduce its effect. The court has
taken editorial control over the manner in which the appellant’s story is
expressed. A right to convey information to the public carries with it a right to
choose the language in which it is expressed in order to convey the
information most effectively.” [78]

2.3 The claim in Rhodes was brought for the protection of a child. The case was about
whether the author of an autobiography should be prevented from publishing by an
interim injunction. The claim was that, if his child were to read it, the author would be
intentionally causing distress to the child as a result of the blunt and graphic

12 Nothing in the White Paper appears to prevent the duty of care requiring removal of legal but
harmful material. In some places the White Paper appears to contemplate removal.
descriptions of the abuse that the author had himself suffered as a child. It was brought
under the rule in Wilkinson v Downton, which by way of exception from the general
principles of liability permits recovery for deliberately inflicted severe distress
resulting in psychiatric illness.

2.4 The Supreme Court stressed the need to consider:

“the wider question of justification based on the legitimate interest of the
defendant in telling his story to the world at large in the way in which he
wishes to tell it, and the corresponding interest of the public in hearing his
story. … ” [75]

and emphasised that:

“… there is a corresponding public interest in others being able to listen to his
life story in all its searing detail”.

2.5 A duty of care framed in the terms proposed in the White Paper could readily be
interpreted so as to deem such material as harmful and subject to suppression or
restriction.

3. PARALLEL LEGAL REGIME

3.1 The duty of care would trump existing legislation The Rhodes case study
illustrates the extent to which the proposed duty of care would, to all intents and
purposes, set up a parallel legal regime controlling speech online, comprising rules
devised by the proposed regulator under the umbrella of a general rubric of harm.

3.2 This parallel regime would in practice take precedence over the body of legislation in
the statute book and common law that has been carefully crafted to address the
boundaries of a wide variety of different kinds of speech.

3.3 Rhodes is but one convenient illustration of these issues. There are others.

3.3.1 Example: Blasphemy In 2008 Parliament took a considered decision to
abolish the criminal offence of blasphemy. Blasphemous material is free to
circulate as long as it does not contravene any other law (such as inciting
religious hatred).

3.3.2 The White Paper proposals would enable the regulator to deem blasphemous
material to be harmful and to require intermediaries, as part of their duty of
care, to take steps to restrict or, possibly, suppress it. Thus, by a sidewind, a
deliberate decision of Parliament would be circumvented.

3.3.3 Example: Twitter joke trial Would an intermediary subject to the duty of
care have considered that at least arguably it had to remove the Twitter joke
trial tweet which was subsequently found to be lawful?

3.3.4 Example: Harassment Suppose that a post does not quite qualify as
harassment under the 1997 Act. What is to stop it being deemed harmful and
still liable to be restricted or removed under the duty of care?

3.4 Backdoor legislation If a Secretary of State decides that, say, anti-vaxxers should
be silenced, then the proper course for such a serious matter is to bring legislation
before Parliament, have the matter debated and the precise contours of the legislation
considered, amended (if necessary) and either passed or rejected.

3.5 Under the White Paper's proposal that democratic procedure can be bypassed. The
regulator (perhaps with an ear open to the political wind) can deem it harmful and the
co-opted intermediaries are sent into action.

3.6 A silencing law has, in effect, been created on the fly by a regulator. Whether one is in
favour or not of silencing anti-vaxxers is beside the point. The process builds in a
democratic deficit13.

3.7 Bright lines required Unless a bright line were drawn in any legislation whereby
(for instance) lawful material could not be subjected to a duty to remove, and express
limits were placed on the steps that could be required under a duty of care, then it
seems very likely, certainly possible, that determination of illegality would become
largely irrelevant, subsumed in the broader category of merely harmful.

3.8 Widening gap with offline It is highly likely that the parallel regime would become
more restrictive than offline. There are already instances of material being banned
from platforms, the offline equivalents of which circulate freely. A recent example is
this book jacket cover on a Twitter profile.

13 That deficit is not cured by the possibility, contemplated in Consultation Question 4, of placing
Codes of Practice before Parliament. Neither the quality of debate required for primary legislation, nor
the possibility of amendment, would apply to laying a detailed Code of Practice.
3.9 The government's proposals would serve to widen the divide by forcing online
platforms to take cautious decisions in the name of preventing amorphous kinds of
harm.

3.10 The duty of care would conflict with the presumption against prior
restraint The duty of care would also, since the emphasis is on prevention rather than
action after the event, create an inherent conflict with the presumption against prior
restraint, a long standing principle designed to provide procedural protection for
freedom of expression.

4. BREADTH OF POWERS

4.1 Given the White Paper's unbounded concept of harm, the broad swathe of
intermediaries that would fall in scope, and the emphasis on flexibility, the regulator
would automatically be imbued with broad powers to determine what complies with
an intermediary’s duty of care.

4.2 Those powers would inevitably trespass into areas that are the province of other
existing legislation, or should be the sole preserve of fully debated future primary
legislation. The following examples are presented by way of illustration only. In the
nature of the proposed regime it is impossible to predict everything that a regulator
might propose.

4.2.1 Example: Age verification The most obvious example is age verification.
Two regulators are already engaged in legislating controversial age
verification requirements. This proposal would add a third.

4.2.2 Example: End to end encryption What would prevent the regulator from
requiring an in-scope private messaging service14 to remove end-to-end
encryption? This is a highly sensitive topic which was the subject of
considerable Parliamentary debate during the passage of the Investigatory
Powers Bill. It is unsuited to be delegated to the discretion of a regulator15.

4.2.3 Example: Real-name policies What would prevent the regulator from
requiring intermediaries to fulfil their duty of care by adopting ‘real-name’
policies? Again this would be a highly controversial step, unsuited to being
delegated to the discretion of a regulator16.

14 The question of how the proposed regime should apply to private messaging services is the subject
of a Consultation question. Nevertheless the government appears to be contemplating that a
sufficiently large messaging group could be subject to scanning and monitoring obligations on the part
of the provider. That plainly raises the issue of encryption.
15 Nor would Parliamentary approval of a Code of Practice provide sufficient reassurance. See note 13 above.
16 Ditto.

4.3 Inherent in broad scope These examples are not outliers, or exceptional problems
that can be overcome with a few specific statutory exclusions or 'due regard' clauses.
They are inherent in the broad scope of the proposed regime.

5. MOST EASILY OFFENDED READER

5.1 A significant issue with extending a duty of care to subjective harms is that everyone
reacts differently to reading particular content. Some may be distressed, annoyed,
shocked, disturbed, offended (or whatever possible reaction is deemed by the regulator
to constitute harm). Many may not.

5.2 Notional reader Evaluating risk of harm therefore involves setting up a notional
reader with notional attributes. What are those attributes? Are they those of the most
easily offended? Or some other standard? If risk to the most easily ‘harmed’ is the
standard, what of those who would not be harmed at all by reading the material, indeed
who would benefit from reading it?

5.3 The problem of subjective harms This problem is created by the extension of the
duty of care beyond the offline limit of objectively ascertainable physical injury. It
simply does not arise for risk of physical injury. We all react in the same way to a knife
stuck into our ribs.

5.4 Consider the loose floorboard posited by Woods, Perrin and Walsh17:

“In our view, many of the difficulties around harm can be ameliorated by
focusing on the means by which types of harm are likely to arise. Taking a
real-world analogy, if an employer sees a floorboard sticking up that person
would not think “will someone break their leg or just twist their ankle?” but
would ensure the floorboard is fixed. The process is about identification of
something causing a risk of relevant harm, in which the precise nature of the
harm does not need to be precisely identified or quantified.”

5.5 Kind versus degree of harm Although this analogy is not directly relevant for the
reasons explained above (causing injury versus preventing third party actions), it well
illustrates the need to distinguish between degree of harm and kind of harm.

5.6 A loose floorboard creates a foreseeable risk of physical injury for anyone. No matter
whether they are child or adult, fit or vulnerable, someone who trips over it is liable to
be hurt by the physical impact.

5.7 How badly cannot be predicted. The degree of likely physical injury does not need to
be precisely identified or quantified. It is sufficient that that kind of injury is a
reasonably foreseeable consequence of being sent flying by a loose floorboard. That is
a risk for anyone and a benefit for no-one18.

17 Para 1.9 above.


18 It may be relevant to consider matters such as ‘amenity value’. A loose, or at least uneven,
floorboard is likely to be more acceptable in an old building, taking into account the benefit of
preserving its character, than in a new one. However that is a basis on which to limit the scope of a
duty of care in relation to physical injury. It does not provide a ground on which to extend a duty of
care to effects subjectively experienced by a person.
5.8 Speech is not a tripping hazard Subjective speech harms19 are quite different,
since different people react differently to encountering the same speech. What may be
harmful to one can be neutral or beneficial to another. That is a category difference,
not a question of likely degree of harm.

5.9 A tweet is not a projecting nail to be hammered back into place, to the benefit of all
who may be at risk of tripping over it. Removing a perceived speech risk for some
people also removes benefits to others. Treating lawful20 speech as if it were a tripping
hazard is wrong in principle and highly problematic in practice. It verges on equating
speech with violence.

5.10 A veto for the most easily offended? The White Paper is an invitation to treat the
most easily offended hypothetical reader as the norm, overriding the interests of all
those who would benefit from reading the same material, would read it with
equanimity, or who even if shocked, distressed, offended, disturbed or upset21 would
not suffer any kind of objectively determinable harm, or could benefit from the
experience.

5.11 A reasonable person of ordinary sensibilities? This kind of problem is
sometimes addressed by means of a legal fiction such as “the reasonable person of
ordinary sensibilities”22. This hypothetical character is intended to introduce a degree
of objectivity to the evaluation.

5.12 But it does not of itself solve the problem, since the bare character has still to be clothed
in the attributes of reasonableness and ordinary sensibility. What are those attributes?
To take an obvious example, does this notional person embrace controversy and
difficult topics, or do they demand safe spaces and trigger warnings?

5.13 Someone would have to decide this question, but on what basis? When a court decides
on the attributes, its touchstone is justice:

"The spokesman of the fair and reasonable man, who represents after all no
more than the anthropomorphic conception of justice, is and must be the court
itself."23

19 This leaves open the question of whether ‘harm’ is an appropriate characterisation of being shocked,
offended, disturbed, distressed or upset.
20 Even when speech spills over into potential unlawfulness that is not necessarily a cut and dried issue, given the numerous checks and balances that tend to be incorporated in legislation governing
speech, public interest defences and so on.
21 Or having encountered inaccurate information that may "confus[e] our understanding of what is happening in the wider world" (White Paper para 7.25).


22 This character has assisted in the assessment of whether someone has a reasonable expectation of privacy. See Campbell v MGN Ltd [2004] UKHL 22 at [99]. The use of such devices as legal fictions
was memorably described by Lord Reed in Healthcare at Home Ltd v The Common Services Agency
[2014] UKSC 49.
23 Per Lord Radcliffe in Davis Contractors Ltd v Fareham Urban District Council [1956] AC 696, 728.

5.14 Would a regulator, buffeted by the winds generated by government, pressure groups,
industry, Parliamentarians, the press and the rest24, be a reliable spokesperson for the
reasonable person of ordinary sensibilities?

5.15 Sensibility threshold Then there is the question of the threshold that would trigger
the ordinary sensibilities of this hypothetical reasonable person. Is it distress, severe
distress, upset, offence, gross offence, outrage, psychological harm or something else?

5.16 If risk of medically recognised psychological harm were the threshold, then much of
the effect on readers described in the White Paper would be out of scope25. It would
presumably not be contended, for instance, that for the reasonable person of ordinary
sensibilities “confusing our understanding of what is happening in the wider world”
(White Paper para 7.25) carries a risk of medically recognised psychological harm.

5.17 But if the threshold were lowered to accommodate such a broad conception of harm
(not to mention societal harms with no identifiable victim), then the interpolation of a
reasonable person of ordinary sensibilities would have little or no effect. The lower the
threshold, the greater the potential impact on the legitimate freedom of expression
interests26 of people at large.

5.18 Nor can the interpolation of a hypothetical character alter the fact that whatever the
threshold, the freedom of expression rights of those who are in fact more robust than
the hypothetical reader, or are more willing to take the risk of having their sensibilities
disturbed, would be overridden.

5.19 Embedded in the White Paper's scheme The prospect of embedding the standard
of the most easily offended reader is baked into the very core of the White Paper’s
proposed scheme, as a result of the inclusion of unbounded subjective harms. It is not
something that can be cured by a ‘due regard to freedom of expression’ clause, or by
recitations of necessity and proportionality, or by guaranteeing the independence of
the regulator, or by having Parliament approve Codes of Practice, or even by a
hypothetical objective reader. It suggests that a root and branch rethink of the
proposal is required.

6. MANIFEST ILLEGALITY

6.1 The application of a duty of care to illegal material has received less attention than it
might otherwise have done, due to the focus of criticism on the problematic category
of legal but harmful content.

6.2 However, a duty of care in relation to illegality brings with it its own issues. It would
exacerbate the problems in determining illegality presented by the existing notice and
takedown regime.

24 It is notable that the White Paper has already started down the road of setting out the government's
own expectations of companies in fulfilling the duty of care. It "expect[s] the regulator to reflect these
in future codes of practice." (p.64)
25 It might be argued that the threshold should be lower on grounds that the difference between
psychological damage and distress is a matter of degree. Lowering the threshold for whatever reason
would reduce the impact of interpolating a hypothetical person of ordinary sensibilities.
26 Freedom of expression rights include the right to receive information as well as to impart it.

6.3 Under the eCommerce Directive regime27 a hosting intermediary is at risk of losing the
liability protection of the Directive once it has relevant knowledge of illegality. If it does
not remove the material expeditiously upon gaining that knowledge it is then a
question of the substantive law in question – defamation, copyright, obscenity,
harassment and so on – whether liability accrues.

6.4 Illegality not apparent In some kinds of case the illegality will be manifest. For most
categories it will not be, for any number of reasons. The alleged illegality may be
debatable as a matter of law. It may depend on context, including factual matters
outside the knowledge of the intermediary. The relevant facts may be disputed. There
may be available defences, including perhaps public interest. Illegality may depend on
the intention or knowledge of one of the parties. And so it goes on.

6.5 One example may serve as an illustration. The White Paper suggests that the exclusion
of harms suffered by organisations will exclude "most cases of intellectual property
violation". Infringement of copyright owned by individuals would still be in scope.

6.6 Parenthetically one might ask, what does copyright have to do with online personal
safety? Why is intellectual property in scope at all? That aside, consider the kinds of
material uploaded online for which the copyright owners would normally be
individuals.

6.7 Example: does this selfie infringe copyright? Many millions, probably billions
and perhaps trillions of images are uploaded and shared by individuals, from selfies to
holiday pictures and everything in between.

6.8 Any uploading by someone other than the individual who took the photograph (who
under English law would normally be the copyright owner) is unlikely to be the subject
of a formal, express licence. It could be the subject of some informal or implied licence,
or indeed it might have been uploaded without permission. The circumstances
determining that would be information unavailable to the intermediary. They could be
factually complicated28. Nothing on the face of the image would indicate illegality (as
a matter of copyright law) or not.

6.9 Under the eCommerce Directive the platform would be shielded from potential
copyright liability unless and until it acquired knowledge of infringement and then
failed to take the image down expeditiously.

6.10 The proposed duty of care would potentially place an obligation on the platform to do
more. But what more, and why? Would it be required to obtain evidence of copyright
ownership or permission from every user uploading a photo to the platform? Might it
have to do nothing different from now? If there were no Code of Practice covering the
situation, would a duty of care still apply?29 If so, what would it require?

27 The frequently misunderstood operation of the hosting shield under the Directive is described by
the author here: www.cyberleagle.com/2018/04/the-electronic-commerce-directive.html.
28 See e.g. Tumber v Independent Television News Ltd (ITN) & Anor [2017] EWHC 3093 (IPEC).

29 This is a point of more general significance for the White Paper proposals. It appears from the
general 'duty of care' description at 7.4 and 7.5 that it would apply even if there were no relevant Code
of Practice.
6.11 As discussed above (Section 3), the duty of care would effectively create a parallel legal
regime, fashioned at the discretion of the regulator, to the existing carefully structured
law. These kinds of questions, the regulator’s answers to which would for practical
purposes change the law, should be addressed by Parliament in fully debated primary
legislation or not at all. They are not suitable to be delegated to a regulator.

6.12 Prior restraint Freedom of expression has long been protected in the UK by the
presumption against prior restraint. That applies to removal prior to determination on
the merits as well as to pre-publication censorship30. The existing hosting regime is
already open to criticism as incentivising over-cautious removal of material without a
determination of legality on the merits. A positive obligation to remove material,
backed up by the threat of fines, could only add to that.

6.13 Manifest illegality If there were to be any kind of positive duty to remove illegal
material of which an intermediary becomes aware, it is unclear why that should go
beyond material which is manifestly illegal on the face of it. If a duty were to go beyond
that, consideration should be given to restricting it to specific offences that either
impinge on personal safety (properly so called) or, for sound reasons, are regarded as
sufficiently serious to warrant a separate positive duty which has the potential to
contravene the presumption against prior restraint.

7. CERTAINTY OF LAW

7.1 Lord Justice Hoffmann (as he then was), in R v Central Independent Television plc31,
stated in a well known passage that:

“… publication may cause needless pain, distress and damage to individuals or
harm to other aspects of the public interest. But a freedom which is restricted
to what judges think to be responsible or in the public interest is no freedom.
Freedom means the right to publish things which government and judges,
however well motivated, think should not be published. It means the right to
say things which 'right-thinking people' regard as dangerous or irresponsible.
This freedom is subject only to clearly defined exceptions laid down by
common law or statute.”

7.2 Those sentiments apply to regulation by regulator as much as to the views of
government and the decisions of judges.

7.3 Clearly defined exceptions The most important point for present purposes is
contained in the last sentence. It recognises that freedom of speech may be abridged,
but only by “clearly defined exceptions”. A regulatory scheme founded on undefined
‘harm’ is the opposite of a clearly defined exception.

7.4 Impermissible vagueness It would not be acceptable to enact one all-purpose
statute proclaiming “Thou shalt do no harm”, then leave the courts to pour content into
the empty vessel. That would not comply with the rule of law. Law must be accessible
and sufficiently precise to enable someone, with reasonable certainty, to know in
advance whether their proposed conduct would be lawful. "Do no harm" does not
achieve that.

30 ECtHR, Yildirim v Turkey [52].
31 R v Central Independent Television plc [1994] 3 All ER 641.

7.5 That principle is as relevant to an internet user liable to have a post or a tweet
suppressed by an intermediary co-opted by a state regulator as it is for someone liable
to be sued or prosecuted.

7.6 Legal certainty is a familiar concept from ECHR law. It is also part of English domestic
law. In the context of criminal law the objection to vagueness was spelt out by the
House of Lords in R v Rimmington32, citing the US case of Grayned:

"Vagueness offends several important values … A vague law impermissibly
delegates basic policy matters to policemen, judges and juries for resolution
on an ad hoc and subjective basis, with the attendant dangers of arbitrary and
discriminatory application."

7.7 Whilst most often applied to criminal liability, the objection to vagueness is more
fundamental than that. It is a constitutional principle that applies to the law generally.
Lord Diplock referred to it in a 1975 civil case (Black-Clawson):

"The acceptance of the rule of law as a constitutional principle requires that a
citizen, before committing himself to any course of action, should be able to
know in advance what are the legal consequences that will flow from it."

7.8 The foreseeability gap may perhaps be filled by the regulator, albeit that is open to the
‘impermissible delegation’ criticism referred to in Rimmington. Schemes of this type
have of course been considered appropriate for broadcast regulation for many years.
But individual speech is not broadcast.

7.9 Individual speech is not broadcast The White Paper, although framed as
regulation of platforms, concerns individual speech. The platforms would act as the co-
opted proxies of the state in regulating the speech of users. Certainty is a particular
concern with a law that has consequences for individuals' speech. In the context of an
online duty of care, the rule of law requires that users must be able to know with
reasonable certainty in advance what speech is liable to be the subject of preventive or
mitigating action by a platform operator subject to the duty of care.

7.10 Given the particular importance of freedom of expression, it is (to say the least)
questionable whether a discretionary regulation scheme that would effectively
supplant the rest of the statute book as it applies to individual speech is an acceptable
way of proceeding.

7.11 A Twitter user is not a guest on daytime TV A Facebook, Twitter or Mumsnet
user is not an invited audience member on a daytime TV show, but someone exercising
their freedom to speak to the world within clearly defined boundaries set by the law.

7.12 A policy initiative to address behaviour online should take that principle as its starting
point and respect and work within it. The White Paper does not do so.

32 R v Rimmington [2005] UKHL 63.


8. REGULATORY MODELS

8.1 It cannot be assumed that an acceptable mode of regulation for broadcast is
appropriate for individual speech. The norm in the offline world is that individual
speech should be governed by general laws, not by a discretionary regulator.

8.2 Broadcast regulation The table at Box 18 of the White Paper lists a number of
regulatory approaches (drawn from an Ofcom paper) and characterises them as
resulting in ‘significant gaps in consumer protection’. This, however, assumes the
conclusion: namely that a discretionary regulatory model is appropriate for individual
online speech and therefore that apparent online gaps in that model ought to be filled
in the same way.

8.3 Does it matter that the same content may be regulated (by regulator) in different ways
on different services? The table does no more than illustrate variations in broadcast
regulation applied to a variety of broadcast-type services, with the addition of the
highly controversial extension of the AVMS Directive to video-sharing platforms in
2020. An assumption that because the picture is moving it demands broadcast-style
‘regulation by regulator’ is no more valid today than it was in 199833, 200134, 200735 or
201236.

8.4 General law for individual speech The most significant aspect of the table is the
arrow that states ‘general law applies’. However, the table omits to note that the general
law is the regime that applies to all individual ‘user-generated’ speech online and
offline, not only that on video-sharing platforms.

8.5 The general law alone, not exceptional regimes for broadcast, is the appropriate model
for individual speech. When we speak online that is no less individual speech than
offline.

8.6 In praise of fragmentation Moving beyond broadcast regulation, the White Paper
appears to favour a generalised ‘Do No Harm’ approach to law (see para 7.4 above). It
is difficult to attribute any other meaning to the complaint at paragraph 2.5 that the
separate items of legislation dealing with data protection, election law, pornography,
equality, gambling and consumer protection create a ‘fragmented regulatory
environment’.

8.7 This is a most curious statement. We have separate laws because each subject matter
is different and demands different approaches, each of which has to be spelt out with
sufficient precision to satisfy the rule of law test of certainty. Fragmentation of law is a
virtue, not a vice.

33 G.J.H. Smith “Networks without Broadcast Restraints”, April 1998, Financial Times
(www.cyberleagle.com/2012/01/regulatory-convergence-same-old.html)
34 Commons Select Committee on Culture, Media and Sport Second Report 7 March 2001, Ch 6: “The
limits of Internet regulation”
https://publications.parliament.uk/pa/cm200001/cmselect/cmcumeds/161/16108.htm.
35 G.J.H. Smith “Convergence is not an excuse to regulate the Internet” 22 October 2007 The Times
Online (www.thetimes.co.uk/article/convergence-is-not-an-excuse-to-regulate-the-internet-ps26khb5vzg)
36 G.J.H. Smith “Regulatory convergence - same old question, same right answer” 7 January 2012
www.cyberleagle.com/2012/01/regulatory-convergence-same-old.html.

8.8 Broadcast standards If it is the government's intention that no-one on Twitter,
Facebook or anywhere else online should be able to say anything that would not be
allowed on daytime TV, that striking proposition should be put fairly and squarely
before Parliament for full debate and decision.

8.9 If that is not the government's intention, then it cannot be right to enact a scheme that
would give a regulator discretion to adopt that standard under the banner of ‘harm’.

8.10 Duty to respect freedom of expression That dilemma is not resolved by requiring
a regulator to have due regard to freedom of expression. Many different restrictions
on speech are compatible with the minimum level of protection set by human rights
legislation. That does not make them desirable or appropriate.

8.11 Nor would a positive duty to promote freedom of expression offer a sufficient
guarantee, given its potential to be deployed so as to favour particular kinds of speech.
It is an invitation to destroy the village in order to save it.

9. AN ALTERNATIVE APPROACH

9.1 The government should seek an alternative approach consistent with the UK’s
traditional commitment to the rule of law and freedom of speech. Such an approach
could include at least the following:

Duty of care

9.2 Follow the principles that have been developed for existing duties of care. Thus:

9.2.1 Limit any duty of care to personal safety properly so called. Exclude
subjective harms.

9.2.2 Limit any duty of care to situations in which specific activities of an
intermediary create or increase a specific risk of danger to someone’s
personal safety as a result of what they encounter online.

9.3 Examine, for each of the other harms identified in the White Paper:

9.3.1 Whether any of them include currently lawful activity that should be made
unlawful (whether civil or criminal).

9.3.2 Whether, for those harms, legislation can be crafted that clearly delineates
the boundary of legality. (If that proves not to be possible, it would be
problematic to delegate an impossible task to a regulator. The unavoidable
result would be arbitrary determination and enforcement.)

9.3.3 For such proposed legislation, along with the existing subject-specific
legislation identified in the White Paper, consider all alternatives for
improving access to enforcement. There should be no presumption that co-
opting online intermediaries as proxies for the state is the appropriate way
forward.

Access to justice and enforcement

9.4 Someone who has suffered unlawful behaviour online has three possible places to go:
the police (if the behaviour is criminal), the civil courts or the platform. Neither the
police nor the present civil courts have the ability or resources to operate at the scale
required to address unlawful online behaviour.

9.5 Views may differ on whether it is appropriate to require platforms to undertake the
task of prosecutor, defender, judge, jury, enforcer and (as some would advocate)
detective. But it can hardly be suggested that they have the social legitimacy to perform
such a role, even putting aside the heavily contested question of whether they could
ever be competent to perform it.

9.6 There is a gap: access to justice and enforcement. To fill the gap, legislate so as to let a
thousand (or as many as necessary) small claims online speech tribunals blossom. Let
them hear complaints, using proportionately abbreviated written procedures while
adhering to the familiar adversarial model. Let them make determinations as to
unlawfulness, with power to award a limited amount of compensation. Let them also
have power, if the circumstances should warrant it, to order that the material be
removed, if need be by the platform.

9.7 This would put the emphasis back where it belongs: on the perpetrators, on access to
justice, on determination of complaints according to the law and on prior due process.

Anti-social online behaviour

9.8 As with the football hooligan or the persistent disturber of a town centre’s peace, there
may be a case for exceptional remedies against the most egregious perpetrators of
online misconduct, including that which does not necessarily fall neatly into
established categories of unlawfulness.

9.9 Those remedies, albeit somewhat controversial, already exist and can be used against
online disturbers of the peace. They map readily on to trolling, cyberbullying,
harassment and the like.

9.10 The Anti-Social Behaviour, Crime and Policing Act 2014 contains a procedure for some
authorities to obtain a civil anti-social behaviour injunction (ASBI) against someone
who has engaged or threatens to engage in anti-social behaviour, meaning “conduct
that has caused, or is likely to cause, harassment, alarm or distress to any person”. That
succinctly describes much of the kind of online behaviour complained of in the White
Paper.

9.11 Nothing in the legislation restricts an ASBI to offline activities. Indeed over 10 years
ago The Daily Telegraph reported37 an 'internet ASBO' made under predecessor
legislation against a 17-year-old who had been posting material on the social media
platform Bebo. The order banned him from publishing material that was threatening
or abusive and promoted criminal activity.

37 www.telegraph.co.uk/technology/3355432/Teenager-handed-internet-Asbo.html.
9.12 ASBIs raise difficult questions of how they should be framed and of proportionality.
Some may have concerns about the broad terms in which anti-social behaviour is
defined in the legislation. Nevertheless the courts to which applications are made have
the societal and institutional legitimacy, as well as the experience and capability, to
weigh such factors. Unlike the deep problems with the White Paper, such issues can be
addressed.

9.13 Somewhat surprisingly, the Home Office Statutory Guidance38 on the use of the 2014
Act powers (revised in December 2017) makes no mention of their use in relation to
online behaviour. That should surely be revisited as a matter of urgency.

9.14 The government could also explore the possibility of extending the ability to apply for
an ASBI beyond the official authorities, for instance to some voluntary organisations.
That might be a more useful and practical step than granting them the status of 'super-
complainant' to the proposed regulator under the White Paper proposals.

ERRATA corrected 29 June 2019:

5.4: “Woods and Perrin” corrected to “Woods, Perrin and Walsh”.

5.14: “Parliament” corrected to “Parliamentarians”.

Further reference added 1 July 2019:

1.11 Assumption of responsibility: Al-Najar and Others v Cumberland Hotel (London) Ltd
[2019] EWHC 1593 (QB).

38 https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/679712/2017-12-13_ASB_Revised_Statutory_Guidance_V2.1_Final.pdf.
ANNEX

Commentaries incorporated by reference

A Lord Chamberlain for the internet? Thanks, but no thanks (7 October 2018)

Take care with that social media duty of care (19 October 2018)

A Ten Point Rule of Law Test for a Social Media Duty of Care (16 March 2019)

Users Behaving Badly – the Online Harms White Paper (18 April 2019)

The Rule of Law and the Online Harms White Paper (5 May 2019)

Whatever happened to offline-online equivalence? (21 June 2019)
