
LADY SHRI RAM COLLEGE FOR WOMEN, DELHI

UNIVERSITY OF DELHI

Academic Year: 2021-22

RESEARCH QUESTION:
What should the Indian Government do to balance free speech and hate speech on
digital platforms by learning from the impactful law of Germany?

Course: BA Hons Political Science.


Year: II year.
Semester: IV semester.
Paper name: Global Politics

Name of Students:
1. Ananya- 895.
2. Aarzoo Godara- 476.
3. Pranjali Patel- 609.
4. Siddhi Anand Lodha- 983.
Date of Submission: 24th March 2022.
Submitted to: Ms. Bhargavi Charan.
Abstract
Dr. Fernand de Varennes presented his report on the outcomes of the 13th Forum on Minority Issues
to the latest session of the Human Rights Council in Geneva, Switzerland. In his report, he
examined how online hate speech had increased against minority groups, noting that in
many countries more than three-quarters of the victims of online hate speech were members of
minority groups.1 He warned that this digital hate can prepare the ground for the dehumanization
and scapegoating of minorities, and for normalizing hate.2 Given this issue, the paper is roughly
divided into four parts. First, it seeks to identify online hate speech in India and how it is harmful
to society at large. Second, it examines Section 66A of the Information Technology Act, 2000,
which was used to penalize digital hate until 2015, when the Supreme Court struck down the
provision for its widespread misuse and its tendency to violate the right to freedom of speech and
expression. This part also surveys other existing laws which can deal with this issue to
some extent. Third, the paper takes various lessons from the NetzDG law of Germany which is
one of the most stringent policies in the world for online hate speech. Fourth, the paper gives
specific recommendations to deal with this issue in India. These include specific definitions of
technical and vague terms, induction of a legislative framework, a four-step model to be followed
by social media platforms, and the role of end-user in curbing online hate speech.

Keywords: Minority issues, Human Rights Council, online hate speech, digital hate,
dehumanization, Section 66A of Information Technology Act, NetzDG law of Germany,
legislative framework, the role of end-user.

1. United Nations Human Rights. 2021. Report: Online hate increasing against minorities, says expert.
2. Ibid.
INTRODUCTION
Hate speech includes several types of expressions that advocate, incite, promote or justify hatred,
violence, and discrimination against an individual or a group of people for a variety of reasons.
According to a recommendation by the Council of Europe in 1997, hate speech includes “all
forms of expression which spread, incite, promote or justify racial hatred, xenophobia,
anti-Semitism or other forms of hatred based on intolerance, including intolerance
expressed by aggressive nationalism and ethnocentrism, discrimination and hostility
towards minorities, migrants and people of immigrant origin”.3 It poses grave dangers for the
cohesion of a democratic society, the protection of human rights, and the rule of law. If it is left
unaddressed, it can lead to acts of violence and conflict on an enormous scale. In this sense, hate
speech is an extreme form of intolerance that contributes to hate crime.

The images above reflect instances of intense hate speech on Twitter, known as digital hate or
online hate speech. In general, this concept is tough to define because it lacks unique and

3. Ring, Caitlin Elizabeth. 2013. "Hate Speech in Social Media: An Exploration of the Problem and Its Proposed Solutions". Ph.D., University of Colorado.
discriminative features. Among these difficulties are subtleties in language, differing definitions
of what constitutes hate speech, and limitations of data availability for identification and
prevention. In general, online hate speech refers to the “use of aggressive, violent, or offensive
language, targeting a specific group of people sharing common property, whether this
property is their gender, their ethnic group or race, their beliefs, and religion, or their
political preferences.”4

While encouraging freedom of expression, social networking sites also imply freedom to hate to
the extent that individuals exercise their right to voice their opinions while actively silencing
others. The identification of the potential targets of hateful or antagonistic speech is key to
distinguishing online hate from arguments that represent political viewpoints protected by
freedom of expression rights.

Online hate is not a harmless matter. In fact, in recent times, combating it has become an arduous
task for governments around the world. Online extremist narratives have been linked
to abhorrent real-world events, including hate crimes, mass shootings such as the 2019 attack in
Christchurch, stabbings and bombings; recruitment of extremists, including entrapment and
sex-trafficking of girls; threats against public figures, including the 2019 verbal attack against an
anti-Brexit politician; and renewed anti-western hate in the 2019 post-ISIS landscape associated
with support for Osama Bin Laden’s son and Al Qaeda. Social media platforms seem to be losing
the battle against online hate.

However, the most widely used justification for protecting harmful or degrading speech is that
censorship of any sort limits the search for truth and narrows the marketplace of ideas. Truth,
many scholars believe, is made stronger and clearer when it has the opportunity to collide with
error.5 Therefore, it is best not to silence any opinion before it has had an opportunity to be heard.

4. "Hate Speech and Violence". 2022. European Commission Against Racism and Intolerance (ECRI).
5. Ring, "Hate Speech in Social Media".
LAWS GOVERNING DIGITAL HATE IN INDIA

In India, the term ‘hate speech law’ is used by lawyers and the media to refer to several laws
that penalize ‘promoting enmity’ between classes of people6 and the outraging of religious sentiments7.
While hate speech laws have evolved gradually over time through judicial pronouncements, the
use of social and digital media to spread hate has brought into focus specific legal and
governance issues.

In 2008, a noteworthy step was taken by incorporating Section 66A into the Information
Technology Act, 2000 to penalize digital hate. The provision was made applicable to “any
information sent using a computer resource or communication device that is grossly offensive or
menacing in character, or any information that the sender knows to be false but sends anyway,
with the intent of causing annoyance, inconvenience, danger, obstruction, insult, injury, criminal
intimidation, enmity, hatred or ill will”8.

In 2015, section 66A was struck down by the Indian Supreme Court in Shreya Singhal v. Union
of India9 as ‘violative of the right to freedom of speech and expression’. The Court first declared
that freedom of speech available online deserves the same level of constitutional protection as
the freedom of speech available offline. It then analyzed Section 66A in light of the fundamental
right to free speech and expression guaranteed under Article 19(1)(a) of the Constitution. The
Court noted that Section 66A arbitrarily and disproportionately restricted the right to freedom of
speech and expression. The Court ruled that Section 66A did not constitute a reasonable
restriction under Article 19(2) and hence was violative of Article 19(1)(a) of the Constitution.10

6. Section 153A of the Indian Penal Code.
7. Section 295A of the Indian Penal Code.
8. Section 66A, Information Technology Act, 2000.
9. Shreya Singhal v. Union of India, AIR 2015 SC 1523.
10. Choudhary, Amit, and Dhananjay Mahapatra. 2015. "Supreme Court Strikes Down Section 66A of IT Act Which Allowed Arrests for Objectionable Content Online". The Times of India.
Even while Section 66A was operative, the police typically invoked it along with Indian Penal Code
(IPC) provisions such as Sections 153A and 295A. Since Section 66A was struck down, no
provision under the IT Act aims to curtail either online or offline ‘hate speech’. The most
commonly employed Sections 153A and 295A of the IPC are also inadequate to deal with the
barrage of digital hate. The Parliamentary Standing Committee has recommended amending the
IT Act to incorporate the essence of Section 153A. Its report also suggests stricter penalties than
those prescribed under Section 153A, given the faster and wider spread of information in online
spaces. It advocates, for example, criminalizing “innocent forwards” with the same strictness
applied to the creator of the content.

Besides, Section 69A of the Information Technology Act allows the State to direct any agency
of the Government or any intermediary11 to block access to any information on any computer
resource. Intermediaries are also under an obligation to adhere to the government’s directives to
block or filter access to any content available online.

Section 79 of the IT Act contains a ‘safe harbor’ provision absolving the intermediaries of their
liability for third party content. The intermediaries are protected only if they act as platforms
and not the speakers, and if they do not ‘initiate, select the receiver or modify’ the content being
transmitted.12

Also, Section 144 of CrPC empowers the district magistrate to impose internet shutdowns in
their respective districts. In Madhu Limaye v. Ved Murti13, the constitutionality of Section 144
was challenged before the Supreme Court. The Court, upholding the section, held that the
possibility of misuse of the provision was not sufficient ground to strike it down. The Supreme
Court has noted that the objective of Section 144 is to address urgent situations by avoiding damaging

11. Defined as any person who "receives, stores, or transmits" an electronic message on behalf of another person.
12. The procedure to be followed to limit access to online content is contained in the Information Technology (Procedure and Safeguards for Blocking of Access of Information by Public) Rules, 2009.
13. Madhu Limaye v. Ved Murti, 1971 SCR (1) 145.
occurrences.14 The Supreme Court has made it clear that the threat anticipated must be real and
not imaginary or based on mere likelihood.

Thus, in India, digital hate is governed by a combination of colonial-era penal laws and laws
specifically enacted to regulate online communication. All of these are subject to the ‘reasonable
restrictions’ permitted by Article 19(2) on the freedom of speech and expression guaranteed by
Article 19(1)(a) of the Indian Constitution.

DIGITAL HATE LAWS IN GERMANY: A CASE STUDY

Countries across the world have started acknowledging that hate speech is affecting the
functioning of society. Germany has one of the most stringent policies in this regard:
the NetzDG, or Network Enforcement Act, whose
objective is to combat online hate speech and fake news in social networks. This act ensures
tough prohibitions against hate speech, including the propagation of pro-Nazi ideology.15 It is
colloquially known as a “hate speech law” which is arguably the most ambitious attempt by a
Western state to hold social media platforms responsible for combating online speech deemed
illegal under the domestic law.16

The Network Enforcement Act came into being in 2017. It applies only to social media networks
that have 2 million or more registered users in Germany. It requires these platforms to provide a
mechanism for the users to submit complaints about illegal content. Once they receive a
complaint, the platforms must investigate whether the content is illegal. If the content is found to
be “manifestly illegal”, the platforms are obliged to remove the content within 24 hours of
receiving a user complaint.17 If the illegality is unclear, then the social media network has 7 days

14. Privacylibrary.ccgnlud.org. 2012. In Re: Ramlila Maidan Incident.
15. Gogoi, Gaurav. 2020. "It Is Time to Regulate Hate Speech on Social Media | Opinion". Hindustan Times.
16. Tworek, H. and Leerssen, P. 2019. An Analysis of Germany’s NetzDG Law.
17. Germany: Network Enforcement Act Amended to Better Fight Online Hate Speech. 2021.
to investigate and delete it. This law is quite stringent because the social media network is fined
up to 50 million euros for noncompliance.18
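The takedown timeline described above can be sketched as a small decision function. This is an illustrative sketch only: the constants mirror the figures cited in the text (2 million registered users, 24 hours, 7 days, fines up to 50 million euros), while the function and variable names are hypothetical.

```python
from datetime import timedelta

# Thresholds taken from the NetzDG rules summarized above.
# Names and structure are illustrative, not part of any real system.
NETZDG_MIN_USERS = 2_000_000   # the Act applies only above this user count
MAX_FINE_EUR = 50_000_000      # ceiling for noncompliance fines

def removal_deadline(platform_users: int, manifestly_illegal: bool):
    """Return the takedown window for a user complaint,
    or None if the platform is too small to fall under the Act."""
    if platform_users < NETZDG_MIN_USERS:
        return None                    # NetzDG does not apply
    if manifestly_illegal:
        return timedelta(hours=24)     # remove within 24 hours of the complaint
    return timedelta(days=7)           # unclear cases: 7 days to investigate
```

As the sketch makes visible, the entire burden of the initial legality assessment falls on the platform, which is precisely the design choice the law's critics object to.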

The NetzDG did not impose any new duties on social media platforms; it only imposed high
fines for non-compliance with existing legal obligations. Its purpose was to enforce, in the
online space, 22 statutes that already existed in the German criminal code and to
hold large social media platforms responsible for their enforcement.19

The NetzDG also provides for the recognition of “regulated self-regulatory agencies” by the
Ministry of Justice. These agencies are financed by Social Media companies. The role of these
agencies is to determine whether a given piece of content is in violation of the law and if it
should be removed from the platform. The recognition of these agencies by the Ministry of
Justice is contingent on conditions such as the independence of the self-regulatory agency,
expertise of the agency staff, and the agency’s capacity to decide within 7 days. In theory, this
mechanism can turn out to be a step towards establishing some sort of independent
self-regulation for social media under the statute.20

The law has been highly controversial and has been criticized as unconstitutional, particularly
concerning free speech. Human Rights Watch has described the current regulatory
framework as ‘turning private companies into overzealous censors to avoid steep fines and
leaving the users with no judicial oversight or right to appeal.’21 Critics also point out that the
definitions of ‘unlawful content’ remain highly debated in cases of blasphemy and hate
speech, and that the concepts of ‘defamation’ and ‘insult’ are vaguely defined. Another
counterproductive outcome is the ‘Streisand Effect’, whereby prominent cases
of deletion attract more publicity. A further concern is that any extremist or banned user can
easily migrate to smaller platforms that are not liable under the NetzDG.22

18. Ibid.
19. Tworek, H. and Leerssen, P. 2019. An Analysis of Germany’s NetzDG Law.
20. Self-regulation and ‘hate speech’ on social media platforms. 2018. Article 19.
21. Archit Lohani, “Countering Disinformation and Hate Speech Online: Regulation and User Behavioural Change,” ORF Occasional Paper No. 296, January 2021, Observer Research Foundation.
22. Ibid.
ARRIVING AT A REGULATORY MECHANISM IN INDIA: LESSONS
FROM THE NETZDG LAW OF GERMANY

Arriving at a regulatory mechanism is unlikely to be easy, a fact that is evident in the ongoing
global debate as different national governments cite their legal frameworks to set precedents and
seek control. But as hard as it may be, any regulatory framework that evolves will need
to ensure that it not only protects the right to free speech in a democracy but also, and equally
importantly, creates safeguards against the social media amplification of hate speech that can
lead to offline, real-world violence.

We can take the following lessons from the NetzDG Act applicable in Germany:

(1) There should be a specific law in place which aims to combat online hate speech.
(2) It should provide a clear definition of ‘hate speech’ and distinguish it from other related
terms like ‘defamation’ and ‘insult’.
(3) It should outline what comes under ‘unlawful content’ on digital platforms. The law
should further distinguish the content into ‘manifestly illegal’ and ‘illegal’ content, hence
leaving no room for confusion and vagueness.
(4) The law should provide the actions which will be taken against illegal content online.
(5) The applicability of the law should be laid out concerning social media platforms and
their number of users.
(6) Social media platforms should be obliged to remove illegal content within a
specific time span, failing which they must pay a fine.
(7) If it is unclear whether the content is illegal, then the social media platform should
be given a longer period to take action.
(8) The law should also provide for the establishment of ‘self-regulatory agencies’ which
would determine whether the given piece of content is in violation of the law and whether
it should be removed from the platform.
(9) These self-regulatory agencies should be recognized by the government to give them
legitimacy and independence.
(10) The staff recruited in these agencies should be beyond political influence and
should have expertise in the law and the technicalities of the digital world, so as to
determine whether content should be removed.

If the Indian government is strict about the rule of law, then social media platforms will have to
abide by the law formulated to curb ‘digital hate’. This will also deter users from
engaging in hate speech, since the platforms would formulate ‘terms and
conditions’ to prevent further cases. Such a law would not curb ‘free speech’, because
it would specify the distinction between the two. Enacting the law is the first step; its proper
implementation is the most important step in curbing digital hate.

SPECIFIC RECOMMENDATIONS

In any strategic intervention against hate speech, the tech platforms are bound to play the biggest
role. There should be continuous collaborative engagements within the industry, along with state
and non-state actors.23 While the creation of charters or codes that define each stakeholder’s
duties and rights will be a lengthy process, a pre-emptive plan cannot be delayed further.

First, the government must move faster in defining and paving the way for consensus on a legal
framework against problematic social media content.

• Definitions:

The Indian challenge of building consensus to counter ‘hate speech’ extends to its understanding
in the real world.24,25 Hate speech remains undefined under any domestic legal mandate,
including the IT Act. By building consensus on key elements, the legislature can assist the

23. Megha Mandavia, “Social Media to Join Hands to Fight Fake News, Hate Speech,” The Economic Times, February 19, 2020.
24. The Bureau of Police Research and Development recently published a manual for investigating agencies on cyber harassment cases that defined hate speech as language that denigrates, insults, threatens or targets an individual based on their identity and other traits (such as sexual orientation, disability or religion).
25. In the 267th Report of the Law Commission of India, hate speech is described as incitement to hatred primarily against a group of persons defined in terms of race, ethnicity, gender, sexual orientation, religious belief and the like.
platforms in initiating a countermovement by consistent interpretation and implementation of the
law. However, the process of defining and introducing penal provisions must avoid ambiguous
terminologies. Potential overcriminalization must be prevented. The legislature can identify and
agree on key elements to facilitate consensus building and build safety nets around ethical codes.

• Legislative Framework

Limited but necessary legislative support could help generate consensus with a much more
effective regulatory mechanism. Countries like France and Germany have pre-defined the limited
scope of government role to avoid any arbitrary intervention and have even appointed
independent regulators or independent judicial members to dispense moderation objectives. Hate
speech codes must be established, and authorities must explore the scope of an independent
regulator. The regulator’s objectives should be to assist the application of government policies
and to serve as a forum for redressal against platforms for arbitrary takedowns.

• Social Media Platforms

To effectively safeguard against and mitigate the impacts of ill-speech, countries should have
better fact-checkers and authorities committed to responding in the public interest. Real-time
takedowns can have a positive impact, as they build resistance and reduce the ability of hate to
percolate into local, deeper, and different forms. However, taking down problematic content is
not an absolute safeguard, because ‘time’ is not the only determinant once problematic speech is
published online; the omnipresence and negative impact of ill-speech remain an issue. Real-time
takedowns also face the problem of the ‘Streisand Effect’, whereby prominent cases of deletion
attract more publicity.

We suggest a four-step model to be implemented by social media platforms to counter
problematic content online.

Step 1- Identification: Identification of extremist content or hate speech according to the
definitions or elements, after receiving complaints from users regarding illegal content.
While upholding anonymity, the platform must flag the content in a specific manner that
communicates its problematic/unreliable nature to the end-user.
Step 2- Disallow Proliferation: While the content may continue to exist online, it should
not only be flagged; platforms should also disable any further proliferation. Any blatantly
problematic content should be taken down by the platforms within the specified time limit.
In the meantime, the content should not be available on user feeds, and the like, share,
retweet, upvote, and other interaction functions should be disabled for it.

Step 3- Issue interaction warnings: Since platforms employ interaction data, they must
issue warnings to all end-users who have encountered problematic content before it was flagged
or identified. All end-users who shared or promoted such content must be sent personal
notifications. They should be directed to more credible sources.

Similarly, the publishing end-user must be provided with necessary reasons for flagging or taking
down the published content. They, along with other interactors, should be encouraged to abide by
the law before publishing any hate content. This can help in disseminating digital education and
necessary online skills to empower the end-users themselves.

Step 4- Provide a better recourse mechanism: In terms of reporting hateful content,
platforms should be user-friendly, with timely action and response. This would require a wide
expansion of the resources employed at the intermediary’s offices. Recourse against wrongful
takedowns should be formalized, directing end-users to specific mechanisms if their content is
taken down wrongly.
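The four steps above can be sketched as a minimal moderation pipeline. Everything here is hypothetical: the `Post` class, the `moderate` function, and the pluggable `is_hate_speech` classifier are illustrative names, and the sketch only shows the order of operations the model proposes.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    """A hypothetical piece of user content on a platform."""
    author: str
    text: str
    flagged: bool = False
    interactions_enabled: bool = True
    viewers: set = field(default_factory=set)  # users who encountered the post

def moderate(post: Post, is_hate_speech) -> list[str]:
    """Run a complaint through the four steps; returns a log of notifications."""
    notices = []
    # Step 1: identify and flag, preserving complainant anonymity.
    if not is_hate_speech(post.text):
        return notices
    post.flagged = True
    # Step 2: disallow proliferation (no likes/shares/retweets, off user feeds).
    post.interactions_enabled = False
    # Step 3: warn every end-user who encountered the content before flagging,
    # and tell the publishing user why the post was flagged.
    for user in post.viewers:
        notices.append(f"warning sent to {user}: flagged content encountered")
    notices.append(f"reason for flagging sent to {post.author}")
    # Step 4: point the author to a recourse mechanism for wrongful takedowns.
    notices.append(f"appeal link sent to {post.author}")
    return notices
```

The classifier is deliberately left as a parameter: under the paper's own recommendations, what counts as hate speech would come from the statutory definitions, not from the platform's discretion.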

• The end-user

The end-user who is exposed to problematic speech is the most important, yet most disengaged,
stakeholder. As social media platforms aim to reduce their risk exposure, end-users must be made
part of the safeguarding process. Inculcating anti-hate behavior is key to empowering individuals
to discredit and stifle such speech as a first response. Counter-speech means engaging directly
against negative speech using positive expression. In online discourse, multiple end-users
already assist towards “toning down the rhetoric”26 by providing clarifications to dubious claims,

26. Jamie Bartlett and Alex Krasodomski-Jones, “Counter-speech on Facebook,” Demos, September 2016.
directly engaging with facts to counter hateful messages, calling out fake reportage, employing
humor or dissent through memes and caricatures.

There is no countermeasure against blind sharing, but to reduce its impact effectively, it is
necessary to empower individuals with easily accessible mechanisms to halt negative proliferation.
It should be easy to raise a red flag against extremist speech, while imparting an understanding of
the different types of ill-intentioned speech, their potential impact, and their omnipresence.

CONCLUSION

Hate speech targeting gender, race, religion, caste, beliefs, or political preferences has recently
become prevalent on social media platforms. We cannot deny that the proliferation of online hate
speech can spill into the real social setting and cause violence or unrest. To tackle this problem,
IT laws come into the picture. Section 66A of the Information Technology Act, 200027 addressed
this issue to some extent; however, it was misused by political parties to serve their interests, and
as a result the provision was declared unconstitutional. At present, no single law deals with this
issue.

On searching for policy mechanisms in different countries, we find that Germany has
one of the most stringent policies to counter digital hate. Germany has enacted the NetzDG Act
whose aim is to combat hate speech and fake news on social networks. The policy is stringent
because the burden of interpreting and implementing the law lies on the shoulders of social
media platforms who come under the scope of this law. If they fail to do so, they are fined huge
sums of money. The NetzDG law also calls for the establishment of government-recognized
‘Self-Regulatory Agencies’ that can assist social media with identifying illegal content online.
The law is criticized widely, however, India can take some valuable lessons from the NetzDG
law of Germany.

The paper recommends the formulation of a specific law for countering online hate speech. It
asks for clear definitions of ‘hate speech’ and ‘illegal content’ on social media platforms. It also

27. Section 66A, Information Technology Act, 2000.
suggests the establishment of government-recognized ‘self-regulatory mechanisms’ which
would determine the illegality of content and assist social media platforms in taking down
illegal content within the specified time periods.

BIBLIOGRAPHY

1. "Hate Speech and Violence". 2022. European Commission Against Racism and Intolerance (ECRI). Accessed March 18. https://www.coe.int/en/web/european-commission-against-racism-and-intolerance/hate-speech-and-violence.
2. Ring, Caitlin Elizabeth. 2013. "Hate Speech in Social Media: An Exploration of The
Problem and Its Proposed Solutions". Ph. D., University of Colorado.
3. Choudhary, Amit, and Dhananjay Mahapatra. 2015. "Supreme Court Strikes Down Section 66A of IT Act Which Allowed Arrests for Objectionable Content Online | India News - Times of India". The Times of India. http://timesofindia.indiatimes.com/india/SupremeCourt-strikes-down-Section-66A-of-IT-Act-which-allowed-arrests-for-objectionable-contentonline/articleshow/46672244.cms.
4. Section 153A of the Indian Penal Code
5. Section 295A of the Indian Penal Code
6. Section 66A, Information Technology Act, 2000.
7. Shreya Singhal v. Union of India AIR 2015 SC 1523.
8. Madhu Limaye v. Ved Murti, 1971 SCR (1) 145.
10. Gogoi, Gaurav. 2020. "It Is Time to Regulate Hate Speech on Social Media | Opinion". Hindustan Times. https://www.hindustantimes.com/analysis/it-is-time-to-regulate-hate-speech-on-social-media/story-x2JfnAcZ4mh404CM2wQLpO.html.
11. Privacylibrary.ccgnlud.org. 2012. In Re: Ramlila Maidan Incident. [online] Available at:
<https://privacylibrary.ccgnlud.org/case/in-re-ramlila-maidan-incident> [Accessed 20
March 2022].
12. Germany: Network Enforcement Act Amended to Better Fight Online Hate Speech. 2021. Library of Congress Global Legal Monitor. https://www.loc.gov/item/global-legal-monitor/2021-07-06/germany-network-enforcement-act-amended-to-better-fight-online-hate-speech.
13. Tworek, H. and Leerssen, P., 2019. An Analysis of Germany’s NetzDG law. [online]
Ivir.nl. Available at:
https://www.ivir.nl/publicaties/download/NetzDG_Tworek_Leerssen_April_2019.pdf
14. Archit Lohani, “Countering Disinformation and Hate Speech Online: Regulation and
User Behavioural Change,” ORF Occasional Paper No. 296, January 2021, Observer
Research Foundation.
15. 2018. Self-regulation and ‘hate speech’ on social media platforms. Article 19. [online]
London: Free Word Centre, p.18. Available at:
https://www.article19.org/resources/self-regulation-hate-speech-social-media-platforms/
16. Stanford Law School. "Regulating Freedom of Speech on Social Media: Comparing the EU and the U.S. Approach." Accessed March 17, 2022. https://law.stanford.edu/projects/regulating-freedom-of-speech-on-social-media-comparing-the-eu-and-the-u-s-approach/.
17. Dubey, Rohin. “Regulating Content in the Age of Social Media.” Bar and Bench - Indian
Legal News, Bar and Bench - Indian Legal news, 18 Oct. 2020,
https://www.barandbench.com/columns/regulating-content-in-the-age-of-social-media.
18. Laub, Zachary. “Hate Speech on Social Media: Global Comparisons.” Council on
Foreign Relations, 7 June 2019,
www.cfr.org/backgrounder/hate-speech-social-media-global-comparisons.
19. Cohen-Almagor, Raphael. (2017). Balancing Freedom of Expression and Social
Responsibility on the Internet. Philosophia (United States). 45. 1-13.
10.1007/s11406-017-9856-6.
20. Narrain, Siddharth. (2019). Social Media, Violence and the Law: 'Objectionable Material'
and the Changing Contours of Hate Speech Regulation in India. Culture Unbound:
Journal of Current Cultural Research. 10. 388-404. 10.3384/cu.2000.1525.2018103388.
21. Lohani, Archit. “Countering Disinformation and Hate Speech Online: Regulation and
User Behavioural Change.” ORF, 10 Feb. 2021,
www.orfonline.org/research/countering-disinformation-and-hate-speech-online
22. United Nations Human Rights. 2021. Report: Online hate increasing against minorities, says expert. https://www.ohchr.org/en/stories/2021/03/report-online-hate-increasing-against-minorities-says-expert.
