
Assignment of Media Law on:

Hate speech on social media and its impact on society.

Submitted to: Dr. Pushpanjali (Assistant Professor of Law)

Submitted by: Kamlesh Manwani, B.B.A.LL.B. (8th Semester), 1120181951

HIMACHAL PRADESH NATIONAL LAW UNIVERSITY

GHANDAL, SHIMLA, P.O. SHAKRAH, SUB-TEHSIL DHAMI

DISTRICT SHIMLA, HIMACHAL PRADESH-171014

Ph. 0177-2779802, 0177-2779803, Fax: 0177-2779802

Website: http://hpnlu.ac.in




DECLARATION

The work embodied in this Assignment titled “Hate speech on social media and its impact on society” has been done by me and has not been submitted elsewhere for the award of any other degree or certificate. All ideas and references have been duly acknowledged.

Kamlesh Manwani

B.B.A.LL.B. 8th Semester

(Batch 2018-23)

(1120181951)


Abstract

First, the word “hate” will be understood as “extreme negative feelings and beliefs held about a
group of individuals or a specific representative of that group because of their race, ethnicity,
religion, gender or sexual orientation.” The term “hate speech” will be understood as covering all
forms of expression that “spread, incite, promote or justify racial hatred, xenophobia, anti-Semitism
or other forms of hatred based on intolerance, including intolerance expressed by aggressive
nationalism and ethnocentrism, discrimination and hostility against minorities, migrants and people
of immigrant origin.” Both of these definitions come directly from the Council of Europe’s Protocol
to the Convention on Cybercrime so that any proposed U.S. regulations would be consistent with
the current online hate speech prohibitions enacted by the European Union.
Although hate speech may not be difficult to define, determining what hate speech is or is not
protected by the First Amendment is a far more complex issue. Therefore, it is important to note
here that most online and in-person hate speech is protected by the First Amendment.



Problem profile

1. Understanding the meaning and nature of hate speech on social media
2. Provisions in India
3. Impact of hate speech
4. Suggestions

Research methodology

The research methodology used is the secondary data method. Existing data has been collected and summarized to increase the overall effectiveness of the research. The author has gone through articles, research papers, and internet sources and websites. All the data used are already in existence and have been combined for this study.

RESEARCH OBJECTIVE:-
To study and critically analyse the impact of hate speech on the internet.

Research question and objectives.

1. What is hate speech?
2. Why does it need to be controlled?
3. What are the various provisions?
4. What is its impact?


Introduction

Online hate speech is a type of speech that takes place online with the purpose of attacking a person
or a group based on their race, religion, ethnic origin, sexual orientation, disability, and/or gender.
Online hate speech is not easily defined, but can be recognized by the degrading or dehumanizing
function it serves.
Multilateral treaties such as the International Covenant on Civil and Political Rights (ICCPR) have
sought to define its contours. Multi-stakeholder processes (e.g. the Rabat Plan of Action) have tried
to bring greater clarity and suggested mechanisms to identify hateful messages. Yet, hate speech is
still a generic term in everyday discourse, mixing concrete threats to individuals and/or groups with
cases in which people may be simply venting their anger against authority. Internet intermediaries—
organizations and social networks that mediate online communication such as Facebook, Twitter,
and Google—have advanced their own definitions of hate speech that bind users to a set of rules
and allow companies to limit certain forms of expression. National and regional bodies have sought
to promote understandings of the term that are more rooted in local traditions.
The Internet's speed and reach make it difficult for governments to enforce national legislation in
the virtual world. Social media is a private space for public expression, which makes it difficult for
regulators. Some of the companies owning these spaces have become more responsive towards
tackling the problem of online hate speech.1
Politicians, activists, and academics discuss the character of online hate speech and its relation to
offline speech and action, but the debates tend to be removed from systematic empirical evidence.
The character of perceived hate speech and its possible consequences have led to placing much
emphasis on the solutions to the problem and on how they should be grounded in international
human rights law. Yet this very focus has also limited deeper attempts to understand the causes
underlying the phenomenon and the dynamics through which certain types of content emerge,
diffuse and lead—or not—to actual discrimination, hostility, or violence.
The extremity of the speech

In order to qualify as hate speech, the speech must be offensive and project an extreme form of emotion. Not every offensive statement, however, amounts to hate speech. Expressions of advocacy and discussion of sensitive and unpopular issues have been termed ‘low value speech’, unqualified for constitutional protection.

1 https://en.wikipedia.org/wiki/Online_hate_speech

Presently, in our country, the following legislations have a bearing on hate speech, namely2:

1. The Indian Penal Code, 1860 (hereinafter IPC)

(i) Section 124A IPC penalises sedition

(ii) Section 153A IPC penalises ‘promotion of enmity between different groups on grounds of
religion, race, place of birth, residence, language, etc., and doing acts prejudicial to
maintenance of harmony’.

(iii) Section 153B IPC penalises ‘imputations, assertions prejudicial to national-integration’.

(iv) Section 295A IPC penalises ‘deliberate and malicious acts, intended to outrage religious
feelings of any class by insulting its religion or religious beliefs’.

(v) Section 298 IPC penalises ‘uttering, words, etc., with deliberate intent to wound the religious
feelings of any person’.

(vi) Sections 505(1) and (2) IPC penalise the publication or circulation of any statement, rumour or report causing public mischief and enmity, hatred or ill-will between classes.

2. The Representation of The People Act, 1951

(i) Section 8 disqualifies a person from contesting an election if he is convicted of indulging in acts amounting to the illegitimate use of freedom of speech and expression.

(ii) Sections 123(3A) and 125 prohibit the promotion of enmity on grounds of religion, race, caste, community or language in connection with an election, treating it as a corrupt electoral practice.

3. The Protection of Civil Rights Act, 1955

(i) Section 7 penalises incitement to, and encouragement of, untouchability through words, either spoken or written, or by signs or by visible representations or otherwise.

2 https://theprint.in/india/hate-speech-on-social-media-has-to-be-taken-note-of-says-scs-judge-justice-l-nageswara-rao/866059/

4. The Religious Institutions (Prevention of Misuse) Act, 1988

(i) Section 3(g) prohibits a religious institution or its manager from allowing the use of any premises belonging to, or under the control of, the institution for promoting or attempting to promote disharmony, or feelings of enmity, hatred or ill-will, between different religious, racial, language or regional groups, castes or communities.

5. The Cable Television Network Regulation Act, 1995

(i) Sections 5 and 6 of the Act prohibit the transmission or re-transmission of a programme through a cable network in contravention of the prescribed programme code or advertisement code. These codes have been defined in rules 6 and 7 respectively of the Cable Television Network Rules, 1994.

6. The Cinematograph Act, 1952

(i) Sections 4, 5B and 7 empower the Board of Film Certification to prohibit and regulate the
screening of a film.

7. The Code of Criminal Procedure, 1973

(i) Section 95 empowers the State Government, to forfeit publications that are punishable under
sections 124A, 153A, 153B, 292, 293 or 295A IPC.

(ii) Section 107 empowers the Executive Magistrate to prevent a person from committing a breach of the peace, disturbing the public tranquillity, or doing any wrongful act that may probably cause a breach of the peace or disturb the public tranquillity.

(iii) Section 144 empowers the District Magistrate, a Sub-divisional Magistrate or any other Executive Magistrate specially empowered by the State Government in this behalf to issue orders in urgent cases of nuisance or apprehended danger. The above offences are cognizable and thus have serious repercussions on the liberties of citizens, since they empower a police officer to arrest without orders from a magistrate and without a warrant, as in section 155 CrPC.


Impact of Hate Speech on Freedom of Expression

The right to freedom of speech and expression is one of the most essential liberties recognized by democratic States. The concept of liberty has been primarily influenced by the principle of individual autonomy. The liberal theory of free speech views speech as an intrinsic aspect of the autonomous individual; hence, any restriction on the exercise of this liberty is always subject to judicial scrutiny. The objective of free speech in a democracy is to promote a plurality of opinions.

Hate speech is an expression that is likely to cause distress to or offend other individuals on the basis of their association with a particular group, or to incite hostility towards them. There is no general legal definition of hate speech, perhaps owing to the apprehension that setting a standard for determining unwarranted speech may lead to the suppression of this liberty.

Free speech has always been considered the quintessence of every democracy. The doctrine of free speech has evolved as a bulwark against the state’s power to regulate speech. The liberal doctrine was a measure against the undemocratic power of the state. The freedom of expression was one of the core freedoms incorporated in the Bill of Human Rights. The greater value accorded to expression, in the scheme of rights, explains the reluctance of lawmakers and the judiciary in creating exceptions that may curtail the spirit of this freedom. Perhaps this is the reason behind the reluctance in defining hate speech.

The issue of hate speech has assumed greater significance in the era of the internet, since the accessibility of the internet allows offensive speech to affect a larger audience in a short span of time. Recognising this issue, the Human Rights Council’s ‘Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression’ on content regulation on the internet expressed that freedom of expression can be restricted on the following grounds, namely:3

(i) child pornography (to protect the rights of children),

(ii) hate speech (to protect the rights of affected communities)

(iii) defamation (to protect the rights and reputation of others against unwarranted attacks)

(iv) direct and public incitement to commit genocide (to protect the rights of others)

(v) advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence (to protect the rights of others, such as the right to life).

3 https://lawstreet.co/legal-insiders/hate-speech-social-media-justice-l-nageswara-rao/

Hate Speech and the Internet4

While the internet has made the globe a small and connected place, it has also created a space for unregulated forms of expression. In Delfi AS v. Estonia, the applicant, the owner of an internet news portal, approached the court against the order of the Estonian court by which it had been held liable for user-generated comments posted on its website. This was the first case in which the court had to examine the scope of Article 10 in the field of technological innovations.

The court observed that while the internet is an important tool for disseminating information and opinions, it also serves as a platform for disseminating unlawful speech. The court emphasized the need to ‘harmonise these two conflicting realities’, as freedom of expression cannot be exercised at the cost of other rights and values enunciated in the Convention. The court, upholding the decision of the Estonian court, held that:

... in cases such as the present one, where third-party user comments are in the form of hate speech
and direct threats to the physical integrity of individuals, as understood in the Court’s case-law, the
Court considers ... that the rights and interests of others and of society as a whole may entitle
Contracting States to impose liability on Internet news portals, without contravening article 10 of
the Convention, if they fail to take measures to remove clearly unlawful comments without delay,
even without notice from the alleged victim or from third parties.

The content and context of an expression play an important role in analysing the permissibility of the speech. The court takes into account various factors before excluding speech from protection under the Convention, such as the nature of the remarks, their dissemination and potential impact, the status of the targeted person, the status of the author of the remarks, and the nature and severity of the penalty imposed (to determine the proportionality of the interference).

4 https://www.protegopress.com/tackling-social-medias-hate-speech-problem-in-india/

Analysis

On 11th August 2020, an offensive Facebook post about Prophet Muhammad played a significant
role in inciting violent clashes in Bengaluru in India, the worst that the city has seen in recent
history. Unfortunately, this incident is not an isolated one. Hate speech and misinformation
propagated through platforms like Facebook, Twitter and WhatsApp have resulted in mob violence, lynching and communal riots, and have claimed many innocent lives in India.

Facebook has come under particular scrutiny recently. An article published in the Wall Street
Journal has alleged that Facebook India’s Public Policy Head selectively shielded offensive posts of
leaders of the ruling Bharatiya Janata Party (BJP). Facebook has since banned the BJP leader from its platform and acknowledged the “need to do more”. While the Parliamentary Standing
Committee on Information Technology led by an Opposition member has initiated an investigation
into the WSJ claims, competing claims have subsequently been made by India’s Information
Technology Minister accusing Facebook of bias against supporters of right-of-centre ideology.

The recent allegations against Facebook have led to widespread concern about the platform’s lack
of sincerity in tackling hate speech in India. As the platform continues to be embroiled in a heated
political controversy in India, it is time for greater transparency on what goes on behind the heavy
curtains of outsourced content moderation in the platform’s biggest market in the world.

The current scenario has understandably amplified calls for new legislation to moderate and
regulate content on social media platforms. However, it is crucial that any legislation of this nature
is carefully thought out and balances public interest with constitutionally protected rights of free
speech and privacy. This caveat arises from the fact that similar instances in the past have spurred
the Indian Government to propose hasty and ill-thought-out regulations. In December 2018, the Government proposed amendments to the existing Information Technology (Intermediaries Guidelines) Rules, 2011 under the Information Technology Act in India. The IT Act provides a ‘safe
harbour’ to social media platforms like Facebook and Twitter. The proposed Guidelines seek to
impose an obligation on platforms to identify the originator of private messages, and proactively
monitor communication. Considered to be a knee-jerk reaction to the proliferation of
misinformation on WhatsApp, the proposed amendments have been heavily criticised for
threatening free speech and privacy of users, weakening encryption, and enabling State
surveillance. These guidelines have not been enacted yet.

Admittedly, any potential approach to the regulation or moderation of social media content will run the risk of offending the free speech and privacy of users. While it is important to think about the
impact of such laws, it is equally important to address the circumstances under which American
social media platforms operate in India. The recent critiques against Facebook demonstrate a clear
lack of accountability and transparency in the content moderation and removal practices of social
media platforms in India. This means that investigative reports by journalists or leaked e-mails and correspondence are often the only source of information regarding these problematic issues.5

Further, the Indian Government often finds it difficult to engage with platforms like Facebook or
Twitter. On several occasions, founders and global teams of these platforms have shown reluctance
to actively engage with Indian lawmakers to address challenges which are unique to the Indian user
base. For instance, in light of the various instances of mob lynching and communal violence fuelled
by rumours and misinformation on WhatsApp, the Indian Government made several requests to
WhatsApp in 2018 to devise “suitable interventions” to contain fake news and sensational messages
on its platform. In the absence of a suitable response by the platform besides token changes, the
Government had to once again write to WhatsApp stating that “it may not be out of place to
consider the medium [WhatsApp] as an abettor (albeit unintentional)” in the instances of lynching
and mob violence. Similarly, in 2019, the Parliamentary Committee on Information Technology
made unsuccessful attempts to summon Jack Dorsey to investigate Twitter’s alleged political bias
ahead of the General Elections.

As India may still be considering amendments to the Intermediary Guidelines, Facebook and other
platforms will have to proactively cooperate with Indian lawmakers. Lack of timely cooperation in
the past, as seen with the proposed amendments to the Intermediary Guidelines in 2018, has spurred
the Government to initiate hasty and counterintuitive legislative proposals.

According to Time, Facebook has commissioned an independent study to analyse the platform’s
impact on human rights in India. This is the first time the news of such a study has surfaced on a
public platform. It is also not clear if the State or civil society members have been consulted or
included in this process. Expressing deep concern over opaque platform practices in India, over 40 NGOs have written an open letter to Mark Zuckerberg to address Facebook India’s bias and ensure on-ground engagement with human rights organisations on the “India audit”.

5 https://www.hindustantimes.com/analysis/it-is-time-to-regulate-hate-speech-on-social-media/story-x2JfnAcZ4mh404CM2wQLpO.html

Based on these reports, it is quite evident that social media platforms need to enter into
comprehensive and participatory dialogue with members of civil society, academia, and users, as
well as the Government. Further, given the recent allegations of biased moderation, Facebook and
other platforms should voluntarily release information on how moderation decisions are made in
India, and more importantly information about the people involved in the process. For instance, the
Facebook Transparency Report and the quarterly Community Standards Enforcement Report can
provide more granular information on its decision-making processes for India rather than superficial
content restriction statistics. Platforms, along with stakeholders and experts, can also consider the
possibility of collaborative and auditable content moderation policies which are specific to local
contexts, and assess their possible impact on global information exchange. The sheer scale at which
Facebook and other platforms deal with decisions on speech is mind-boggling. At the same time,
they shouldn’t have to do it in silos. Community involvement and user empowerment is key.6

Social media platforms are built around an idea of facilitating the free flow of information. In this
context, the lack of information sharing and engagement with key stakeholders is particularly
glaring. In the Indian context, with a government that is seriously considering new rules that would
be harmful both to their business models and to their users’ freedom of expression and privacy, this insularity is particularly self-defeating. Platforms need to change tack and get serious about robust engagement and accountability, to assure civil society, the public, and Indian lawmakers that they can be trusted.

6 https://www.tribuneindia.com/news/comment/hate-speeches-put-social-media-under-scrutiny-393024

References

1. https://en.wikipedia.org/wiki/Online_hate_speech
2. https://theprint.in/india/hate-speech-on-social-media-has-to-be-taken-note-of-says-scs-judge-justice-l-nageswara-rao/866059/
3. https://lawstreet.co/legal-insiders/hate-speech-social-media-justice-l-nageswara-rao/
4. https://www.protegopress.com/tackling-social-medias-hate-speech-problem-in-india/
5. https://www.hindustantimes.com/analysis/it-is-time-to-regulate-hate-speech-on-social-media/story-x2JfnAcZ4mh404CM2wQLpO.html
6. https://www.tribuneindia.com/news/comment/hate-speeches-put-social-media-under-scrutiny-393024
