
CYBER LAWS

MODULE-I:
Introduction
a) Need and role of law in cyber space.

What is Cyber Law?

In order to arrive at an acceptable definition of the term Cyber
Law, we must first understand the meaning of the term law.
Simply put, law encompasses the rules of conduct:
1. that have been approved by the government, and
2. which are in force over a certain territory, and
3. which must be obeyed by all persons on that territory. Violation of
these rules will lead to government action such as imprisonment or
fine or an order to pay compensation.
The term cyber or cyberspace has today come to signify everything
related to computers, the Internet, websites, data, emails, networks,
software, data storage devices (such as hard disks, USB disks etc)
and even Airplanes, ATM machines, Baby monitors, Biometric
devices, Bitcoin wallets, Cars, CCTV cameras, Drones, Gaming
consoles, Health trackers, Medical devices, Power plants, Self-
aiming rifles, Ships, Smart-watches, Smartphones & more.
Thus a simplified definition of cyber law is that it is the “law
governing cyber space”.
The issues addressed by cyber law include cyber crime, electronic
commerce, intellectual property as it applies to cyberspace, and data
protection & privacy.
1. Cyber crime
An interesting definition of cyber crime was provided in the
“Computer Crime: Criminal Justice Resource Manual” published
in 1989. According to this manual, cyber crime covered the
following:
1. computer crime i.e. any violation of specific laws that relate to
computer crime,
2. computer related crime i.e. violations of criminal law that involve a
knowledge of computer technology for their perpetration,
investigation, or prosecution,
3. computer abuse i.e. intentional acts that may or may not be
specifically prohibited by criminal statutes. Any intentional act
involving knowledge of computer use or technology is computer
abuse if one or more perpetrators made or could have made gain
and / or one or more victims suffered or could have suffered loss.
2. Electronic commerce
The term electronic commerce or E-commerce is used to refer to
electronic data used in commercial transactions. Electronic
commerce laws usually address issues of data authentication by
electronic and/or digital signatures (an illustrative sketch of the
signing step appears after this list of issues).
3. Intellectual Property in as much as it applies to cyberspace
This encompasses:
1. copyright law in relation to computer software, computer source
code, websites, cell phone content etc;
2. software and source code licenses;
3. trademark law with relation to domain names, meta tags,
mirroring, framing, linking etc.;
4. semiconductor law which relates to the protection of
semiconductor integrated circuits design and layouts;
5. patent law in relation to computer hardware and software.
4. Data protection & privacy
Data protection and privacy laws address legal issues arising in the
collection, storage, and transmission of sensitive personal data by data
controllers such as banks, hospitals, email service providers etc.
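To make the idea of data authentication concrete, the following is a minimal sketch of how a digital signature works in practice, written in Python and assuming the third-party cryptography package; the key names and the message are illustrative only, and the sketch shows the general mechanism rather than what any particular statute requires.

# Minimal sketch of digital-signature-based data authentication, assuming
# Python and the third-party 'cryptography' package; keys, message, and
# names are purely illustrative.
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# The sender generates a key pair and signs the transaction record.
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

record = b"Order 42: 500 boxes of nails, payable on delivery"
signature = private_key.sign(record)

# The receiving party verifies the record against the sender's public key;
# any change to the record (or a forged signature) makes verification fail.
try:
    public_key.verify(signature, record)
    print("Signature valid: the record is authentic and unaltered.")
except InvalidSignature:
    print("Signature invalid: the record was altered or not signed by this key.")
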
The need for Cyber Law
The first question that a student of cyber law will ask is whether
there is a need for a separate field of law to cover cyberspace. Isn’t
conventional law adequate to cover cyberspace?
Let us consider cases where so-called conventional crimes are
carried out using computers or the Internet as a tool. Consider
cases of spread of pornographic material, criminal threats delivered
via email, websites that defame someone or spread racial hatred
etc. In all these cases, the computer is merely incidental to the
crime. Distributing pamphlets promoting racial enmity is in
essence similar to putting up a website promoting such ill feelings.
Of course, it can be argued that when technology is used to commit
such crimes, the effect and spread of the crime increases
enormously. Printing and distributing pamphlets even in one
locality is a time consuming and expensive task while putting up a
globally accessible website is very easy.
In such cases, it can be argued that conventional law can handle
cyber cases. The Government can simply impose a stricter liability
(by way of imprisonment and fines) if the crime is committed
using certain specified technologies. A simplified example would
be stating that spreading pornography by electronic means should
be punished more severely than spreading pornography by
conventional means.
As long as we are dealing with such issues, conventional law
would be adequate. The challenges emerge when we deal with
more complex issues such as ‘theft’ of data. Under conventional
law, theft relates to “movable property being taken out of the
possession of someone”.
The General Clauses Act defines movable property as “property of
every description, except immovable property”. The same law
defines immovable property as “land, benefits to arise out of land,
and things attached to the earth, or permanently fastened to
anything attached to the earth”. Using these definitions, we can say
that the computer is movable property.
Let us examine how such a law would apply to a scenario where
data is ‘stolen’. Consider my personal computer on which I have
stored some information. Let us presume that some unauthorized
person picks up my computer and takes it away without my
permission. Has he committed theft? The elements to consider are
whether some movable property has been taken out of the
possession of someone. The computer is movable property and I
am the legal owner entitled to possess it. The thief has dishonestly
taken this movable property out of my possession. It is theft.
Now consider that some unauthorized person simply copies the
data from my computer onto his pen drive. Would this be theft?
Presuming that the intangible data is movable property, the concept
of theft would still not apply as the possession of the data has not
been taken from me. I still have the ‘original’ data on the computer
under my control. The ‘thief’ simply has a ‘copy’ of that data. In
the digital world, the copy and the original are indistinguishable in
almost every case.
Consider another illustration on the issue of ‘possession’ of data. I
use the email account rohasnagpal@gmail.com for personal
communication. Naturally, a lot of emails, images, documents etc
are sent and received by me using this account. The first question
is, who ‘possesses’ this email account? Is it me because I have the
username and password needed to ‘login’ and view the emails? Or
is it Google Inc because the emails are stored on their computers?
Another question would arise if some unauthorized person obtains
my password. Can it be said that now that person is also in
possession of my emails because he has the password to ‘login’
and view the emails?
Another legal challenge emerges because of the ‘mobility’ of data.
Let us consider an example of international trade in the
conventional world. Sameer purchases steel from a factory in
China, uses the steel to manufacture nails in a factory in India, and
then sells the nails to a trader in the USA. The various
Governments can easily regulate and impose taxes at various
stages of this business process.
Now consider that Sameer has shifted to an ‘online’ business. He
sits in his house in Pune (India) and uses his computer to create
pirated versions of expensive software. He then sells this pirated
software through a website (hosted on a server located in Russia).
People from all over the world can visit Sameer’s website and
purchase the pirated software. Sameer collects the money using a
PayPal account that is linked to his bank account in a tax haven
country like the Cayman Islands.
It would be extremely difficult for any Government to trace
Sameer’s activities.
It is for these and other complexities that conventional law is unfit
to handle issues relating to cyberspace. This brings in the need for
a separate branch of law to tackle cyberspace.
b) Authority and scope of Governments to regulate Internet.

Over the past decades, the Internet has become an indispensable element
of our social, professional, and economic lives. Some would even argue
that it should be labelled a critical infrastructure (cf. Latham 2003). The
Internet has brought a significant segment of the world’s population
more and better possibilities to connect. It has led to an increase in
efficiency in accessing and sharing information. And it has been a boost
for global trade, facilitating a new online economy and digitising
traditional economies. At the same time, it has become increasingly
clear that the Internet can also be used to create harms of many kinds,
ranging from cybercrimes (hacking, identity theft, and other forms of
fraud) to cyber-espionage, cyber-terrorism, and cyber-warfare. Due to
the centrality of the Internet in the global economy and in our personal
and professional lives, moreover, the impact of cybersecurity risks can
potentially be very severe. This is why, over the past decade,
cybersecurity – keeping the Internet and the networked digital
technologies connected to that network safe and secure – has become an
increasingly important topic.

One of the answers that is often given by governments and regulators is
that we can make the Internet safer and more secure by gaining
more control over it. If we can keep a tighter rein on the behaviours of
actors online – be they individuals, communities, organisations,
businesses, or public parties – then this will increase security in
cyberspace. More security is essential, governments argue. Not only
does it ensure effective and uninterrupted network operations, but it also
strengthens users’ trust in the system. This bolsters the innovative
potential that the Internet offers and will make it flourish even more in
years to come.
In this article, we will look at one of the key strategies that governments
and regulators have embraced in the past decade or so in the name of
increasing security in cyberspace. This strategy is called ‘techno-
regulation’: ‘influencing individuals’ behaviours by building legal norms
into technological devices’. Techno-regulation means that we build
barriers into technological artefacts or systems, so that individuals using
them cannot commit undesired or illegal actions anymore. The key
element of techno-regulation is that, by implementing norms and (legal)
rules into technological artefacts, these artefacts become the enforcer of
the rule or the norm.
It is easy to see why techno-regulation might be a popular regulatory
strategy for governments. If the technological systems offered only
allow end users a clearly circumscribed set of actions, and if deviating
from that set of actions is made impossible because systems simply do
not offer the possibility to do so, then the level of control increases.
When techno-regulation is deployed to increase security the outcome is
easy to predict: such systems will, in fact, be more secure. There are
numerous examples of cases where techno-regulation is used, both in the
offline and in the online world, for a wide variety of purposes.
Increasing security is one of them, and it turns out that this strategy is, in
fact, very effective in raising security levels.
The main contribution of this article lies in asking the question
that precedes the deployment of techno-regulation. It takes a critical
stance towards the silent assumption that more cybersecurity must
mean more control. Can we not use regulatory strategies to increase
cybersecurity that do not require, or lead to, more control? In our view,
the answer is a wholehearted ‘yes, we can’. Yes, we can make
cyberspace more secure, and yes, we can do this, for instance, by using
another interpersonal strategy that we commonly use in our everyday,
offline lives: trust. In this article, we will show that rather than viewing
control as the only, or even the most important path to cybersecurity, we
should expand our view on the means and ends of securing cyberspace
and allow for other key mechanisms as well.
We will start the article with a brief discussion on cybersecurity: what it
is, why it matters and why governments feel they have a responsibility in
(contributing to) cybersecurity. This is followed by a discussion of
techno-regulation, explaining its strengths and weaknesses. It is these
weaknesses, and the drive for control that leads to increased adoption of
techno-regulatory interventions, that are at the heart of this article. We
will argue that rather than focusing on increased control for
cybersecurity, we can also use other mechanisms to increase security,
most importantly trust. The second part of this article discusses what
trust is and how we can put it to good use for cybersecurity. We will
explain that trust as a regulatory strategy for a more secure Internet can
take different forms: we can use trust as an implicit strategy, or as an
explicit mechanism for improved cybersecurity. We conclude that
reliance on techno-regulation as a dominant strategy for improving
cybersecurity has serious shortcomings, and that it is vital that we
broaden our views on how to make the Internet more safe and secure.
Using trust as a regulatory strategy could be a good candidate.
c) Free speech and expression on Internet.
For decades, concerns over the security of networked technologies were
largely left to the technical community. Under the banner of
‘information security’, research and development in this area focused
predominantly on three domains: protecting the confidentiality of data,
keeping the integrity of data safe and secure, and ensuring
the availability of systems, networks, and data (cf. Schneier 2004;
Nissenbaum 2005; Hansen and Nissenbaum 2009; Singer and
Friedman 2013; Appazov 2014). Over the years, much progress has been
made on these three topics in the technical sciences.
In recent years, however, researchers and stakeholders outside the
technical domain have started pointing out that, to create adequate
protection levels for security on and off the Internet, a purely technical focus
is too limited. ‘Cybersecurity’ became a novel term for a much broader
understanding of security in relation to the Internet, emphasising not
only the role of technologies and systems themselves, but also those of
human beings that use these systems. Understanding cybersecurity risks,
and finding ways to reduce these to acceptable levels (whatever we
define these to be), can only be done effectively when both the human
and the technological component are taken into consideration (cf. Van
den Berg et al. 2014). Cybersecurity issues not only require technical
remedies and solutions, but also demand responses and interventions by
governments, by regulators and policy-makers, by businesses and
organisations, and even by end users themselves. Consequently, many
Western countries now consider cybersecurity an element of their
national security strategy (cf. Nissenbaum 2005). These countries have
started developing regulatory mechanisms to improve cybersecurity. But
many of them wonder whether they can, and should, do more. One of
the strategies they increasingly turn to is implementing rules and norms
into technical systems, also known as ‘techno-regulation’ or ‘code as
law’ (Lessig 2006).
The foundations for the notion of techno-regulation were developed by
Lawrence Lessig in his landmark book Code 2.0 (Lessig 2006; also see
Lessig 1999). In this book, Lessig pointed out that legal scholars tend to
be predisposed to use legal rules as a remedy to any societal problem
they encounter. But there are, in fact, a number of other regulatory
forces that may shape individuals’ actions as well. For example,
individuals’ actions may be influenced by market forces, such as price
mechanisms or taxation. Or they may be steered in their choices and
behaviours by social norms. And finally – and most importantly for this
article – Lessig pointed out that individuals’ behaviours are shaped by
architectures (Lessig 2006, 123). This is so in both a literal and a more
figurative meaning of the word ‘architecture’. Architecture as in
buildings, the physical organisation of our public and private spaces,
affects what people can and cannot do. We are contained in, and
constrained by, the shape and size of the spaces we work and live in, or
travel through. But architecture also has a more figurative meaning.
Software and hardware also qualify as forms of architecture: they shape
what we can and cannot do when using digital technologies. They are
regulatory forces, steering, guiding and influencing behaviours, and
actions of end users. Lessig (2006) used the terms ‘code as law’ and
‘regulation by design’ to illustrate the idea that architectures can be used
to regulate behaviours of individuals. Over the years this idea has come
to be called ‘techno-regulation’ (Wu 2003; Kesan and Shah 2006;
Brownsword and Yeung 2008a; Kerr 2010; Hildebrandt 2011;
Leenes 2011; Yeung 2011; Van den Berg and Leenes 2013; Van den
Berg 2014).

Limiting and enabling actions through design.

Techno-regulation refers to the implementation of (legal) rules or norms
into artefacts or systems, with the goal of shaping, steering, or
influencing end users’ actions. Note that techno-regulation has both
a limiting and an enabling function. On the one hand, the design of
software and hardware provides a limited action space for users: they
can only use an artefact or system in the way it was intended by the
designers. Other functionalities or options are simply not open to them
(except maybe to a very small minority of very tech-savvy end users or
hackers). In this way, techno-regulation enables designers (or regulators)
to keep end users away from undesired or illegal actions. Such actions
are literally coded away. But techno-regulation does not only offer
limitations. Techno-regulation also means that we build specific
incentives into artefacts or systems to encourage users towards desired
or desirable actions.
One key characteristic of techno-regulation is that end users are often
entirely unaware of the fact that their actions are being regulated in the
first place. Techno-regulation invokes such implicit, almost automatic
responses, that end users do not realise that their action space is limited
by the artefacts’ offerings. Thus, when (legal) rules or norms are
implemented into artefacts or systems, it becomes difficult, if not
outright impossible, to disobey these rules or norms. These artefacts and
systems become implicit managers and enforcers of rules: they
automatically, and oftentimes even unconsciously, steer or guide users’
actions in specific directions – towards preferred (set of) actions and
away from undesirable or inappropriate ones.
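As a toy illustration of this point (a hypothetical sketch in Python, not drawn from any actual system discussed here), consider a small publishing routine whose interface simply does not offer the prohibited action: the norm is enforced by the design of the code itself rather than by a sanction applied afterwards.

# Hypothetical sketch of 'code as law': the artefact itself enforces the rule.
# The assumed policy ("only these file types may be published") is not stated
# as a warning to the user; it is built into the only upload path the system offers.

ALLOWED_EXTENSIONS = {".pdf", ".txt", ".png"}   # illustrative policy, not a real rule

class UploadRejected(Exception):
    """Raised when the artefact refuses an action by design."""

def share_file(filename: str, payload: bytes) -> str:
    """The only way to publish content; disallowed actions simply do not exist."""
    if not any(filename.lower().endswith(ext) for ext in ALLOWED_EXTENSIONS):
        # No fine or warning after the fact: the undesired action is coded away.
        raise UploadRejected(f"{filename}: this file type is not offered by the system")
    return f"published:{filename}"               # stand-in for the real storage step

print(share_file("notes.txt", b"some text"))     # permitted action succeeds
try:
    share_file("tool.exe", b"binary")            # the prohibited action cannot occur
except UploadRejected as reason:
    print(reason)
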

Benefits and drawbacks of using techno-regulation

For regulators, this last point is considered one of the key benefits of
using techno-regulation. Using techno-regulation is efficient, easy, and
very effective. It leads to very high levels of compliance. Because
disobeying the rule that is implemented into an artefact or system is
(nearly) impossible, techno-regulatory interventions are among
the most effective forms of regulation. Moreover, techno-regulation is
one of the most cost-effective ways of regulating. The enforcement of
the rule for example is delegated to the artefact or system, which means
there is no need for expensive law enforcement personnel. And since
end users are much less likely to break the rule, costs for enforcement
are expected to be reduced.
These two benefits play a key role in the increasing popularity of using
this regulatory strategy. This combination is a regulator’s dream: a
regulatory strategy that will lead to substantial, if not complete,
compliance at a fraction of the cost of many other types of regulatory
interventions, including but not limited to laws that need enforcement or
norms that do not have the same level of effectiveness. What’s more,
when regulators push for the use of techno-regulation on the Internet, for
example to increase security, this automatically leads in the process to
significantly more control: more end users will ‘colour within the lines’,
thus leading to a higher sense of command over the safe and secure use
of cyberspace.
Having said that, over the years, several lines of critique have been
launched against the use of techno-regulation. For one, this form of
regulation is incredibly opaque (cf. Leenes 2011; Yeung 2011). When
rules and norms get implemented into a technological artefact, end
users who are confronted with such a device are often not aware of the fact
that they are being regulated. Sometimes instances of techno-regulation
can be labelled as illegitimate, precisely because end users do not know
if, and when, they are being regulated (cf. Yeung 2011). Under the rule
of law, transparency and accountability are key values to uphold for
governments when they seek to regulate the behaviours of citizens.
Citizens need to know under which rule (laws, standards, institutional
frameworks) they live and need to be able to hold government
accountable for the proper implementation of laws and law enforcement
that government executes on their behalf. When techno-regulatory
interventions take place outside the awareness of citizens, the
requirement of transparency is not met. Moreover, techno-regulation
runs the risk of being undemocratic (cf. Benoliel 2004). If end users do
not know that their actions are regulated by specific architectures, they
have no possibility to question, appeal or object to this.
Despite these reservations, techno-regulation is an increasingly popular
regulatory strategy for both public and private parties on the Internet.
Why is this the case? We explain the rising popularity of techno-
regulation by pointing to an underlying assumption. When techno-
regulation is deployed on the Internet, especially to increase
cybersecurity, regulators assume that when end users have less room for
manoeuvring, there will be fewer risks to security. If end users ‘cannot be
bad’ anymore, cybersecurity increases. Deploying techno-regulatory
interventions entails that the regulator increases the level of control,
because the action space of regulatees is more clearly defined and has
sharp boundaries. Hence, when using techno-regulation regulators may,
to borrow an idiom, kill two birds with one stone: it makes the Internet
more secure, and it increases the level of control. Especially for an
environment that is so valuable yet also appears to be so ‘untameable’ as
cyberspace, reaching these dual goals through a single type of
intervention may be very alluring for governments.
The question we need to ask ourselves, though, is: Is there really a
relationship between control over the Internet and cybersecurity, and
what role does techno-regulation play in this? Does an increase in
control over the Internet, brought about by using techno-regulation,
make the Internet a safer place?

More control equals more (cyber)security?

Answering that question is difficult for several reasons. First, there is the
issue of how we would qualify ‘making the Internet a safer place’. For
whom would the Internet become safer? And at what price? As we have
seen there are significant drawbacks to using techno-regulation. End
users carry the burden of the costs, while regulators (both from private
and public parties) stand to gain: more efficiency and effectiveness,
more compliance, a cheap implementation, and so on.
Second, it is debatable whether techno-regulatory interventions target
the right audience if they are designed to increase cybersecurity. As we
have seen, it is likely that techno-regulatory interventions lead to high
levels of compliance, since end users will almost always automatically
follow the implemented rule or norm. Hence, fewer end users will
engage in ‘risky behaviours’, whatever these may be in a given context.
While it is true that techno-regulation may prevent end users from
making mistakes that can have a negative effect on their own
cybersecurity or that of others, using this strategy will not weed out the
biggest threat to security: that of intentional attackers. Hackers,
cybercriminals, and those who engage in acts of cyber-espionage or
cyber-terrorism go to great lengths to find weaknesses in systems and
services and to exploit these to their benefit. Currently, the risks posed
by these intentional attackers are considered to be far greater (both in
terms of probability of occurrence and in terms of impact) than those
created by genuine errors that random end users will make. Techno-
regulatory interventions, or more generally the idea that a system’s
design will delineate the action space of end users, have no effect on
those who intentionally seek to exploit vulnerabilities in it. This
argument casts severe doubts on the assumption that more control
(through techno-regulation) will also lead to more cybersecurity.
While techno-regulation as a strategy has clear and admitted benefits, we
argue that the increasing tendency to seek to ‘solve’ cyber risks through
techno-regulation is in need of reconsideration. Should we not also look
at other strategies as well? In the rest of this article, we will contrast
techno-regulation with another regulatory strategy, which up until this
point in time has received little attention in the field of cybersecurity:
using trust.
d) Impact of Telecommunication and Broadcasting Law on Internet
Regulation.
Without some basic sense of trust, we would not be able to get up in the
morning. Overwhelmed by just thinking about all the possible turns fate
might take, we would not dare to leave the warm safety of our beds.
Trust is a strategy to deal with the complexities inherent in life. The fact
that, to a certain extent, we are aware of the unpredictable character of
the future and that we cannot foresee all the actions of others means that
we need trust to set aside some of these uncertainties. To trust is to act as
if we know for certain what tomorrow will bring, while in fact we are
groping in the dark.
Although most people intuitively have some ideas on what trust is, this
concept cannot be pinned down easily. In their review article, Seppanen,
Blomqvist, and Sundqvist (2007) counted more than 70 different
definitions of trust and this was only in the domain of inter-
organizational trust. As Simon (2013, 1) rightly concludes: ‘As
pervasive trust appears as a phenomenon, as elusive it seems as a
concept’.

What is trust?

While it is not feasible to provide an all-encompassing definition of
trust, there are some characteristics that reoccur in inter-disciplinary
scholarly discussions on trust. A proposed starting point is that trust is
closely connected to having positive expectations about the actions of
others. When we trust someone, we assume that the other will not act
opportunistically but take into account our interests (e.g. see the
encapsulated-trust account of Hardin 2006). This also entails that when
we speak of trust there must be at least two actors, often referred to as
a trustor (who vests trust) and a trustee (who receives trust). The trustor
and trustee do not interact in a vacuum. Trust is contextual in the sense
that certain – often implicit – societal roles, institutions, norms, and
values guide the expectations of actors involved. The notion of
expectations assumes that the trustor is not completely sure of the
actions of the trustee. The trustor does not possess all the information or
does not have full control to determine the outcome of a certain event.
Moreover, the trustor depends on the trustee and the trustee has the
possibility to act in a way that was not expected by the trustor. To speak
of trust, the trustee, therefore, has to have some kind of agency. In
addition, for the trustor there has to be something at stake. Trust consists
of at least three components: The trustor (A) trusts the trustee (B) to do
something (X) (Hardin 2001, 2006). If the actions of the trustee do not
make a difference to the position of the trustor, that is, if the trustor is
not affected by the performance of the trustee, we cannot meaningfully
speak of trust. Trust is inextricably connected to vulnerability (also see
Baier 1986). The trustee may betray the trustor, who as a consequence
runs the risk of getting hurt – physically, mentally, or otherwise. Vesting
trust in another person is a risky business (Luhmann 1979).
Traditionally, trust is located in interpersonal relations and interactions
(Good 1988; McLeod 2014). However, increasingly, due to – amongst
others – a wide range of societal and technological developments, trust
is not only being placed in persons but in systems as well, which is
referred to as system trust or confidence (Luhmann 1979, 1988;
Giddens 1990, 1991; Giddens and Pierson 1998). Recently, there also
has been an increased interest in trust in and on the Internet – referred to
as e-trust (cf. Lahno and Matzat 2004; Taddeo 2010; Taddeo and
Floridi 2011; Keymolen 2016) and trust in Artificial Intelligence and
robots (cf. Taddeo 2011; Coeckelbergh 2012). This trust in and through
(technological) systems, such as the Internet, leads to inquiries in the
specific shape trust takes because of the mediating workings of the
technologies at hand. Focusing on the Internet, some scholars argue that
interpersonal trust is nearly impossible online (cf. Pettit 2004). However,
generally, it is concluded that trust is ‘translated’ in an online context by
developing trust tools such as reputation schemes and other actor-
identifying measures (de Laat 2005; Simpson 2011).
The function of trust
In order to be able to investigate if and how trust might be of value for
regulating the cyber domain, we have to focus on the functionality of
trust. Which problems does trust solve that could be fruitful for cybersecurity
measures as well?
Luhmann (1979, 1988) defines the function of trust as a strategy to
reduce complexity. This complexity resides in the fact that, as human
beings, we have to deal with uncertainty and the unpredictable actions of
all other human beings with whom we share our world. As we often do
not have full information at hand and cannot be sure about what
tomorrow brings, trust is a ‘blending of knowledge and ignorance’
(Luhmann 1979, 25); it is acting as if the future is certain. Although we
do not know for certain, we assume that an anticipated future will
become reality based on the assumption that other actors will act in a
stable and predictable way. Trust, therefore, is not about diminishing
uncertainty, but about accepting it. When trust is set in motion,
vulnerabilities and uncertainties are not removed; they are suspended
(Möllering 2006, 6). We could say that trust is a very productive fiction
because ‘certain dangers which cannot be removed but which should not
disrupt action are neutralized’ by it (Luhmann 1979, 25).
As a complexity-reducing strategy, trust is a very useful way to cope
with the uncertainties of everyday life. For example, in economics, trust
is generally recognised as an important mechanism to reduce
transactional costs. Since people have to operate in situations
characterised by uncertainty that may negatively impact their
willingness to interact, investments have to be made in all sorts of
safeguards and controls to reduce this uncertainty and to facilitate
interaction. However, when there is (mutual) trust, this uncertainty is
neutralised, and risk management measures and the costs of addressing
damaging, opportunistic, or hostile behaviour can be avoided (cf.
Möllering 2006, 26–29).
Trust has also been seen as an important prerequisite for innovation
practices (cf. Nooteboom 2013). Because innovation is inherently
uncertain, it is more difficult to rely on complexity-reducing
mechanisms such as contracts, which are focused on determining the
process. Trust, however, is about holding positive expectations about the
outcome without too much control over the course of action. Trust,
therefore, is specifically needed in innovation processes where the
process benefits from openness and the results are uncertain. Moreover,
when it comes to adopting new goods or services trust is acknowledged
as an important feature (McKnight and Chervany 2002; McKnight,
Choudhury, and Kacmar 2002; Warkentin et al. 2002; Keymolen, Prins,
and Raab 2012). Without some trust in the functioning of a good or a
service, it becomes less likely that customers will move to purchase.
In summary, trust is valued as a fruitful strategy to cope with uncertainty
because, amongst others, trusting actors can keep transactional costs
low, foster innovation, and feel confident enough to adopt new services
and goods.

The limits of trust

While trust is a very fruitful and robust strategy for dealing with the
uncertainties of everyday life, its limitations should not be
underestimated. First, trust is never the sole strategy at work to reduce
complexity. Rather, we should think of an amalgam of interdependent
strategies that are used to reduce complexity. Tamanaha (2007), for
example, explains how the rule of law enhances certainty, predictability,
and security between citizens and government (vertical) and among
citizens themselves (horizontal). The existence of rules and the fact that
citizens know there is a safety net in the form of a legal system fosters
trust. Möllering (2006, 111) explains how trust ‘is an ongoing process of
building on reason, routine, and reflexivity’. Estimating chances, relying
on roles and norms in society, and step-by-step building of confidence in
interactions provide meaningful grounds for the suspension of
uncertainties characteristic of trust. Luhmann (1979) states that trust
can only take place in a familiar world, an environment that is already
known by its inhabitants to a certain extent. A world is familiar when
shared norms and values are present; when people assume that others
perceive the world in a similar way and certain aspects of everyday life
are taken for granted. We can conclude that trust can be the dominant
strategy to reduce complexity; but it is never the sole strategy.
Second, trust is not the preferred dominant strategy to reduce complexity
in all situations. When it is of the utmost importance that a certain goal
be reached or a particular protocol be followed, other strategies may
be more appropriate. For example, although trust is an important aspect
in securing a nuclear plant, it is not the dominant strategy. Using control
mechanisms such as security protocols that one cannot circumvent, or
using security badges and biometric logging systems that control the
access to different parts of the plant are better strategies to reduce the
risks associated with a nuclear installation. To a certain extent, there still
should be trust vested in the employees to follow the rules and in the
technological systems in place. However, trust is not the dominant
strategy in regulating the functioning of the nuclear plant.
Trust as a goal of cyber security
Trust is a fruitful strategy to deal with uncertainties. However, it is
generally not recognised as an adequate regulatory strategy for
cybersecurity. ‘Trust is good, security is better’, or so the saying goes
(Keymolen, Corien, and Raab 2012, 24). Cybersecurity tends to focus on
implementing rules and norms into systems to ensure the integrity of the
communication and the stable functioning of the infrastructure. Rather
than neutralising certain dangers, much work in the field of
cybersecurity is focused on removing them. Although it is widely
accepted by technicians and regulators alike that 100% security does not
exist (Chandler 2009, 126–127; Singer and Friedman 2013, 70;
Zedner 2003, 159), there still appears to be a tacit intention to come as
close as possible to that goal. Consequently, in the relation ‘trust-
cybersecurity’, security is seen as a necessary condition for trust but
trust is not considered as part of cybersecurity. Security measures are put
in place to enable a familiar world, necessary for trust to thrive. The
Internet should have a taken-for-granted character to its users. To put it
differently:
If users had to decide every time they use an online service or make use
of an application on their smartphone, whether or not to trust the
underpinning infrastructure of the Internet, the costs would simply
become too high. People would be overwhelmed by the uncertainty of
such a complex and unstable environment and probably reject it as a
valuable means of interaction. For trust to take place on the interpersonal
level – or between a user and an organization –, the environment,
whether or not online, should be familiar first, that is, stable and
predictable. (Keymolen 2016, 108)
We also see this ‘trust as a goal of cybersecurity’ perspective reflected in
policy documents (cf. Broeders 2015). For example, the Dutch National
Cyber Security Agenda (NSCA) states that it is an explicit goal of
cybersecurity to ensure a stable infrastructure to foster trust of users:
[W]e cannot afford to let cyber criminals erode the trust we have – and
need to have in the ICT infrastructure and the services it provides. Trust
is a conditio sine qua non for normal economic transactions and inter-
human communication. It is at the core of social order and economic
prosperity, and in an increasingly ICT-dependent world, the security of
ICT plays an ever more important role here (National Cyber Security
Research Agenda II: 6–7)
In the academic literature, Nissenbaum (2001) has coined the
perspective described above as ‘trust through security’. She also
recognises the strong emphasis on computer security as a means through
which trust online can be established. Nissenbaum discerns three
security mechanisms put in place to ensure trust online: access
control, transparency of identity, and surveillance. The first refers to
passwords, firewalls, and other measures to ensure that only those actors
who are allowed to enter – clients, citizens, members – do enter and
those who are not allowed – hackers, spies, criminals – are blocked. The
second mechanism is about making actors more identifiable. Even if a
user does not willingly provide personal data, all sorts of cryptographic
and profiling techniques are used to authenticate users. The basic idea is
that if your identity is known you will think twice before acting
maliciously, as you can be held accountable for your deeds. The third
mechanism is based on the idea that monitoring actions and behaviour
online can prevent bad things from happening or at least could help to
quickly and easily find the wrongdoers.
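These three mechanisms can be compressed into a short, purely hypothetical sketch (the account name, password, and policy below are assumptions made for illustration, not examples Nissenbaum gives): access control gates entry, each attempt is tied to an identifiable account, and every attempt is logged so it can be monitored afterwards.

# Illustrative sketch of the three mechanisms: access control, transparency
# of identity, and surveillance. Names and credentials are invented; a real
# system would use salted password hashing and proper log storage.
import hashlib, hmac, time

_USERS = {"alice": hashlib.sha256(b"correct horse").hexdigest()}  # access control
_AUDIT_LOG = []                                                   # surveillance

def login(username: str, password: str) -> bool:
    stored = _USERS.get(username)
    supplied = hashlib.sha256(password.encode()).hexdigest()
    ok = stored is not None and hmac.compare_digest(stored, supplied)
    # Transparency of identity + surveillance: the attempt is attributed to an
    # account name and recorded, so wrongdoers can later be held accountable.
    _AUDIT_LOG.append((time.time(), username, "success" if ok else "failure"))
    return ok

print(login("alice", "correct horse"))   # True  -> allowed in, attempt logged
print(login("mallory", "guess"))         # False -> blocked,    attempt logged
print(_AUDIT_LOG)
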
Nissenbaum is not against security. It is conspicuously clear that for
specific online activities such as banking or e-commerce, high security is
necessary. However, Nissenbaum (2004) warns against the seemingly
self-evident move to strive for more security in all online domains and
practices as it may endanger trust and the worthwhile practices trust
facilitates. Reducing complexity and uncertainties should never become
a permit to strive for overall security in a way that narrows down
freedom, which is nurtured by trust to develop (amongst others)
innovative, creative, or political practices. She states:
In a world that is complex and rich, the price of safety and certainty is
limitation. Online as off, the cost of surety – certainty and security – is
freedom and wide-ranging opportunity.

Trust as an implicit part of a cybersecurity strategy

It is beyond doubt that trust is an important goal for cybersecurity.
However, often under the radar, trust is also part of cybersecurity
measures. Trust may not be the dominant strategy, but it certainly
contributes to cybersecurity as a supporting, complexity-reducing
strategy. In current regulatory strategies for cybersecurity, we discern at
least three points where trust in fact plays a significant role: trust in
human actors, trust in the functioning of technical systems, and trust as
an attribute of risk assessment.

Trust in human actors

When taking a holistic approach in cybersecurity risk assessments (also
see Section 2), not only the environmental and technical (hardware,
software) aspects are considered. Human factors – defenders, users, and
attackers – become part of the analysis as well. Including human factors
in the risk assessment entails anticipating that certain expectations about
human behaviour may not be fulfilled. Whereas in risk assessments it is
a given that attackers act in unforeseen ways, it is sometimes overlooked
that this also applies to defenders and users. They too may behave in
deviant ways. For instance: defenders may circumvent protocols and
users may be unable to detect risky situations due to information
asymmetry. Notwithstanding their sometimes-precarious behaviour,
defenders and users are expected to take on certain cybersecurity tasks
and follow guidelines. For instance, defenders are supposed to have
situational awareness and to be able to report and/or react to the
information gathered; users are expected to take care of their passwords
and other credentials to secure their interactions online.
The human actor in cybersecurity strategies, therefore, confronts
regulators with a profound challenge: on the one hand, human actors
with their specific roles and tasks are part of the cybersecurity measures;
on the other hand, their actions can only be steered and controlled by
cybersecurity measures to a limited extent. To put it differently, in
cybersecurity, defenders and users are both a risk and a security enabler.
Trust helps reconcile these conflicting roles. In the end, when
cybersecurity assessments are developed and carried out, human actors
are trusted to display certain behavioural responses based on which
appropriate measures will be taken. Recently, Henshel et al. (2015) have
taken a first step to make this implicit trust in human actors more
tangible by developing a model that can be used to assess the
trustworthiness of the human actors in cybersecurity. Also from a
policy perspective, the awareness grows that a mere cybersecurity
strategy is not enough. Trust between the actors – often distributed over
the public–private sphere – who have to put these strategies into practice
is crucial as well (Heuvel and Baltink 2014).

Trust in systems

Techno-regulation is, as we have seen, an important regulatory strategy
in the cybersecurity domain. Generally, the assumption is that inscribing
rules into technology leads to high levels of compliance and thus is a
very effective form of regulation.
While it is obviously true that technological control mechanisms such as
cryptography, logging, and surveillance reduce complexity in the
cybersecurity domain, we should not be blind to the new complexity
these technologies at the same time also cause. First, where information
and communication technology (ICT) systems generally are designed in
such a way that they are easy to use, this does not necessarily imply they
are also easy to understand (Keymolen 2016, 152). If defenders have no
alternative means at their disposal to check whether what the computer tells
them is true, they simply have to rely on the results the system
displays. The system then becomes what van den Hoven (1998) has coined an
‘artificial authority’ in which users have to put their trust. The fact that
technology increasingly functions as a black box increases dependence
on system trust or confidence in systems (Luhmann 1988; Jalava 2003;
Henshel et al. 2015). Second, as these systems themselves are often too
complex to be completely understood, users not only have to trust the
system but the experts who maintain the system as well
(Luhmann 1979). Nowadays, parts of the system are delegated to
experts. Experts may also function as ‘facework commitments’
(Giddens 1990). They mediate the interaction of the users and the
systems. They become ‘the face’ of the system; making the system
accessible to the user to a certain extent. However, the control of experts
over the system also has its limits. Even computer experts acknowledge
that the system sometimes can perform in a way that goes beyond their
understanding (see, for example, Aupers 2002, who found that computer
experts often experience these systems as autonomous forces). New
systems are built upon old ones, continuous updates are needed and ICT
systems always remain ‘under construction’. All in all, this results in a
technological knot only few are able to unravel.
f) Related issues under International Law Jurisdiction.
Finally, trust is also implicitly part of cybersecurity risk assessment,
that is, the activity of defining which risks exist in cyberspace for a
given party, what their probability and impact may be, and which
measures and interventions ought to be taken to reduce these risks to
‘acceptable levels’ (whatever we define these to be). Just as trust is a
strategy to reduce complexity, a risk assessment is also a way of
reducing complexity. More specifically, it reduces the complexity of a
certain situation or event by translating it into risks and uncertainties. With
risk we refer to an active and explicit engagement with future threats.
When we talk about risks, we talk about the chance or probability that a
certain – often undesirable – event will occur. When we refer to
uncertainty, on the other hand, we face possible unpredictable outcomes
(Knight 1921; Keymolen 2016, 69). Here, trust also implicitly enters the
scene, as we have to trust that these unpredictable outcomes will not
hinder our actions. These uncertainties cannot be removed, but only be
neutralised. With trust these possible negative outcomes are moved to
the background and the focus is turned to this one desired future state of
affairs. However, risks and uncertainties cannot always be so easily
discerned. So-called systemic risks – risks that are embedded in larger
societal processes (Renn, Klinke, and van Asselt 2011, 235) – are highly
complex in nature, making it more difficult to clearly detect chains of
events. Moreover, the ambiguity surrounding these systemic risks often
results in several viewpoints which all seem legitimate (Renn, Klinke,
and van Asselt 2011, 235). Consequently, these systemic risks cannot
easily be calculated as a function of probability and effects (Renn et
al. 2011). In addition, the world does not only confront us with
uncertainty because of the variability and indeterminacy of social
processes; we are also aware that our knowledge of risk-determinacy,
impact, and causal effect is limited. To put it differently, even when we
talk about risks there is uncertainty because we might have doubts about
our own risk perception. We may be more convinced about some risks
than about others. Some authors, therefore, see uncertainty as an
attribute of risk (Van Asselt 2000). To conclude, it becomes clear that
risk assessments cannot eradicate complexity completely. Risk
assessments provide regulators and defenders with models they can use
to base their cybersecurity strategies on, but by default these models are
never as rich as the empirical case they represent and therefore there is
always the possibility of missing out on – what later turn out to be –
important aspects. Consequently, while risk assessment is a dominant
strategy in cybersecurity, it should not be neglected that in fact it
functions in an amalgam of interdependent complexity-reducing
strategies, of which trust is an important element.
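The ‘function of probability and effects’ mentioned above is, in its classical form, a one-line calculation: expected risk is the probability of an event multiplied by its impact. The sketch below, with invented figures, only illustrates why this breaks down for systemic risks, where neither factor can be estimated with any confidence.

# Classical risk scoring: risk = probability x impact (figures are invented).
threats = {
    # name:               (probability per year, impact in arbitrary cost units)
    "phishing incident":  (0.60, 20_000),
    "ransomware attack":  (0.10, 500_000),
    "systemic failure":   (None, None),    # systemic risk: factors not reliably known
}

for name, (probability, impact) in threats.items():
    if probability is None or impact is None:
        print(f"{name}: cannot be scored as probability x impact")
    else:
        print(f"{name}: expected annual loss = {probability * impact:,.0f}")
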

Trust as an explicit part of a cybersecurity strategy

After establishing that trust is not merely a goal of cybersecurity but is
also an implicit part of cybersecurity, we now want to go a step further
and see if and how trust can also be an explicit part of cybersecurity.
That trust can also be an explicit part of security strategies is probably
most evident in cases where security is ‘outsourced’. Especially after
9/11, we witness an increase in what has been referred to as the
introduction of deputy sheriffs (cf. Lahav and Guiraudon 2000;
Torpey 2000). Organisations or people are asked to take on a role as
deputy sheriff. They become the eyes and ears, as authorities cannot be
everywhere. For example, when the public is actively asked to call if
they see something suspicious, they become the deputy sheriffs of the
police. Or when different organisations intensify their cooperation this
can lead to the recruitment of professionals for assignments that lie
beyond their initial field of expertise (e.g. in the domain of youth care:
cf. Keymolen and Broeders 2013). On a similar note, Garland (2001)
speaks of responsibilisation. People who are not an actual expert or are
not trained in a certain field are nevertheless given responsibility to
report to the police or other authorities.
Responsibilisation and the introduction of deputy sheriffs can be seen as
a trust-based (cyber)security strategy. For example, when governments
call upon citizens to report suspicious situations, they trust citizens to be
aware and react quickly by calling the number and reporting what they have
seen. This strategy of course is not foolproof (if it was, it would not be a
trust-based strategy). As the trustor, the government is vulnerable by
relying on non-trained, perhaps not really interested citizens, because it
might miss crucial information. This vulnerability, however, does not
necessarily make it a bad strategy. Perhaps it is simply not possible
because of capacity issues or costs involved to have ears and eyes
everywhere and this trust-based approach might then be a good
alternative. The trustor might not receive as much meaningful
information as when there would be professionals on the ground, but
still more information than when there would be no one paying
attention.
When focusing on the domain of cybersecurity, we will now look into
two examples in which security is being outsourced and a trust-based
approach is being used to underpin the cybersecurity strategy. First, we
will look into security reward programs as initiated by Google and
Facebook. Second, we will look at peer-to-peer flagging of suspicious
behaviour on the platform of Airbnb.
g) Issues of enforcement.
As no software program or hardware device can be 100% foolproof, it is
absolutely necessary – from a cybersecurity perspective – to
continuously monitor and investigate the technologies in use to discover
bugs and weaknesses. As this is a very time-consuming task that,
moreover, can only be carried out by people with sufficient expertise, it
is not always feasible as a company to cover this in-house. Companies
such as Facebook and Google have therefore tackled this problem by
calling upon the tech community to help them detect security issues in
their services. In return for their efforts, people who find weaknesses and
report them to the companies involved receive a reward.
Several reward programs have been created to stimulate savvy people to engage
in this security quest. Google, for example, has set up the Google
Vulnerability Reward Program to stimulate the detection of
vulnerabilities in Google-owned websites and bugs in Google-developed
apps and extensions. Rewards may range from $100 for finding
information leakages to $20,000 for detecting bugs that give direct
access to Google servers (e.g. sandbox escapes). Other reward
programs Google runs are a patch reward program, a Chrome reward
program, and an Android reward program.
Facebook has created the Facebook Bug Bounty program, which is
more or less similar to Google’s. People who find security issues on
Facebook or on another member of the Facebook imperium, can file a
report, which Facebook will then investigate. If the bug is indeed a
security risk the discoverer will be rewarded.
All in all, we can conclude that in the above-described cases, companies
are trusting outsiders to be part of their cybersecurity strategy. On the
one hand this makes companies vulnerable, as they come to depend on
these outsiders, who they can (at best) nudge into participating by
promising rewards but who are, in the end, out of their direct control. On
the other hand, this trust-based approach may fill a gap in the
cybersecurity strategy of a company, as it would be too time-consuming
or costly to keep these matters in their own hands.
Another example of security being outsourced in a trust-based manner
is the reporting of suspicious behaviour of users by other peers. In the
sharing economy, which is characterized by cutting out the middle man
and bottom-up regulation (Botsman and Rogers 2010; Botsman 2012),
peer-to-peer security is an important part of the overall platform
security.
The most obvious tool that is set in place to ensure peer-to-peer security
is the possibility of rating the trustworthiness of other peers. By rating
the interactions with other peers, it becomes clear for the whole
community who is a secure partner to deal with and who is to be trusted
less.
Some platforms, such as Airbnb, go a step further and enable their users
to ‘flag’ other people. At any moment in the interaction on Airbnb,
users have the possibility to click on a flag when they believe something
is suspicious or inappropriate. Airbnb investigates each flag on a case-
by-case basis.
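A minimal sketch of such a peer-flagging mechanism is given below; the field names and workflow are assumptions made for illustration and do not describe Airbnb's actual implementation. The point is that a flag triggers no automatic sanction: it simply enters a queue for case-by-case human review, so the platform's response rests on trusting its users to report in the first place.

# Hypothetical sketch of peer-to-peer flagging: users report, staff review case by case.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Flag:
    reporter: str          # the platform trusts this non-expert user to pay attention
    flagged_user: str
    reason: str
    reviewed: bool = False

@dataclass
class FlagQueue:
    open_flags: List[Flag] = field(default_factory=list)

    def submit(self, reporter: str, flagged_user: str, reason: str) -> None:
        # No automatic sanction: the flag only enters a queue for human review.
        self.open_flags.append(Flag(reporter, flagged_user, reason))

    def next_case(self) -> Optional[Flag]:
        for f in self.open_flags:
            if not f.reviewed:
                f.reviewed = True          # handed to a reviewer, case by case
                return f
        return None

queue = FlagQueue()
queue.submit("guest42", "host17", "listing photos appear copied from another site")
print(queue.next_case())
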
Alternatively, Airbnb could also choose to screen all users of their
platform and monitor every interaction. And although they certainly do
have some (technological) security measures set in place (Chesky 2011;
Gannes 2013; Tanz 2014), by relying on their users to flag suspicious
events, they make their users part of their cybersecurity strategies.
Airbnb depends on their users to report fishy situations based on which
they then take action. Controlling every interaction intensively would
probably prevent dubious situations from happening at all; however, it
would come at the price of losing freedom (cf. Nissenbaum 2001), which
is key to the sharing economy.
e) Conclusion
This article started from the assessment that cybersecurity is an
increasingly important theme, and that, aside from the technical
community, it also increasingly receives attention from governments as
well. Governments in many Western countries are grappling with ways
in which they can impact cybersecurity in a positive way, predominantly
through the use of regulatory interventions. As we have argued in this
article, we find it striking that certain regulatory strategies have gained
widespread implementation, while others have largely been ignored. We
have discussed techno-regulation as a prime example of the former, and
trust of the latter. Underneath the reliance on techno-regulation, we have
argued, is a deep-seated – though probably false – assumption that using
techno-regulation leads to more control, and more control equals more
cybersecurity. Since the most important threat to cybersecurity comes
from intentional, highly savvy attackers that seek to exploit systems and
find vulnerabilities, the use of a strategy such as techno-regulation in all
likelihood has very little effect with respect to increasing security. After
all, these kinds of attackers make it their business to break
into any system, regardless of its design and the values or rules or norms
this design promotes. In contrast, techno-regulatory interventions may
have negative effects for end users, because they limit and shape their
action space. So while techno-regulatory interventions may promise
more control and more cybersecurity, we suspect that oftentimes these
benefits will not materialise, while end users pay a price in terms of their
freedom to act.
An overly strong reliance on techno-regulation as the dominant solution
for cybersecurity issues might be unwise, therefore, and this is why it
makes sense to also turn to other regulatory strategies. In this article, we
have focused on trust as a key candidate. While trust is currently
considered a key goal for cybersecurity, up until this point in time it is
not explicitly considered to be a potential candidate to help improve
cybersecurity as well. We have shown, however, that trust is already a
key, albeit implicit, element of many cybersecurity strategies (both
technical and non-technical). We have also shown that it can be used
very effectively as an explicit element of cybersecurity strategies, for
companies and potentially also for governments themselves.
MODULE-II:
Property in Cyber Space
Intellectual property encompasses the rights involving the original
expression of work and inventions in a tangible form that has the
capacity to be duplicated multiple times. The protection of such property
against unfair competition is essential to the rights of the human
endeavour in its intellectually innovative form. Resources and skills are
invested in intellectual property, where the creator stimulates research
and development to achieve economic development and technological
advancements for the country and society at large. Intellectual property
rights have established themselves as a strong tool for protecting
intellectual endeavours in an intangible medium.
With the onset of cyber technology, global markets have opened up to
copyright owners. This is true, but beyond these benefits the risks loom
large if the consequences of emerging trends are left unaddressed. Like
any desired invention, cyber technology has its pitfalls. Unlicensed use
of trademarks, trade names, service marks, images, code, audio, video
and literary content, through illegitimate practices such as
hyperlinking, framing, meta-tagging, spamming and more, has emerged as a
routine form of infringement in this new universe of intellect and skill
in the cyber domain.

This article analyses some of the legal challenges faced by intellectual
property in cyberspace, limiting its scope for the time being to some
major issues in the copyright domain. Copyright is threatened by the
emerging digital presence of technology, particularly the internet,
where digitally copied files can be distributed at great speed and at
virtually no cost.

Challenges faced in copyright interpretation

At the instant of being digitised, the various components of a
copyrighted work gain the capability to be moulded into a variety of
forms by way of linking, in-lining, and framing, with practically no
permission required. This poses a challenge to copyright owners'
ownership and control over their own innovations.

1. Deep linking is a challenge where the law, both international and
domestic, is thus far ambiguous. A balance of interests is at stake in
the process, namely the rights of the copyright owner vis-à-vis the free
flow of information accessible to society, which is the sine qua non for
the working of the internet. A legal issue regarding deep linking
emerges on reading Sections 14 and 51 of the Indian Copyright Act, 1957,
whereby it is not clear at exactly what stage the reproduction of the
copyrighted work is actually committed. Is it at the stage where a
person employs deep linking without a disclaimer that the links do not
constitute approval, or is it when the user visits the linked page at
their discretion through that deep link? Therein lies the source of the
ambiguity.

2. In-line linking poses a similar challenge. The link's creator merely
provides a visiting browser with a road map to retrieve the image(s),
which are then copied by the final user, who is unaware that their
browser is pulling different components from various websites. Here too,
the anomaly concerns the exact phase at which the copyrighted image(s)
are reproduced. The in-line link creator is not communicating or
distributing the work, though certainly aiding in it, or at least making
adaptations to the copyrighted piece, thereby coming under the ambit of
Section 14(a)(vi) of the Copyright Act, 1957. On the other hand, the
final user has no mens rea or even knowledge of any violation of
copyright and is thus caught unaware. Further, if Section 2(ff) of the
Copyright Act, 1957 is read into the interpretation, then stretching the
expression "by any means of display" to treat all content on websites as
communication to the public at large would go too far.
3. Framing too is a concern, and is subject to legal debate under the
interpretation of derivation and adaptation under Section 14(a)(vi) of
the Copyright Act, 1957. The framer cannot be held responsible for
copying, communicating, or distributing the copyrighted work; it merely
provides the user's browser with the modus operandi to retrieve the
copyrighted content, which is fetched from the owner's website by the
user's browser. The pertinent question of legal interpretation is
whether fetching some elements from the copyright owner's multimedia
setting and combining them with more to create one's own work amounts to
derivation or adaptation under the law. (The illustrative snippets after
this list show what these three techniques look like in practice.)
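
To make the practices discussed above concrete, the following sketch
shows, in minimal form, what deep linking, in-line linking, and framing
look like at the level of web markup. It is an illustration only: the
URLs are hypothetical placeholders, and the fragments are wrapped in a
small Python script so the example is self-contained and runnable.

    # Illustrative only: minimal HTML fragments for the three linking
    # techniques discussed above. All URLs are hypothetical placeholders.

    deep_link = (
        # A deep link points straight at an interior page of another
        # site, bypassing its home page.
        '<a href="https://example.com/articles/2021/report.html">Read the report</a>'
    )

    inline_link = (
        # An in-line (hotlinked) image: the visitor's browser fetches the
        # image directly from the copyright owner's server and renders it
        # as if it were part of the linking page.
        '<img src="https://example.com/images/photo.jpg" alt="Hotlinked image">'
    )

    framing = (
        # Framing: the owner's page is displayed inside a frame on the
        # framer's site, surrounded by the framer's own branding.
        '<iframe src="https://example.com/articles/2021/report.html"'
        ' width="800" height="600"></iframe>'
    )

    if __name__ == "__main__":
        for label, snippet in [("Deep link", deep_link),
                               ("In-line link", inline_link),
                               ("Framing", framing)]:
            print(label + ":")
            print("  " + snippet)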

Protection of intellectual property

It is clear from the above that the protection of data in cyberspace is
accompanied by myriad issues, but a common point of interpretation for
grading copyright infringement may be 'intent'. Dishonest intention
ought to lead to a more stringent reading of the legal and regulatory
provisions, so as to disallow copyright infringements that appear
permissible on the surface or that sit on the fence. Rampant violations
of copyright on the internet, affecting a large mass of people, call for
sharper legal protection. The responsibility is not restricted to
lawmakers or law-enforcement systems, but extends to copyright owners
and software companies. Copyright notices, and the display of licences
and warnings with limited permissions, need to be ensured on the
websites of copyright owners. Blanket prohibitions may no longer serve
the purpose, as technology is evolving rapidly and copyright may not
appear to be affected if judged cursorily, especially where serious
indirect violations are the emerging possibility.

There are abundant theories about protecting intellectual property
rights, but the common thread running through them is the need to guard,
reward, and kindle innovation in the creative works and initiatives of
the innovator. A sound regulatory framework and careful legal
interpretation are essential to safeguard creators from the malpractices
prevalent under the guise of internet freedom. The harmonisation of
international law with domestic laws is essential to bolstering
intellectual property rights in cyberspace, which is essentially and
practically without borders.

International law governing intellectual property in cyberspace

Berne Convention (1886), Madrid Agreement Concerning the International
Registration of Marks (1891), Hague Agreement Concerning the
International Registration of Industrial Designs (1925), Rome Convention
for the Protection of Performers, Producers of Phonograms and
Broadcasting Organizations (1961), Patent Cooperation Treaty (1970),
Agreement on Trade-Related Aspects of Intellectual Property Rights
(1994), World Intellectual Property Organization Copyright Treaty
(1996), World Intellectual Property Organization Performances and
Phonograms Treaty (1996), and Uniform Domain Name Dispute Resolution
Policy (1999) together form the international instruments that govern
intellectual property rights.

Berne Convention (1886) protects the rights in Literary and Artistic
Works, excluding daily news or press information. Special provisions
are provided for developing countries.

Rome Convention (1961) extended protection, for the first time, beyond
authors of creative works to the owners of related rights in physical
embodiments of intellectual property, namely performers, producers of
phonograms, and broadcasting organizations. It allows implementation
through domestic legislation enacted by member countries, with disputes
subject to the International Court of Justice unless the parties agree
to arbitration.
TRIPS (1994) is a multilateral agreement on intellectual property that
covers copyrights and related rights in the widest range.

WCT (1996) provides for the protection in international law of the
copyright of authors in their literary and artistic works.
WPPT (1996) provides for the protection in international law of the
rights of performers and producers of phonograms.

UDRP (1999) is for the resolution of disputes on registration and use of
internet domain names.

The international treaties still have a long way to go before they are
capable of protecting intellectual property rights on the ground and
within nations. Until the best practices of the treaties are given
practical effect in domestic law, standardised protection of
intellectual property rights will remain a distant goal.

Applicable laws in India


In India, Section 51(a)(ii) of the Copyright Act, 1957 is very clear
that exclusive rights are vested in the copyright owner and that
anything to the contrary constitutes infringement of copyright. In the
absence of any express provision for determining the liability of
internet service providers (ISPs), this provision may be interpreted so
that ISPs fall within the expressions 'any place' and 'permits for
profit', since they allow server facilities to store user data at their
business locations and make it available for communication, earning
profit through service charges and advertisements. Such an
interpretation, however, faces difficulty in gaining ground because the
added ingredients of 'knowledge' and 'due diligence' must be fulfilled
before an ISP can be held to have abetted infringement of copyright.

The Information Technology (Intermediaries Guidelines) Rules, 2011 and
Section 79 of the IT Act, 2000 grant online intermediaries conditional
safe harbour from liability, while leaving open their liability under
any other civil or criminal law. The IT Act, 2000 makes an intermediary
non-liable for third-party content hosted on its site, and the 2011
Guidelines set out the due-diligence framework an intermediary must
follow to avail itself of the exemption granted by Section 79. This
leaves considerable room for proactive judicial interpretation depending
on the facts of each case.

In Super Cassettes Industries Ltd. vs. Myspace Inc. & Anr., the Hon'ble
Court held the intermediary liable for allowing the viewing and sharing
of content in which Super Cassettes owned the intellectual property. The
case reflected judicial activism in granting precedence to the Indian
Copyright Act, 1957 over the safe harbour of the IT Act, 2000, by
reading Section 81 of the IT Act, 2000 in conjunction with, and over,
Section 79 of the IT Act, 2000.
Section 14 Copyrights Act, 1957 elucidates what constitutes exclusive
rights. The Hon’ble High Court of Calcutta had recently passed an ex-
parte injunction at the instance of the petitioners Phonographic
Performance Ltd. (PPL), Indian Music Industry (IMI), and Sagarika
Music Pvt. Ltd., to restrict an array of ISPs namely Dishnet Wireless
Ltd, Reliance Wimax Ltd, Hathway Cable & Datacom Pvt Ltd, Hughes
Communications Ltd India, Tata Teleservices (Maharashtra) Ltd,
Reliance Communications Infrastructure Ltd, Wipro Ltd, Sify
Technologies Ltd, Bharti Airtel Ltd, Vodafone India Ltd, and BG
Broadband India Pvt Ltd., from providing access to www.songs.pk.

It is clear that a Napster-like network in India would fall within the
ambit of this Section, and would be held liable for encroaching upon the
exclusive rights of the copyright owner through communication, or
facilitation of communication, to the public.

Section 51(b)(ii) of the Copyright Act, 1957 treats as infringement the
distribution of a work either for the purpose of trade or to such an
extent as to affect prejudicially the owner of the copyright. A P2P
network in India would thereby be distributing works in a manner
prejudicial to the interests of the copyright owner, even if the element
of trade or business is missing. Hon'ble Courts ought to be cautious
while granting the defence of fair dealing to copyright infringement
under Section 52 of the Copyright Act, 1957.
Jurisdiction of intellectual property issues in cyberspace

Cross-border disputes against private parties and hybrid infringements
are an emerging concern as the globe shrinks into a borderless
cyberspace. Courts face a constant dilemma as to which cases fall within
their jurisdiction to prescribe, adjudicate, and enforce. The objective
aspect of territorial jurisdiction is crucial here: a sovereign has the
power to adopt criminal law applicable to offences that take effect
within its borders even though the offending act was committed outside
them. Courts may also assume jurisdiction to prosecute a cyber offender
on the basis of universal jurisdiction, where the offending acts are
universally recognised as offences under international law.

A variety of theories and legal concepts have evolved in the recent past
to address this major impediment to the jurisdiction of courts to try
infringements of intellectual property in the open world of cyberspace;
the most noteworthy are the Minimum Contacts Test, the Effects Test, and
the Sliding Scale or 'Zippo' Test, all derived from US courts. The
Minimum Contacts Test applies where one or both parties are from outside
the court's territorial jurisdiction but there is an element of contact
with the state where the court is located. The Effects Test applies
where the consequences of the injury are felt in the particular state
where the court is located. The Sliding Scale Test determines personal
jurisdiction over non-resident online operators according to the degree
of interactivity and the exchange of commercial information through
their websites.
Section 75 IT Act, 2000 applies to offences committed outside India if
the conduct constituted an offence involving a computer, computer
system, or computer network located in India. Section 4 IPC,
1860 extends its jurisdiction to offences committed in any place outside
India targeting a computer resource located in India. Indian courts have
the legal tools to adjudicate against the infringers of intellectual property
in the cyber domain, and judicial activism followed with effective
jurisprudence would come much to the rescue of the intellectual
property owners.

Conclusion
With the continuing modernisation of technology, it is crucial to have a
meaningful legal discourse on the intellectual property issues that are
set to flood the cyber world, and solutions are critical to that
discourse. Traditional regulations built around intellectual property
protection are not enough when applied to cyberspace; more is needed,
because of the distinctive challenges the realm of cyberspace presents.

Border control measures, in the context of global trade, international
markets and e-commerce, are necessary to secure an environment for
import and export that is free from infringing intellectual property.
Technological protection measures, by way of encryption and
cryptography, digital signatures, and digital watermarks, are crucial to
the protection of copyrighted content in the digital environment.
Protection of rights management information, which helps identify the
work, its author, its owner, and the numbers or codes used to represent
such works, is equally imperative; a brief illustrative sketch of this
idea follows below. Solutions cannot be restricted to regulation and its
enforcement, but must stretch to encompass the proactiveness of
copyright owners, their successors and software companies, and, last but
not least, the perceptions of common people of 'fairness' and 'equity'.
Social engineering originates from people, and the solutions to such
manipulation would best come from the same people.
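
As a minimal sketch of the technological protection measures and rights
management information mentioned above, the following Python example
(standard library only, all names hypothetical) bundles a work's hash
with identifying metadata and a keyed integrity tag. A real deployment
would use public-key digital signatures (for example RSA or ECDSA) under
a proper key-management regime; the HMAC here merely illustrates the
idea of detecting tampering with the work or its rights information.

    # A minimal sketch of rights management information plus integrity
    # protection. Not a real design: production systems would rely on
    # public-key signatures, certificates and secure key storage.
    import hashlib
    import hmac
    import json

    SECRET_KEY = b"owner-signing-key"  # hypothetical key held by the rights owner

    def attach_rights_info(work: bytes, author: str, owner: str, work_id: str) -> dict:
        """Bundle the work's hash with rights metadata and a keyed tag."""
        record = {
            "work_sha256": hashlib.sha256(work).hexdigest(),
            "author": author,
            "owner": owner,
            "work_id": work_id,
        }
        payload = json.dumps(record, sort_keys=True).encode()
        record["tag"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
        return record

    def verify_rights_info(work: bytes, record: dict) -> bool:
        """Check that neither the work nor its rights metadata was altered."""
        claimed_tag = record.get("tag", "")
        unsigned = {k: v for k, v in record.items() if k != "tag"}
        payload = json.dumps(unsigned, sort_keys=True).encode()
        expected_tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
        return (hmac.compare_digest(claimed_tag, expected_tag)
                and record["work_sha256"] == hashlib.sha256(work).hexdigest())

    if __name__ == "__main__":
        work = b"Sample literary work in digital form"
        record = attach_rights_info(work, author="A. Author",
                                    owner="A. Owner", work_id="WRK-001")
        print(verify_rights_info(work, record))                  # True
        print(verify_rights_info(work + b" (modified)", record))  # False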

c) Berne Convention.
Berne Convention, Berne also spelled Bern, formally International
Convention for the Protection of Literary and Artistic Works,
international copyright agreement adopted by an international
conference in Bern (Berne) in 1886 and subsequently modified several
times (Berlin, 1908; Rome, 1928; Brussels, 1948; Stockholm, 1967; and
Paris, 1971). Signatories of the Convention constitute the Berne
Copyright Union.

The core of the Berne Convention is its provision that each of the
contracting countries shall provide automatic protection for works first
published in other countries of the Berne union and for unpublished
works whose authors are citizens of or resident in such other countries.

Each country of the union must guarantee to authors who are nationals
of other member countries the rights that its own laws grant to its
nationals. If the work has been first published in a Berne country but the
author is a national of a nonunion country, the union country may
restrict the protection to the extent that such protection is limited in the
country of which the author is a national. The works protected by the
Rome revision of 1928 include every production in the literary,
scientific, and artistic domain, regardless of the mode of expression,
such as books, pamphlets, and other writings; lectures, addresses,
sermons, and other works of the same nature; dramatic or dramatico-
musical works, choreographic works and entertainments in dumb show,
the acting form of which is fixed in writing or otherwise; musical
compositions; drawings, paintings, works of architecture, sculpture,
engraving, and lithography; illustrations, geographical charts, plans,
sketches, and plastic works relative to geography, topography,
architecture, or science. It also includes translations, adaptations,
arrangements of music, and other reproductions in an altered form of a
literary or artistic work, as well as collections of different works. The
Brussels revision of 1948 added cinematographic works and
photographic works. In addition, both the Rome and Brussels revisions
protect works of art applied to industrial purposes so far as the domestic
legislation of each country allows such protection.
In the Rome revision the term of copyright for most types of works
became the life of the author plus 50 years, but it was recognized that
some countries might have a shorter term. Both the Rome and the
Brussels revisions protected the right of making translations; but the
Stockholm Protocol and the Paris revision somewhat liberalized the
rights of translation, in a compromise between developing and
developed countries.

d) WIPO Copyright Convention

The WIPO Copyright Treaty (WCT) is a special agreement under the Berne
Convention which deals with the protection of works and the rights of
their authors in the digital environment. In addition to the rights
recognized by the Berne Convention, authors are granted certain economic
rights. The Treaty also deals with two subject matters to be protected
by copyright: (i) computer programs, whatever the mode or form of their
expression; and (ii) compilations of data or other material ("databases").

A treaty adopted in 1996 by member states of the World Intellectual
Property Organization (WIPO) under the Berne Convention for the
Protection of Literary and Artistic Works (Berne Convention) that
provides additional copyright protections to account for information
technology advancements.

Among other things, the treaty:

 Expands copyright protection to computer programs and the
expressive selection and arrangement of database data and
material.
 Subject to certain limitations and exceptions, provides authors with
certain exclusive rights concerning their literary and artistic works,
including:
 general distribution;
 rental rights for computer programs, sound recordings, and
cinematographic works; and
 the right to communicate the works to the public by wire or
wireless means (online transmission).
The treaty also requires that signatories:

 Protect against the:
 circumvention of technological copy-prevention systems for
copyrighted works; and
 unauthorized modification of electronic rights management
information.
 Adopt measures to ensure that the treaty is applied and that
enforcement procedures and remedies are available for any acts of
copyright infringement covered by the treaty.
The treaty was implemented in the US through the Digital Millennium
Copyright Act (DMCA).

Additional information, including the full text of the treaty, is available
on the WIPO website.

e) TRIPS Agreement

The areas of intellectual property covered by the TRIPS Agreement are: copyright and related
rights (i.e. the rights of performers, producers of sound recordings and
broadcasting organizations); trademarks including service
marks; geographical indications including appellations of
origin; industrial designs; patents including the protection of new
varieties of plants; the layout-designs of integrated circuits;
and undisclosed information including trade secrets and test data.

The three main features of the Agreement are:

 Standards. In respect of each of the main areas of intellectual
property covered by the TRIPS Agreement, the Agreement sets out
the minimum standards of protection to be provided by each
Member. Each of the main elements of protection is defined,
namely the subject-matter to be protected, the rights to be
conferred and permissible exceptions to those rights, and the
minimum duration of protection. The Agreement sets these
standards by requiring, first, that the substantive obligations of the
main conventions of the WIPO, the Paris Convention for the
Protection of Industrial Property (Paris Convention) and the Berne
Convention for the Protection of Literary and Artistic Works
(Berne Convention) in their most recent versions, must be
complied with. With the exception of the provisions of the Berne
Convention on moral rights, all the main substantive provisions of
these conventions are incorporated by reference and thus become
obligations under the TRIPS Agreement between TRIPS Member
countries. The relevant provisions are to be found in Articles 2.1
and 9.1 of the TRIPS Agreement, which relate, respectively, to the
Paris Convention and to the Berne Convention. Secondly, the
TRIPS Agreement adds a substantial number of additional
obligations on matters where the pre-existing conventions are
silent or were seen as being inadequate. The TRIPS Agreement is
thus sometimes referred to as a Berne and Paris-plus agreement.

 Enforcement. The second main set of provisions deals with
domestic procedures and remedies for the enforcement of
intellectual property rights. The Agreement lays down certain
general principles applicable to all IPR enforcement procedures. In
addition, it contains provisions on civil and administrative
procedures and remedies, provisional measures, special
requirements related to border measures and criminal procedures,
which specify, in a certain amount of detail, the procedures and
remedies that must be available so that right holders can effectively
enforce their rights.

 Dispute settlement. The Agreement makes disputes between WTO
Members about the respect of the TRIPS obligations subject to the
WTO's dispute settlement procedures.

In addition the Agreement provides for certain basic principles, such as
national and most-favoured-nation treatment, and some general rules to
ensure that procedural difficulties in acquiring or maintaining IPRs do
not nullify the substantive benefits that should flow from the Agreement.
The obligations under the Agreement will apply equally to all Member
countries, but developing countries will have a longer period to phase
them in. Special transition arrangements operate in the situation where a
developing country does not presently provide product patent protection
in the area of pharmaceuticals.

The TRIPS Agreement is a minimum standards agreement, which allows
Members to provide more extensive protection of intellectual property if
they so wish. Members are left free to determine the appropriate method
of implementing the provisions of the Agreement within their own legal
system and practice.

Certain general provisions

As in the main pre-existing intellectual property conventions, the basic
obligation on each Member country is to accord the treatment in regard
to the protection of intellectual property provided for under the
Agreement to the persons of other Members. Article 1.3 defines who
these persons are. These persons are referred to as “nationals” but
include persons, natural or legal, who have a close attachment to other
Members without necessarily being nationals. The criteria for
determining which persons must thus benefit from the treatment
provided for under the Agreement are those laid down for this purpose
in the main pre-existing intellectual property conventions of WIPO,
applied of course with respect to all WTO Members whether or not they
are party to those conventions. These conventions are the Paris
Convention, the Berne Convention, International Convention for the
Protection of Performers, Producers of Phonograms and Broadcasting
Organizations (Rome Convention), and the Treaty on Intellectual
Property in Respect of Integrated Circuits (IPIC Treaty).
Articles 3, 4 and 5 include the fundamental rules on national and most-
favoured-nation treatment of foreign nationals, which are common to all
categories of intellectual property covered by the Agreement. These
obligations cover not only the substantive standards of protection but
also matters affecting the availability, acquisition, scope, maintenance
and enforcement of intellectual property rights as well as those matters
affecting the use of intellectual property rights specifically addressed in
the Agreement. While the national treatment clause forbids
discrimination between a Member's own nationals and the nationals of
other Members, the most-favoured-nation treatment clause forbids
discrimination between the nationals of other Members. In respect of the
national treatment obligation, the exceptions allowed under the pre-
existing intellectual property conventions of WIPO are also allowed
under TRIPS. Where these exceptions allow material reciprocity, a
consequential exception to MFN treatment is also permitted (e.g.
comparison of terms for copyright protection in excess of the minimum
term required by the TRIPS Agreement as provided under Article 7(8) of
the Berne Convention as incorporated into the TRIPS Agreement).
Certain other limited exceptions to the MFN obligation are also provided
for.

The general goals of the TRIPS Agreement are contained in the
Preamble of the Agreement, which reproduces the basic Uruguay Round
negotiating objectives established in the TRIPS area by the 1986 Punta
del Este Declaration and the 1988/89 Mid-Term Review. These
objectives include the reduction of distortions and impediments to
international trade, promotion of effective and adequate protection of
intellectual property rights, and ensuring that measures and procedures
to enforce intellectual property rights do not themselves become barriers
to legitimate trade. These objectives should be read in conjunction with
Article 7, entitled “Objectives”, according to which the protection and
enforcement of intellectual property rights should contribute to the
promotion of technological innovation and to the transfer and
dissemination of technology, to the mutual advantage of producers and
users of technological knowledge and in a manner conducive to social
and economic welfare, and to a balance of rights and obligations. Article
8, entitled “Principles”, recognizes the rights of Members to adopt
measures for public health and other public interest reasons and to
prevent the abuse of intellectual property rights, provided that such
measures are consistent with the provisions of the TRIPS Agreement.

During the Uruguay Round negotiations, it was recognized that the
Berne Convention already, for the most part, provided adequate basic
standards of copyright protection. Thus it was agreed that the point of
departure should be the existing level of protection under the latest Act,
the Paris Act of 1971, of that Convention. The point of departure is
expressed in Article 9.1 under which Members are obliged to comply
with the substantive provisions of the Paris Act of 1971 of the Berne
Convention, i.e. Articles 1 through 21 of the Berne Convention (1971)
and the Appendix thereto. However, Members do not have rights or
obligations under the TRIPS Agreement in respect of the rights
conferred under Article 6bis of that Convention, i.e. the moral rights (the
right to claim authorship and to object to any derogatory action in
relation to a work, which would be prejudicial to the author's honour or
reputation), or of the rights derived therefrom. The provisions of the
Berne Convention referred to deal with questions such as subject-matter
to be protected, minimum term of protection, and rights to be conferred
and permissible limitations to those rights. The Appendix allows
developing countries, under certain conditions, to make some limitations
to the right of translation and the right of reproduction.

In addition to requiring compliance with the basic standards of the Berne
Convention, the TRIPS Agreement clarifies and adds certain specific
points.

Article 9.2 confirms that copyright protection shall extend to expressions
and not to ideas, procedures, methods of operation or mathematical
concepts as such.

Article 10.1 provides that computer programs, whether in source or
object code, shall be protected as literary works under the Berne
Convention (1971). This provision confirms that computer programs
must be protected under copyright and that those provisions of the Berne
Convention that apply to literary works shall be applied also to them. It
further confirms that the form in which a program exists, whether in
source or object code, does not affect the protection. The obligation to protect
computer programs as literary works means e.g. that only those
limitations that are applicable to literary works may be applied to
computer programs. It also confirms that the general term of protection
of 50 years applies to computer programs. Possible shorter terms
applicable to photographic works and works of applied art may not be
applied.
Article 10.2 clarifies that databases and other compilations of data or
other material shall be protected as such under copyright even where the
databases include data that as such are not protected under copyright.
Databases are eligible for copyright protection provided that they by
reason of the selection or arrangement of their contents constitute
intellectual creations. The provision also confirms that databases have to
be protected regardless of which form they are in, whether machine
readable or other form. Furthermore, the provision clarifies that such
protection shall not extend to the data or material itself, and that it shall
be without prejudice to any copyright subsisting in the data or material
itself.

Article 11 provides that authors shall have in respect of at least computer
programs and, in certain circumstances, of cinematographic works the
right to authorize or to prohibit the commercial rental to the public of
originals or copies of their copyright works. With respect to
cinematographic works, the exclusive rental right is subject to the so-
called impairment test: a Member is excepted from the obligation unless
such rental has led to widespread copying of such works which is
materially impairing the exclusive right of reproduction conferred in that
Member on authors and their successors in title. In respect of computer
programs, the obligation does not apply to rentals where the program
itself is not the essential object of the rental.
According to the general rule contained in Article 7(1) of the Berne
Convention as incorporated into the TRIPS Agreement, the term of
protection shall be the life of the author and 50 years after his death.
Paragraphs 2 through 4 of that Article specifically allow shorter terms in
certain cases. These provisions are supplemented by Article 12 of the
TRIPS Agreement, which provides that whenever the term of protection
of a work, other than a photographic work or a work of applied art, is
calculated on a basis other than the life of a natural person, such term
shall be no less than 50 years from the end of the calendar year of
authorized publication, or, failing such authorized publication within 50
years from the making of the work, 50 years from the end of the
calendar year of making.
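
As a small worked illustration of the Article 12 floor described above,
the following sketch (hypothetical years, simplified to whole calendar
years) computes the last year of the minimum term for a work whose
protection is not measured by the life of a natural person.

    # Hypothetical, simplified illustration of the TRIPS Article 12 floor:
    # at least 50 years from the end of the calendar year of authorized
    # publication, or, failing such publication within 50 years of making,
    # 50 years from the end of the calendar year of making.
    from typing import Optional

    def last_year_of_minimum_term(year_of_making: int,
                                  year_of_publication: Optional[int]) -> int:
        """Return the last calendar year of the minimum term under Article 12."""
        if (year_of_publication is not None
                and year_of_publication <= year_of_making + 50):
            return year_of_publication + 50  # counted from the publication year
        return year_of_making + 50           # counted from the year of making

    if __name__ == "__main__":
        print(last_year_of_minimum_term(1990, 1995))  # 2045: published in time
        print(last_year_of_minimum_term(1990, None))  # 2040: never published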

Article 13 requires Members to confine limitations or exceptions to
exclusive rights to certain special cases which do not conflict with a
normal exploitation of the work and do not unreasonably prejudice the
legitimate interests of the right holder. This is a horizontal provision that
applies to all limitations and exceptions permitted under the provisions
of the Berne Convention and the Appendix thereto as incorporated into
the TRIPS Agreement. The application of these limitations is permitted
also under the TRIPS Agreement, but the provision makes it clear that
they must be applied in a manner that does not prejudice the legitimate
interests of the right holder.

The provisions on protection of performers, producers of phonograms
and broadcasting organizations are included in Article 14. According to
Article 14.1, performers shall have the possibility of preventing the
unauthorized fixation of their performance on a phonogram (e.g. the
recording of a live musical performance). The fixation right covers only
aural, not audiovisual, fixations. Performers must also be in a position to
prevent the reproduction of such fixations. They shall also have the
possibility of preventing the unauthorized broadcasting by wireless
means and the communication to the public of their live performance.

In accordance with Article 14.2, Members have to grant producers of
phonograms an exclusive reproduction right. In addition to this, they
have to grant, in accordance with Article 14.4, an exclusive rental right
at least to producers of phonograms. The provisions on rental rights
apply also to any other right holders in phonograms as determined in
national law. This right has the same scope as the rental right in respect
of computer programs. Therefore it is not subject to the impairment test
as in respect of cinematographic works. However, it is limited by a so-
called grand-fathering clause, according to which a Member, which on
15 April 1994, i.e. the date of the signature of the Marrakesh Agreement,
had in force a system of equitable remuneration of right holders in
respect of the rental of phonograms, may maintain such system provided
that the commercial rental of phonograms is not giving rise to the
material impairment of the exclusive rights of reproduction of right
holders.

Broadcasting organizations shall have, in accordance with Article 14.3,
the right to prohibit the unauthorized fixation, the reproduction of
fixations, and the rebroadcasting by wireless means of broadcasts, as
well as the communication to the public of their television broadcasts.
However, it is not necessary to grant such rights to broadcasting
organizations, if owners of copyright in the subject-matter of broadcasts
are provided with the possibility of preventing these acts, subject to the
provisions of the Berne Convention.

The term of protection is at least 50 years for performers and producers
of phonograms, and 20 years for broadcasting organizations (Article
14.5).

Article 14.6 provides that any Member may, in relation to the protection
of performers, producers of phonograms and broadcasting organizations,
provide for conditions, limitations, exceptions and reservations to the
extent permitted by the Rome Convention.

The basic rule contained in Article 15 is that any sign, or any
combination of signs, capable of distinguishing the goods and services
of one undertaking from those of other undertakings, must be eligible for
registration as a trademark, provided that it is visually perceptible. Such
signs, in particular words including personal names, letters, numerals,
figurative elements and combinations of colours as well as any
combination of such signs, must be eligible for registration as
trademarks.

Where signs are not inherently capable of distinguishing the relevant
goods or services, Member countries are allowed to require, as an
additional condition for eligibility for registration as a trademark, that
distinctiveness has been acquired through use. Members are free to
determine whether to allow the registration of signs that are not visually
perceptible (e.g. sound or smell marks).
Members may make registrability depend on use. However, actual use of
a trademark shall not be permitted as a condition for filing an application
for registration, and at least three years must have passed after that filing
date before failure to realize an intent to use is allowed as the ground for
refusing the application (Article 15.3).

The Agreement requires service marks to be protected in the same way
as marks distinguishing goods (see e.g. Articles 15.1, 16.2 and 62.3).

The owner of a registered trademark must be granted the exclusive right
to prevent all third parties not having the owner's consent from using in
the course of trade identical or similar signs for goods or services which
are identical or similar to those in respect of which the trademark is
registered where such use would result in a likelihood of confusion. In
case of the use of an identical sign for identical goods or services, a
likelihood of confusion must be presumed (Article 16.1).

The TRIPS Agreement contains certain provisions on well-known
marks, which supplement the protection required by Article 6bis of the
Paris Convention, as incorporated by reference into the TRIPS
Agreement, which obliges Members to refuse or to cancel the
registration, and to prohibit the use of a mark conflicting with a mark
which is well known. First, the provisions of that Article must be applied
also to services. Second, it is required that knowledge in the relevant
sector of the public acquired not only as a result of the use of the mark
but also by other means, including as a result of its promotion, be taken
into account. Furthermore, the protection of registered well-known
marks must extend to goods or services which are not similar to those in
respect of which the trademark has been registered, provided that its use
would indicate a connection between those goods or services and the
owner of the registered trademark, and the interests of the owner are
likely to be damaged by such use (Articles 16.2 and 3).

Members may provide limited exceptions to the rights conferred by a
trademark, such as fair use of descriptive terms, provided that such
exceptions take account of the legitimate interests of the owner of the
trademark and of third parties (Article 17).

Initial registration, and each renewal of registration, of a trademark shall
be for a term of no less than seven years. The registration of a trademark
shall be renewable indefinitely (Article 18).

Cancellation of a mark on the grounds of non-use cannot take place
before three years of uninterrupted non-use has elapsed unless valid
reasons based on the existence of obstacles to such use are shown by the
trademark owner. Circumstances arising independently of the will of the
owner of the trademark, such as import restrictions or other government
restrictions, shall be recognized as valid reasons of non-use. Use of a
trademark by another person, when subject to the control of its owner,
must be recognized as use of the trademark for the purpose of
maintaining the registration (Article 19).

It is further required that use of the trademark in the course of trade shall
not be unjustifiably encumbered by special requirements, such as use
with another trademark, use in a special form, or use in a manner
detrimental to its capability to distinguish the goods or services
(Article 20).

Geographical indications are defined, for the purposes of the Agreement,
as indications which identify a good as originating in the territory of a
Member, or a region or locality in that territory, where a given quality,
reputation or other characteristic of the good is essentially attributable to
its geographical origin (Article 22.1). Thus, this definition specifies that
the quality, reputation or other characteristics of a good can each be a
sufficient basis for eligibility as a geographical indication, where they
are essentially attributable to the geographical origin of the good.
In respect of all geographical indications, interested parties must have
legal means to prevent use of indications which mislead the public as to
the geographical origin of the good, and use which constitutes an act of
unfair competition within the meaning of Article 10bis of the Paris
Convention (Article 22.2).

The registration of a trademark which uses a geographical indication in a
way that misleads the public as to the true place of origin must be
refused or invalidated ex officio if the legislation so permits or at the
request of an interested party (Article 22.3).

Article 23 provides that interested parties must have the legal means to
prevent the use of a geographical indication identifying wines for wines
not originating in the place indicated by the geographical indication.
This applies even where the public is not being misled, there is no unfair
competition and the true origin of the good is indicated or the
geographical indication is accompanied by expressions such as “kind”,
“type”, “style”, “imitation” or the like. Similar protection must be given
to geographical indications identifying spirits when used on spirits.
Protection against registration of a trademark must be provided
accordingly.

Article 24 contains a number of exceptions to the protection of
geographical indications. These exceptions are of particular relevance in
respect of the additional protection for geographical indications for
wines and spirits. For example, Members are not obliged to bring a
geographical indication under protection, where it has become a generic
term for describing the product in question (paragraph 6). Measures to
implement these provisions shall not prejudice prior trademark rights
that have been acquired in good faith (paragraph 5). Under certain
circumstances, continued use of a geographical indication for wines or
spirits may be allowed on a scale and nature as before (paragraph 4).
Members availing themselves of the use of these exceptions must be
willing to enter into negotiations about their continued application to
individual geographical indications (paragraph 1). The exceptions
cannot be used to diminish the protection of geographical indications
that existed prior to the entry into force of the TRIPS Agreement
(paragraph 3). The TRIPS Council shall keep under review the
application of the provisions on the protection of geographical
indications.

Article 25.1 of the TRIPS Agreement obliges Members to provide for
the protection of independently created industrial designs that are new or
original. Members may provide that designs are not new or original if
they do not significantly differ from known designs or combinations of
known design features. Members may provide that such protection shall
not extend to designs dictated essentially by technical or functional
considerations.

Article 25.2 contains a special provision aimed at taking into account the
short life cycle and sheer number of new designs in the textile sector:
requirements for securing protection of such designs, in particular in
regard to any cost, examination or publication, must not unreasonably
impair the opportunity to seek and obtain such protection. Members are
free to meet this obligation through industrial design law or through
copyright law.

Article 26.1 requires Members to grant the owner of a protected
industrial design the right to prevent third parties not having the owner's
consent from making, selling or importing articles bearing or embodying
a design which is a copy, or substantially a copy, of the protected design,
when such acts are undertaken for commercial purposes.

Article 26.2 allows Members to provide limited exceptions to the
protection of industrial designs, provided that such exceptions do not
unreasonably conflict with the normal exploitation of protected
industrial designs and do not unreasonably prejudice the legitimate
interests of the owner of the protected design, taking account of the
legitimate interests of third parties.

The duration of protection available shall amount to at least 10 years
(Article 26.3). The wording “amount to” allows the term to be divided
into, for example, two periods of five years.

The TRIPS Agreement requires Member countries to make patents
available for any inventions, whether products or processes, in all fields
of technology without discrimination, subject to the normal tests of
novelty, inventiveness and industrial applicability. It is also required that
patents be available and patent rights enjoyable without discrimination
as to the place of invention and whether products are imported or locally
produced (Article 27.1).

There are three permissible exceptions to the basic rule on patentability.
One is for inventions contrary to ordre public or morality; this explicitly
includes inventions dangerous to human, animal or plant life or health or
seriously prejudicial to the environment. The use of this exception is
subject to the condition that the commercial exploitation of the invention
must also be prevented and this prevention must be necessary for the
protection of ordre public or morality (Article 27.2).

The second exception is that Members may exclude from patentability
diagnostic, therapeutic and surgical methods for the treatment of humans
or animals (Article 27.3(a)).

The third is that Members may exclude plants and animals other than
micro-organisms and essentially biological processes for the production
of plants or animals other than non-biological and microbiological
processes. However, any country excluding plant varieties from patent
protection must provide an effective sui generis system of protection.
Moreover, the whole provision is subject to review four years after entry
into force of the Agreement (Article 27.3(b)).
The exclusive rights that must be conferred by a product patent are the
ones of making, using, offering for sale, selling, and importing for these
purposes. Process patent protection must give rights not only over use of
the process but also over products obtained directly by the process.
Patent owners shall also have the right to assign, or transfer by
succession, the patent and to conclude licensing contracts (Article 28).

Members may provide limited exceptions to the exclusive rights
conferred by a patent, provided that such exceptions do not unreasonably
conflict with a normal exploitation of the patent and do not unreasonably
prejudice the legitimate interests of the patent owner, taking account of
the legitimate interests of third parties (Article 30).

The term of protection available shall not end before the expiration of a
period of 20 years counted from the filing date (Article 33).

Members shall require that an applicant for a patent shall disclose the
invention in a manner sufficiently clear and complete for the invention
to be carried out by a person skilled in the art and may require the
applicant to indicate the best mode for carrying out the invention known
to the inventor at the filing date or, where priority is claimed, at the
priority date of the application (Article 29.1).

If the subject-matter of a patent is a process for obtaining a product, the
judicial authorities shall have the authority to order the defendant to
prove that the process to obtain an identical product is different from the
patented process, where certain conditions indicating a likelihood that
the protected process was used are met (Article 34).
Compulsory licensing and government use without the authorization of
the right holder are allowed, but are made subject to conditions aimed at
protecting the legitimate interests of the right holder. The conditions are
mainly contained in Article 31. These include the obligation, as a
general rule, to grant such licences only if an unsuccessful attempt has
been made to acquire a voluntary licence on reasonable terms and
conditions within a reasonable period of time; the requirement to pay
adequate remuneration in the circumstances of each case, taking into
account the economic value of the licence; and a requirement that
decisions be subject to judicial or other independent review by a distinct
higher authority. Certain of these conditions are relaxed where
compulsory licences are employed to remedy practices that have been
established as anticompetitive by a legal process. These conditions
should be read together with the related provisions of Article 27.1,
which require that patent rights shall be enjoyable without
discrimination as to the field of technology, and whether products are
imported or locally produced.

Article 35 of the TRIPS Agreement requires Member countries to
protect the layout-designs of integrated circuits in accordance with the
provisions of the IPIC Treaty (the Treaty on Intellectual Property in
Respect of Integrated Circuits), negotiated under the auspices of WIPO
in 1989. These provisions deal with, inter alia, the definitions of
“integrated circuit” and “layout-design (topography)”, requirements for
protection, exclusive rights, and limitations, as well as exploitation,
registration and disclosure. An “integrated circuit” means a product, in
its final form or an intermediate form, in which the elements, at least one
of which is an active element, and some or all of the interconnections are
integrally formed in and/or on a piece of material and which is intended
to perform an electronic function. A “layout-design (topography)” is
defined as the three-dimensional disposition, however expressed, of the
elements, at least one of which is an active element, and of some or all
of the interconnections of an integrated circuit, or such a three-
dimensional disposition prepared for an integrated circuit intended for
manufacture. The obligation to protect layout-designs applies to such
layout-designs that are original in the sense that they are the result of
their creators' own intellectual effort and are not commonplace among
creators of layout-designs and manufacturers of integrated circuits at the
time of their creation. The exclusive rights include the right of
reproduction and the right of importation, sale and other distribution for
commercial purposes. Certain limitations to these rights are provided
for.
In addition to requiring Member countries to protect the layout-designs
of integrated circuits in accordance with the provisions of the IPIC
Treaty, the TRIPS Agreement clarifies and/or builds on four points.
These points relate to the term of protection (ten years instead of eight,
Article 38), the applicability of the protection to articles containing
infringing integrated circuits (last sub clause of Article 36) and the
treatment of innocent infringers (Article 37.1). The conditions in Article
31 of the TRIPS Agreement apply mutatis mutandis to compulsory or
non-voluntary licensing of a layout-design or to its use by or for the
government without the authorization of the right holder, instead of the
provisions of the IPIC Treaty on compulsory licensing (Article 37.2).

The TRIPS Agreement requires undisclosed information -- trade secrets
or know-how -- to benefit from protection. According to Article 39.2,
the protection must apply to information that is secret, that has
commercial value because it is secret and that has been subject to
reasonable steps to keep it secret. The Agreement does not require
undisclosed information to be treated as a form of property, but it does
require that a person lawfully in control of such information must have
the possibility of preventing it from being disclosed to, acquired by, or
used by others without his or her consent in a manner contrary to honest
commercial practices. “Manner contrary to honest commercial
practices” includes breach of contract, breach of confidence and
inducement to breach, as well as the acquisition of undisclosed
information by third parties who knew, or were grossly negligent in
failing to know, that such practices were involved in the acquisition.

The Agreement also contains provisions on undisclosed test data and
other data whose submission is required by governments as a condition
of approving the marketing of pharmaceutical or agricultural chemical
products which use new chemical entities. In such a situation the
Member government concerned must protect the data against unfair
commercial use. In addition, Members must protect such data against
disclosure, except where necessary to protect the public, or unless steps
are taken to ensure that the data are protected against unfair commercial
use.

Control of anti-competitive practices in contractual licences


Article 40 of the TRIPS Agreement recognizes that some licensing
practices or conditions pertaining to intellectual property rights which
restrain competition may have adverse effects on trade and may impede
the transfer and dissemination of technology (paragraph 1). Member
countries may adopt, consistently with the other provisions of the
Agreement, appropriate measures to prevent or control practices in the
licensing of intellectual property rights which are abusive and anti-
competitive (paragraph 2). The Agreement provides for a mechanism
whereby a country seeking to take action against such practices
involving the companies of another Member country can enter into
consultations with that other Member and exchange publicly available
non-confidential information of relevance to the matter in question and
of other information available to that Member, subject to domestic law
and to the conclusion of mutually satisfactory agreements concerning the
safeguarding of its confidentiality by the requesting Member (paragraph
3). Similarly, a country whose companies are subject to such action in
another Member can enter into consultations with that Member.
MODULE-III:

Electronic Commerce

E-commerce (electronic commerce) is the buying and selling of goods
and services, or the transmitting of funds or data, over an electronic
network, primarily the internet. These business transactions occur as
business-to-business (B2B), business-to-consumer (B2C),
consumer-to-consumer (C2C) or consumer-to-business (C2B). The terms e-commerce and e-
business are often used interchangeably. The term e-tail is also
sometimes used in reference to the transactional processes that make up
online retail shopping.

In the last decade, widespread use of e-commerce platforms such as
Amazon and eBay has contributed to substantial growth in online retail.
In 2007, e-commerce accounted for 5.1% of total retail sales; in 2019, e-
commerce made up 16.0%.

How does e-commerce work?


E-commerce is powered by the internet: customers access an online store
to browse and place orders for products or services via their own
devices.

When an order is placed, the customer's web browser will communicate
back and forth with the server hosting the online store website. Data
pertaining to the order will then be relayed to a central computer known
as the order manager -- then forwarded to databases that manage
inventory levels, a merchant system that manages payment information
(using applications such as PayPal), and a bank computer -- before
circling back to the order manager. This is to make sure that store
inventory and customer funds are sufficient for the order to be
processed. After the order is validated, the order manager will notify the
store's web server, which will then display a message notifying the
customer that their order has been successfully processed. The order
manager will then send order data to the warehouse or fulfillment
department, in order for the product or service to be successfully
dispatched to the customer. At this point tangible and/or digital products
may be shipped to a customer, or access to a service may be granted.
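
The order flow described above can be summarised in a short sketch. The
class and method names below are hypothetical stand-ins for the
inventory database, the merchant/payment system, and the order manager;
this illustrates the sequence of checks, not any real platform's API.

    # Hypothetical sketch of the order-validation flow described above:
    # the order manager checks inventory and payment before the order is
    # confirmed and handed over to the warehouse / fulfilment department.

    class InventoryDB:
        def __init__(self, stock: dict):
            self.stock = stock  # e.g. {"sku-1": 2}

        def reserve(self, sku: str, qty: int) -> bool:
            """Reserve stock if enough units are available."""
            if self.stock.get(sku, 0) >= qty:
                self.stock[sku] -= qty
                return True
            return False

        def release(self, sku: str, qty: int) -> None:
            self.stock[sku] = self.stock.get(sku, 0) + qty

    class PaymentGateway:
        """Stand-in for the merchant system / bank authorisation round trip."""
        def charge(self, customer_id: str, amount: float) -> bool:
            return amount > 0  # trivially "authorise" any positive amount

    class OrderManager:
        def __init__(self, inventory: InventoryDB, payments: PaymentGateway):
            self.inventory = inventory
            self.payments = payments

        def place_order(self, customer_id: str, sku: str, qty: int,
                        amount: float) -> str:
            if not self.inventory.reserve(sku, qty):
                return "rejected: insufficient stock"
            if not self.payments.charge(customer_id, amount):
                self.inventory.release(sku, qty)  # undo the reservation
                return "rejected: payment failed"
            # On success, the order would be forwarded to fulfilment.
            return "confirmed"

    if __name__ == "__main__":
        manager = OrderManager(InventoryDB({"sku-1": 2}), PaymentGateway())
        print(manager.place_order("cust-42", "sku-1", 1, 19.99))  # confirmed
        print(manager.place_order("cust-42", "sku-1", 5, 99.95))  # rejected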

Platforms that host e-commerce transactions include online marketplaces
that sellers simply sign up for, such as Amazon.com; software-as-a-service
(SaaS) tools that allow customers to 'rent' an online store's
infrastructure; and open source tools that companies manage through
in-house development.

Types of e-commerce

Business-to-business (B2B) e-commerce refers to the electronic
exchange of products, services or information between businesses rather
than between businesses and consumers. Examples include online
directories and product and supply exchange websites that allow
businesses to search for products, services and information and to
initiate transactions through e-procurement interfaces.

In 2017, Forrester Research predicted that the B2B e-commerce market
would top $1.1 trillion in the U.S. by 2021, accounting for 13% of all
B2B sales in the nation.

Business-to-consumer (B2C) is the retail part of e-commerce on the
internet. It is when businesses sell products, services or information
directly to consumers. The term was popular during the dot-com
boom of the late 1990s, when online retailers and sellers of goods were a
novelty.

Today, there are innumerable virtual stores and malls on the internet
selling all types of consumer goods. The most recognized example of
these sites is Amazon, which dominates the B2C market.
Consumer-to-consumer (C2C) is a type of e-commerce in which
consumers trade products, services and information with each other
online. These transactions are generally conducted through a third party
that provides an online platform on which the transactions are carried
out.

Online auctions and classified advertisements are two examples of C2C


platforms, with eBay and Craigslist being two of the most popular of
these platforms. Because eBay is a business, this form of e-commerce
could also be called C2B2C -- consumer-to-business-to-consumer.

Consumer-to-business (C2B) is a type of e-commerce in which


consumers make their products and services available online for
companies to bid on and purchase. This is the opposite of the traditional
commerce model of B2C.

A popular example of a C2B platform is a market that sells royalty-free


photographs, images, media and design elements, such as iStock.
Another example would be a job board.

Business-to-administration refers to transactions conducted online


between companies and public administration or government bodies.
Many branches of government are dependent on e-services or products
in one way or another, especially when it comes to legal documents,
registers, social security, fiscal matters and employment. Businesses can supply
these electronically. B2A services have grown considerably in recent
years as investments have been made in e-government capabilities.

Consumer-to-administration (C2A) refers to transactions conducted


online between individual consumers and public administration or
government bodies. The government rarely buys products or services
from citizens, but individuals frequently use electronic means in the
following areas:

 Education. Disseminating information, distance learning/online


lectures, etc.
 Social security. Distributing information, making payments, etc.
 Taxes. Filing tax returns, making payments, etc.
 Health. Making appointments, providing information about
illnesses, making health services payments, etc.

Mobile e-commerce (M-commerce) is a type of e-commerce on the rise


that features online sales transactions made using mobile devices, such
as smartphones and tablets. M-commerce includes mobile shopping,
mobile banking and mobile payments. Mobile chatbots also provide e-
commerce opportunities to businesses, allowing consumers to complete
transactions with companies via voice or text conversations.

Advantages and disadvantages of e-commerce

Benefits of e-commerce include its around-the-clock availability, the


speed of access, the wide availability of goods and services for the
consumer, easy accessibility and international reach.

 Availability. Aside from outages or scheduled maintenance, e-


commerce sites are available 24x7, allowing visitors to browse and
shop at any time. Brick-and-mortar businesses tend to open for a
fixed number of hours and may even close entirely on certain days.
 Speed of access. While shoppers in a physical store can be slowed
by crowds, e-commerce sites run quickly, limited mainly by compute
and bandwidth constraints on the consumer's device and on the
e-commerce site. Product pages and shopping cart pages load
in a few seconds or less. An e-commerce transaction can comprise
a few clicks and take less than five minutes.
 Wide availability. Amazon's first slogan was "Earth's Biggest
Bookstore." It could make this claim because it was an e-
commerce site and not a physical store that had to stock each book
on its shelves. E-commerce enables brands to make a wide array of
products available, which are then shipped from a warehouse after
a purchase is made. Customers will likely have more success
finding what they want.
 Easy accessibility. Customers shopping a physical store may have
a hard time determining which aisle a particular product is in. In e-
commerce, visitors can browse product category pages and use the
site search feature to find the product immediately.
 International reach. Brick-and-mortar businesses sell to
customers who physically visit their stores. With e-commerce,
businesses can sell to any customer who can access the web. E-
commerce has the potential to extend a business' customer base.
 Lower cost. Pure-play e-commerce businesses avoid the costs
associated with physical stores, such as rent, inventory and
cashiers, although they may incur shipping and warehouse costs.
 Personalization and product recommendations. E-commerce
sites can track visitors' browsing, search and purchase history. They
can use this data to present useful and personalized product
recommendations and to obtain valuable insights about target
markets. Examples include the sections of Amazon product pages
labeled "Frequently bought together" and "Customers who viewed
this item also viewed" (see the sketch below).
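
The "Frequently bought together" style of recommendation can be approximated with simple co-purchase counting. A minimal sketch, assuming a small hypothetical list of past orders; the data and names are invented and this is not Amazon's actual algorithm.

```python
from collections import Counter
from itertools import combinations

# Hypothetical purchase history: each order is the set of product IDs bought together.
orders = [{"phone", "case"}, {"phone", "charger"}, {"phone", "case", "charger"}]

# Count how often each pair of products appears in the same order.
co_bought = Counter()
for order in orders:
    for a, b in combinations(sorted(order), 2):
        co_bought[(a, b)] += 1

def frequently_bought_with(item: str, top_n: int = 2):
    """Return the products most often bought together with the given item."""
    scores = Counter()
    for (a, b), n in co_bought.items():
        if item == a:
            scores[b] += n
        elif item == b:
            scores[a] += n
    return [product for product, _ in scores.most_common(top_n)]

print(frequently_bought_with("phone"))  # e.g. ['case', 'charger']
```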

The perceived disadvantages of e-commerce include sometimes


limited customer service, consumers not being able to see or touch a
product prior to purchase and the wait time for product shipping.

 Limited customer service. If a customer has a question or issue in


a physical store, he or she can see a clerk, cashier or store manager
for help. In an e-commerce store, customer service may be limited:
The site may only provide support during certain hours of the day,
or a call to a customer service phone number may keep the
customer on hold.
 Not being able to touch or see. While images on a webpage can
provide a good sense about a product, it's different from
experiencing it "directly," such as playing music on speakers,
assessing the picture quality of a television or trying on a shirt or
dress. E-commerce can lead consumers to receive products that
differ from their expectations, which leads to returns. In some
scenarios, the customer bears the burden for the cost of shipping
the returned item to the retailer.
 Wait time. If a customer sees an item that he or she likes in a
store, the customer pays for it and then goes home with it. With e-
commerce, there is a wait time for the product to be shipped to the
customer's address. Although shipping windows are decreasing as
next day delivery is now quite common, it's not instantaneous.
 Security. Skilled hackers can create authentic-looking websites
that claim to sell well-known products. Instead, the site sends
customers counterfeit or imitation versions of those products -- or,
simply collects customers' credit card information. Legitimate e-
commerce sites also carry risk, especially when customers store
their credit card information with the retailer to make future
purchases easier. If the retailer's site is hacked, hackers may come
into the possession of customers' credit card information.

E-commerce applications
E-commerce is conducted using a variety of applications, such as Email,
online catalogs and shopping carts, Electronic Data Interchange (EDI),
the file transfer protocol, web services and mobile devices. This includes
B2B activities and outreach, such as using email for unsolicited ads,
usually viewed as spam, to consumers and other business prospects, as
well as sending out e-newsletters to subscribers and SMS texts to mobile
devices. More companies now try to entice consumers directly online,
using tools such as digital coupons, social media marketing and targeted
advertisements.

The rise of e-commerce has forced IT personnel to move beyond


infrastructure design and maintenance to consider numerous customer-
facing aspects, such as consumer data privacy and security. When
developing IT systems and applications to accommodate e-commerce
activities, data governance-related regulatory compliance mandates,
personally identifiable information privacy rules and information
protection protocols must be considered.
E-commerce platforms and vendors

An e-commerce platform is a tool that is used to manage an e-commerce


business. E-commerce platform options exist for clients ranging in size
from small businesses to large enterprises. These e-commerce platforms
include online marketplaces such as Amazon and eBay, which simply
require signing up for user accounts and involve little to no IT implementation.
Another e-commerce platform model is SaaS, where store owners can
subscribe to "rent" space in a cloud-hosted service that does not require
in-house development or on-premises infrastructure. Other e-commerce
platforms may come in the form of open source platforms that require a
hosting environment (cloud or on premises), complete manual
implementation and maintenance.

A few examples of e-commerce marketplace platforms include:

 Amazon
 eBay
 Walmart Marketplace
 Chewy
 Wayfair
 Newegg
 Alibaba
 Etsy
 Overstock
 Rakuten

Vendors offering e-commerce platform services for clients hosting their


own online store sites include:

 Shopify
 WooCommerce
 Magento
 Squarespace
 BigCommerce
 Ecwid
 Salesforce Commerce Cloud (B2B and B2C options)
 Oracle SuiteCommerce

Government regulations for e-commerce


In the United States, the Federal Trade Commission (FTC) and
the Payment Card Industry (PCI) Security Standards Council are among
the primary agencies that regulate e-commerce activities. The FTC
monitors activities such as online advertising, content marketing and
customer privacy, while the PCI Security Standards Council develops
standards and rules, including PCI Data Security Standard compliance,
which outlines procedures for the proper handling and storage of
consumers' financial data.

To ensure the security, privacy and effectiveness of e-commerce,


businesses should authenticate business transactions, control access to
resources such as webpages for registered or selected users, encrypt
communications, and implement security technologies, such as the
Secure Sockets Layer and two-factor authentication.
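
As a rough illustration of the two-factor authentication mentioned above, the following sketch computes a time-based one-time password (TOTP, RFC 6238) using only the Python standard library. The base32 secret shown is a placeholder; real deployments would also enforce TLS (the successor to Secure Sockets Layer) for every request, which is handled by the web server rather than by application code like this.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute a time-based one-time password (RFC 6238) from a base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval          # 30-second time step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # placeholder secret; prints a 6-digit code
```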

History of e-commerce

The beginnings of e-commerce can be traced to the 1960s, when


businesses started using EDI to share business documents with other
companies. In 1979, the American National Standards Institute
developed ASC X12 as a universal standard for businesses to share
documents through electronic networks.

After the number of individual users sharing electronic documents with
each other grew in the 1980s, the rise of eBay and Amazon in the 1990s
revolutionized the e-commerce industry. Consumers can now purchase an
endless variety of items online, both from pure e-tailers and from
traditional brick-and-mortar stores that have added e-commerce
capabilities. Today, almost all retail companies are integrating online
business practices into their business models.
Disruption to physical retail

Given the large rise in e-commerce in recent years, many analysts,


economists and consumers have debated whether the online B2C market
will soon make physical, brick-and-mortar stores obsolete. There is little
question that online shopping is growing at a significant rate.

Research from BigCommerce has found that Americans are about


evenly split on online versus offline, traditional retail shopping, with
51% of Americans preferring e-commerce and 49% preferring physical
stores. However, 67% of millennials prefer shopping online over offline.
According to Forbes, 40% of millennials are also already using voice
assistants to make purchases, with that number expected to surpass 50%
by 2020.

An example of the impact e-commerce has had on physical retail is the


post-Thanksgiving Black Friday and Cyber Monday shopping days in
the United States. According to Rakuten Marketing data, in 2017, Cyber
Monday, which features sales that are exclusively online, saw 68%
higher revenues than Black Friday, which is traditionally the biggest
brick-and-mortar shopping day of the year.

According to data from Shopper Trak in 2017, physical store traffic on


Black Friday declined by 1% year over year, and the two-day
Thanksgiving-Black Friday period saw a 1.6% decline in traffic. Nearly
40% of sales on Black Friday came via a mobile device, up nearly 10%
from the previous year, an indication that e-commerce is becoming m-
commerce. Along with physical retail, e-commerce is transforming
supply chain management practices among businesses, as distribution
channels become increasingly digitized.

MODULE-IV:

Cyber Crime
As technology advances, man has become reliant on the Internet for all
of his requirements. The internet has provided man with quick access to
everything while seated in one location. Every imaginable thing that
man can think of can be done through the medium of the internet,
including social networking, online shopping, data storage, gaming,
online schooling, and online jobs. The internet is used in nearly all
aspects of life. As the internet and its associated advantages grew in
popularity, so did the notion of cybercrime. Different types of
cybercrime are committed.

Cybercrime is defined as illegal conduct in which a computer is utilised


as a tool, a target, or both. Cybercrime is a rapidly developing field of
crime these days, and it is increasing at a rapid rate. Criminals have
discovered that the cyber world makes it easier to commit crimes, thus it
has become a major issue that adequate laws and infrastructure be put in
place to regulate cybercrime. Women and children are the primary
victims of such offences. According to studies, the number of social
network users in India has risen dramatically from 202.7 million in 2016
to 326.1 million in 2019 and is predicted to rise to 390 million in 2021,
with women making up 40% of users and men 60%.

However, only 25% of the women in this group are even aware of
cyber-crime. The majority of cyber-crime goes unreported owing to
the victim's reluctance and shyness, as well as the fear of damage to
the family's reputation. Frequently, the victim believes she is to blame
for the crime. Women sometimes decline to report crimes because of
these concerns, which only emboldens the perpetrator further.
Basically, cybercrime is on the rise because it is regarded as the
easiest way to commit a crime; people who have considerable computer
expertise but are unable to find work, or who lack money, turn to it and
begin abusing the internet. It is simple for cyber criminals to gain access
to data and then use it to withdraw money, blackmail victims, or commit
other crimes.
Cyber criminals are also emboldened because they perceive little risk of
being caught and are so well-versed in networking systems that they
believe they are safe. They are often the ones who create phoney
accounts and then commit crimes through them.

Cyber crime takes many different forms, including fraud, stalking,
harassment, morphing, bullying, email spoofing, defamation, hacking
and much more.

Solutions to Fight Cyber Crime-


Computer systems must be protected- Firewall and antivirus
applications are vital security software. Make sure all of your
programmes are up to date and patched. Consumers only update their
security software every 8.5 months, according to a Get Safe Online poll.

Make sure your passwords are strong- Passwords of at least eight


characters and a mix of upper and lower case letters, numbers, and
symbols are considered strong. Keep your passwords safe and don’t use
the same one for all of your services and accounts. Passwords should be
changed every 90 days.
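
The password rules above translate directly into a small check. A minimal sketch; the rules encoded here are only the ones stated in this paragraph, not a complete password policy.

```python
import re

def is_strong(password: str) -> bool:
    """At least eight characters, with upper case, lower case, digits and symbols."""
    return (len(password) >= 8
            and re.search(r"[A-Z]", password) is not None
            and re.search(r"[a-z]", password) is not None
            and re.search(r"\d", password) is not None
            and re.search(r"[^A-Za-z0-9]", password) is not None)

print(is_strong("password"))    # False: no upper case, digit or symbol
print(is_strong("P@ssw0rd!9"))  # True: satisfies all four rules
```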

Wi-Fi in public places should be treated with caution- When using public
Wi-Fi, never make online payments, email personal information, or enter
important account passwords. Cyber thieves establish networks that
appear to be free internet but give them access to your personal
information.

Unsolicited emails and SMS communications should be avoided-


Never click on a link, picture, or video sent to you by an unknown
source. Check for spelling errors, bad language, unusual phrasing, and
urgent requests for money or action to ensure that emails are authentic.
Verify correspondence by personally contacting the sender. Also, be
sure that the websites you're looking at are reputable. Malicious
websites may appear identical to legitimate sites; however, the URL is
frequently misspelt or uses a different domain.
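
One way to apply this advice about misspelt or unfamiliar domains is to compare a link's host against an allow-list of sites you actually use. A minimal sketch; the trusted domains listed here are hypothetical examples.

```python
from urllib.parse import urlparse

# Hypothetical allow-list of domains the user actually banks or shops with.
TRUSTED_DOMAINS = {"example-bank.com", "amazon.in"}

def looks_suspicious(url: str) -> bool:
    """Flag links whose host is not a trusted domain or one of its subdomains."""
    host = (urlparse(url).hostname or "").lower()
    return not any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

print(looks_suspicious("http://examp1e-bank.com/login"))  # True: digit '1' in place of 'l'
print(looks_suspicious("https://www.amazon.in/deals"))    # False: subdomain of a trusted site
```
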
Protect personal information on social media- Cybercriminals utilise
social media to gather personal information that they may subsequently
exploit in phishing schemes. Before you provide personal information
like your name, home address, phone number, or email address, think
again.
Limit physical access to critical information by turning off your
computer while you’re not using it. To keep private data safe, lock
mobile devices and encrypt confidential data. Limit who in your
workplace has access to certain network drives.
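
Encrypting confidential data, as suggested above, can be done with an authenticated symmetric cipher. A minimal sketch using the third-party Python 'cryptography' package (assumed to be installed); in practice, the part that matters most is key management, i.e. storing the key separately from the data.

```python
from cryptography.fernet import Fernet  # third-party 'cryptography' package, assumed installed

key = Fernet.generate_key()   # keep this key separate from the encrypted data
cipher = Fernet(key)

token = cipher.encrypt(b"confidential customer record")  # ciphertext, safe to store
print(cipher.decrypt(token))  # original bytes come back only for holders of the key
```
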
All gadgets should be used with caution. Phones and other mobile
devices are popular targets. Always be mindful of where your mobile
devices are; they should never be left unattended and visible.

Review your bank and credit card accounts on a regular basis.


According to research, discovering identity theft and online crimes early
reduces their impact.

Do not hoard old computers or unneeded digital data- Keep digital data
organised and up to date, and delete files on a regular basis. Dispose of
old or unneeded computer hard drives in a secure manner at your office.

Legal Remedies

The Information Technology Act, 2000, subsequently revised in 2008,
was enacted to set limits on these types of attackers when it comes to
committing cyber-crimes. For laypeople, it is known as the cyber-law.
This Act establishes sanctions and compensation for offences
involving technology. When a person is a victim of a cybercrime, he has
the option of going to court to pursue legal action against the perpetrator.

The victim has the right to approach the court to claim compensation for
the wrong done to him under sections 43 and 43A of the Information
Technology Act, 2000, which cover penalties and compensation for
contraventions such as "damage to the computer, computer system, or
computer networks, etc." Any body corporate that deals with sensitive
data or information, or maintains it on its own or on behalf of others, and
negligently compromises such data or information will be liable under
section 43A and will be required to pay compensation at the court's
discretion.
Section 65 of the Act covers the punishment for the offences which
involve “tampering with computer source documents”, where according
to the section, “Whoever knowingly or intentionally conceals, destroys
or alters or intentionally or knowingly causes another to conceal,
destroy, or alter any computer source code used for a computer,
computer program, computer system or computer network, when the
computer source code is required to be kept or maintained by law for the
time being in force, shall be punishable with imprisonment up to three
years, or with fine which may extend up to two lakh rupees, or with
both”.

There are still some gaps in the IT Act of 2008 since there are new and
unknown cyber-offenses for which the law needs to stretch its arms and
tighten its grip.

There are also offences that are not covered by the IT Act because they
are already covered by other laws, such as "cyber-defamation," which is
governed by the Indian Penal Code, 1860. The term "defamation" and
its punishment are defined under that Code, so there is no need for a
separate definition elsewhere, because the impact of such an offence
online is the same as it is offline.

We are living in the twenty-first century in a developing country, where


technology is progressing and humans are being displaced from their
jobs. As a result of the increased use of the internet, the rate of online
crime is growing as well, for which the victim has legal recourse.

However, we’ve seen that the number of cyber-crime case files is


growing faster than the number of solved cases, which is attributable to
people’s negligence and a shortage of cyber professionals who can
handle these situations. As a result, we require better training institutions
as well as a mandatory branch of Cyber-security topics that will aid in
the resolution of cyber-cases and increase trust in security; nonetheless,
we can attempt to safeguard ourselves. No institution or anti-virus can
totally safeguard someone, but by using the measures described above,
we can secure our information, data, devices, and ourselves.

MODULE-V:

Genetic and Medical Technologies.

Rapid scientific developments are not uncommon. Less than 70 years


after the Wright brothers completed the first successful powered
flight, three NASA astronauts flew to the Moon - 385,000 kilometres
away.

However, the speed of recent developments in genetic science has


thrown up a range of economic, environmental and ethical issues that
society is still attempting to process. The fact that your genome can
now be sequenced safely and relatively quickly and cheaply means
that the difficulty of carrying out related genetic procedures - for
example genetic testing or genome editing - is also reduced. It also
opens up the technology to a far wider range of individuals and
organisations.

So how do you feel about this? How do you think using these
technologies might change society? Here we will explore some of the
key genetic technologies and probe some of the dilemmas and debates
around their use.
MODULE-VI:

Broadcasting

Broadcasting is the distribution of audio or video content to a


dispersed audience via any electronic mass communications medium,
but typically one using the electromagnetic spectrum (radio waves), in
a one-to-many model. Broadcasting began with AM radio, which came
into popular use around 1920 with the spread of vacuum tube radio
transmitters and receivers. Before this, all forms of electronic
communication (early radio, telephone, and telegraph) were one-to-one,
with the message intended for a single recipient. The
term broadcasting evolved from its use as the agricultural method of
sowing seeds in a field by casting them broadly about. It was later
adopted for describing the widespread distribution of information by
printed materials or by telegraph. Examples applying it to "one-to-
many" radio transmissions of an individual station to multiple listeners
appeared as early as 1898.

Over the air broadcasting is usually associated with radio and television,
though more recently, both radio and television transmissions have
begun to be distributed by cable (cable television). The receiving parties
may include the general public or a relatively small subset; the point is
that anyone with the appropriate receiving technology and equipment
(e.g., a radio or television set) can receive the signal. The field of
broadcasting includes both government-managed services such as public
radio, community radio and public television, and private commercial
radio and commercial television. The U.S. Code of Federal Regulations,
title 47, part 97 defines "broadcasting" as "transmissions intended for
reception by the general public, either direct or relayed". Private or two-
way telecommunications transmissions do not qualify under this
definition. For example, amateur ("ham") and citizens band (CB) radio
operators are not allowed to broadcast. As defined, "transmitting" and
"broadcasting" are not the same.
Transmission of radio and television programs from a radio or television
station to home receivers by radio waves is referred to as "over the air"
(OTA) or terrestrial broadcasting and in most countries requires
a broadcasting license. Transmissions using a wire or cable, like cable
television (which also retransmits OTA stations with their consent), are
also considered broadcasts but do not necessarily require a license
(though in some countries, a license is required). In the 2000s,
transmissions of television and radio programs via streaming digital
technology have increasingly been referred to as broadcasting as well.

MODULE-VII:

Information Technology Act, 2000.

An Act to provide legal recognition for transactions carried out by


means of electronic data interchange and other means of electronic
communication, commonly referred to as "electronic commerce", which
involve the use of alternatives to paper-based methods of
communication and storage of information, to facilitate electronic filing
of documents with the Government agencies and further to amend the
Indian Penal Code, the Indian Evidence Act, 1872, the Bankers' Books
Evidence Act, 1891 and the Reserve Bank of India Act, 1934 and for
matters connected therewith or incidental thereto. WHEREAS the
General Assembly of the United Nations by resolution A/RES/51/162,
dated the 30th January, 1997 has adopted the Model Law on Electronic
Commerce adopted by the United Nations Commission on International
Trade Law; AND WHEREAS the said resolution recommends inter alia
that all States give favourable consideration to the said Model Law when
they enact or revise their laws, in view of the need for uniformity of the
law applicable to alternatives to paper-based methods of communication
and storage of information; AND WHEREAS it is considered necessary
to give effect to the said resolution and to promote efficient delivery of
Government services by means of reliable electronic records. BE it
enacted by Parliament in the Fifty-first Year of the Republic of India as
follows:—

CHAPTER I
PRELIMINARY 1. Short title, extent, commencement and application
(1) This Act may be called the Information Technology Act, 2000. (2) It
shall extend to the whole of India and, save as otherwise provided in this
Act, it applies also to any offence or contravention thereunder
committed outside India by any person.

(3) It shall come into force on such date as the Central Government may,
by notification, appoint and different dates may be appointed for
different provisions of this Act and any reference in any such provision
to the commencement of this Act shall be construed as a reference to the
commencement of that provision. (4) Nothing in this Act shall apply to,
— (a) a negotiable instrument as defined in section 13 of the Negotiable
Instruments Act, 1881; (b) a power-of-attorney as defined in section 1A
of the Powers-of-Attorney Act, 1882; (c) a trust as defined in section 3
of the Indian Trusts Act, 1882; (d) a will as defined in clause (h) of
section 2 of the Indian Succession Act, 1925 including any other
testamentary disposition by whatever name called; (e) any contract for
the sale or conveyance of immovable property or any interest in such
property; (f) any such class of documents or transactions as may be
notified by the Central Government in the Official Gazette.

DIGITAL SIGNATURE CERTIFICATES 35. Certifying Authority to


issue Digital Signature Certificate. (1) Any person may make an
application to the Certifying Authority for the issue of a Digital
Signature Certificate in such form as may be prescribed by the Central
Government. (2) Every such application shall be accompanied by such
fee not exceeding twenty-five thousand rupees as may be prescribed by
the Central Government, to be paid to the Certifying Authority: Provided
that while prescribing fees under sub-section (2) different fees may be
prescribed for different classes of applicants'. (3) Every such application
shall be accompanied by a certification practice statement or where there
is no such statement, a statement containing such particulars, as may be
specified by regulations. (4) On receipt of an application under sub-
section (1), the Certifying Authority may, after consideration of the
certification practice statement or the other statement under subsection
(3) and after making such enquiries as it may deem fit, grant the Digital
Signature Certificate or for reasons to be recorded in writing, reject the
application: Provided that no Digital Signature Certificate shall be
granted unless the Certifying Authority is satisfied that— (a) the
applicant holds the private key corresponding to the public key to be
listed in the Digital Signature Certificate; (b) the applicant holds a
private key, which is capable of creating a digital signature; (c) the
public key to be listed in the certificate can be used to verify a digital
signature affixed by the private key held by the applicant: Provided
further that no application shall be rejected unless the applicant has been
given a reasonable opportunity of showing cause against the proposed
rejection. 36. Representations upon issuance of Digital Signature
Certificate. A Certifying Authority while issuing a Digital Signature
Certificate shall certify that-- (a) it has complied with the provisions of
this Act and the rules and regulations made thereunder, (b) it has
published the Digital Signature Certificate or otherwise made it available
to such person relying on it and the subscriber has accepted it; (c) the
subscriber holds the private key corresponding to the public key, listed
in the Digital Signature Certificate; (d) the subscriber's public key and
private key constitute a functioning key pair, (e) the information
contained in the Digital Signature Certificate is accurate; and (f) it has
no knowledge of any material fact, which if it had been included in the
Digital Signature Certificate would adversely affect the reliability of the
representations made in clauses (a) to (d). 37. Suspension of Digital
Signature Certificate. (1) Subject to the provisions of sub-section (2), the
Certifying Authority which has issued a Digital Signature Certificate
may suspend such Digital Signature Certificate,— (a) on receipt of a
request to that effect from— (i) the subscriber listed in the Digital
Signature Certificate; or (ii) any person duly authorised to act on behalf
of that subscriber, (b) if it is of opinion that the Digital Signature
Certificate should be suspended in the public interest. (2) A Digital Signature
Certificate shall not be suspended for a period exceeding fifteen days
unless the subscriber has been given an opportunity of being heard in the
matter. (3) On suspension of a Digital Signature Certificate under this
section, the Certifying Authority shall communicate the same to the
subscriber. 38. Revocation of Digital Signature Certificate. (1) A
Certifying Authority may revoke a Digital Signature Certificate issued
by it— (a) where the subscriber or any other person authorised by him
makes a request to that effect; or (b) upon the death of the subscriber, or
(c) upon the dissolution of the firm or winding up of the company where
the subscriber is a firm or a company. (2) Subject to the provisions of
sub-section (3) and without prejudice to the provisions of sub-section
(1), a Certifying Authority may revoke a Digital Signature Certificate
which has been issued by it at any time, if it is of opinion that— (a) a
material fact represented in the Digital Signature Certificate is false or
has been concealed; (b) a requirement for issuance of the Digital
Signature Certificate was not satisfied; (c) the Certifying Authority's
private key or security system was compromised in a manner materially
affecting the Digital Signature Certificate's reliability; (d) the subscriber
has been declared insolvent or dead or where a subscriber is a firm or a
company, which has been dissolved, wound-up or otherwise ceased to
exist. (3) A Digital Signature Certificate shall not be revoked unless the
subscriber has been given an opportunity of being heard in the matter.
(4) On revocation of a Digital Signature Certificate under this section,
the Certifying Authority shall communicate the same to the subscriber.
39. Notice of suspension or revocation. (1) Where a Digital Signature
Certificate is suspended or revoked under section 37 or section 38, the
Certifying Authority shall publish a notice of such suspension or
revocation, as the case may be, in the repository specified in the Digital
Signature Certificate for publication of such notice. (2) Where one or
more repositories are specified, the Certifying Authority shall publish
notice of such suspension or revocation, as the case may be, in all such
repositories.
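
The key-pair conditions in section 35(4), namely that the applicant holds the private key corresponding to the public key listed in the certificate and that the public key can verify a digital signature affixed with that private key, can be illustrated in code. A minimal sketch using the third-party Python 'cryptography' package (assumed installed); it shows only the underlying signing and verification mechanics, not the statutory certification procedure itself.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# The subscriber holds the private key; only the public key is listed in the certificate.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"electronic record to be signed"
signature = private_key.sign(message, padding.PKCS1v15(), hashes.SHA256())

# Anyone relying on the certificate uses the listed public key to verify the signature;
# verify() raises InvalidSignature if the keys do not form a functioning pair.
public_key.verify(signature, message, padding.PKCS1v15(), hashes.SHA256())
print("signature verified with the public key")
```
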
g) Cyber Appellate Tribunal: Offences and Punishment under Act

Establishment of Cyber Appellate Tribunal.

(1) The Central Government shall, by notification, establish one or more


appellate tribunals to be known as the Cyber Regulations Appellate
Tribunal.

(2) The Central Government shall also specify, in the notification
referred to in sub-section (1), the matters and places in relation to which
the Cyber Appellate Tribunal may exercise jurisdiction.

49. Composition of Cyber Appellate Tribunal. A Cyber Appellate


Tribunal shall consist of one person only (hereinafter referred to as the
Presiding Officer of the Cyber Appellate Tribunal) to be appointed, by
notification, by the Central Government

50. Qualifications for appointment as Presiding Officer of the Cyber


Appellate Tribunal. A person shall not be qualified for appointment as
the Presiding Officer of a Cyber Appellate Tribunal unless he—

(a) is, or has been, or is qualified to be, a Judge of a High Court; or

(b) is or has been a member of the Indian Legal Service and is holding
or has held a post in Grade I of that Service for at least three years.

51. Term of office The Presiding Officer of a Cyber Appellate Tribunal


shall hold office for a term of five years from the date on which he
enters upon his office or until he attains the age of sixty-five years,
whichever is earlier.

52. Salary, allowances and other terms and conditions of service of


Presiding Officer. The salary and allowances payable to, and the other
terms and conditions of service including pension, gratuity and other
retirement benefits of, the Presiding Officer of a Cyber Appellate
Tribunal shall be such as may be prescribed: Provided that neither the
salary and allowances nor the other terms and conditions of service of
the Presiding Officer shall be varied to his disadvantage after
appointment.

53. Filling up of vacancies. If, for reason other than temporary absence,
any vacancy occurs in the office of the Presiding Officer of a Cyber
Appellate Tribunal, then the Central Government shall appoint another
person in accordance with the provisions of this Act to fill the vacancy
and the proceedings may be continued before the Cyber Appellate
Tribunal from the stage at which the vacancy is filled.

54. Resignation and removal.

(1) The Presiding Officer of a Cyber Appellate Tribunal may, by notice


in writing under his hand addressed to the Central Government, resign
his office: Provided that the said Presiding Officer shall, unless he is
permitted by the Central Government to relinquish his office sooner,
continue to hold office until the expiry of three months from the date of
receipt of such notice or until a person duly appointed as his successor
enters upon his office or until the expiry of his term of office, whichever
is the earliest.

(2) The Presiding Officer of a Cyber Appellate Tribunal shall not be


removed from his office except by an order by the Central Government
on the ground of proved misbehaviour or incapacity after an inquiry
made by a Judge of the Supreme Court in which the Presiding Officer
concerned has been informed of the charges against him and given a
reasonable opportunity of being heard in respect of these charges.

(3) The Central Government may, by rules, regulate the procedure for
the investigation of misbehaviour or incapacity of the aforesaid
Presiding Officer.

55. Orders constituting Appellate Tribunal to be final and not to


invalidate its proceedings. No order of the Central Government
appointing any person as the Presiding Officer of a Cyber Appellate
Tribunal shall be called in question in any manner and no act or
proceeding before a Cyber Appellate Tribunal shall be called in question
in any manner on the ground merely of any defect in the constitution of
a Cyber Appellate Tribunal.

56. Staff of the Cyber Appellate Tribunal.

(1) The Central Government shall provide the Cyber Appellate Tribunal
with such officers and employees as that Government may think fit.

(2) The officers and employees of the Cyber Appellate Tribunal shall
discharge their functions under general superintendence of the Presiding
Officer.

(3) The salaries, allowances and other conditions of service of the


officers and employees of the Cyber Appellate Tribunal shall be such as
may be prescribed by the Central Government.

57. Appeal to Cyber Appellate Tribunal.

(1) Save as provided in sub-section (2), any person aggrieved by an
order made by the Controller or an adjudicating officer under this Act
may prefer an appeal to a Cyber Appellate Tribunal having jurisdiction
in the matter.

(2) No appeal shall lie to the Cyber Appellate Tribunal from an order
made by an adjudicating officer with the consent of the parties.

(3) Every appeal under sub-section (1) shall be filed within a period of
forty-five days from the date on which a copy of the order made by the
Controller or the adjudicating officer is received by the person aggrieved
and it shall be in such form and be accompanied by such fee as may be
prescribed: Provided that the Cyber Appellate Tribunal may entertain an
appeal after the expiry of the said period of forty-five days if it is
satisfied that there was sufficient cause for not filing it within that
period.

(4) On receipt of an appeal under sub-section (1), the Cyber Appellate


Tribunal may, after giving the parties to the appeal, an opportunity of
being heard, pass such orders thereon as it thinks fit, confirming,
modifying or setting aside the order appealed against.

(5) The Cyber Appellate Tribunal shall send a copy of every order made
by it to the parties to the appeal and to the concerned Controller or
adjudicating officer.

(6) The appeal filed before the Cyber Appellate Tribunal under sub-
section (1) shall be dealt with by it as expeditiously as possible and
endeavour shall be made by it to dispose of the appeal finally within six
months from the date of receipt of the appeal.

58. Procedure and powers of the Cyber Appellate Tribunal.

(1) The Cyber Appellate Tribunal shall not be bound by the procedure
laid down by the Code of Civil Procedure, 1908 but shall be guided by
the principles of natural justice and, subject to the other provisions of
this Act and of any rules, the Cyber Appellate Tribunal shall have
powers to regulate its own procedure including the place at which it
shall have its sittings.

(2) The Cyber Appellate Tribunal shall have, for the purposes of
discharging its functions under this Act, the same powers as are vested
in a civil court under the Code of Civil Procedure, 1908, while trying a
suit, in respect of the following matters, namely:—

(a) summoning and enforcing the attendance of any person and


examining him on oath;
(b) requiring the discovery and production of documents or other
electronic records;
(c) receiving evidence on affidavits;

(d) issuing commissions for the examination of witnesses or documents;

(e) reviewing its decisions;

(f) dismissing an application for default or deciding it ex parte;

(g) any other matter which may be prescribed.

(3) Every proceeding before the Cyber Appellate Tribunal shall be


deemed to be a judicial proceeding within the meaning of sections 193
and 228, and for the purposes of section 196 of the Indian Penal Code
and the Cyber Appellate Tribunal shall be deemed to be a civil court for
the purposes of section 195 and Chapter XXVI of the Code of Criminal
Procedure, 1973.

59. Right to legal representation. The appellant may either appear in


person or authorise one or more legal practitioners or any of its officers
to present his or its case before the Cyber Appellate Tribunal.
60. Limitation. The provisions of the Limitation Act, 1963, shall, as far
as may be, apply to an appeal made to the Cyber Appellate Tribunal.

61. Civil court not to have jurisdiction. No court shall have jurisdiction
to entertain any suit or proceeding in respect of any matter which an
adjudicating officer appointed under this Act or the Cyber Appellate
Tribunal constituted under this Act is empowered by or under this Act to
determine and no injunction shall be granted by any court or other
authority in respect of any action taken or to be taken in pursuance of
any power conferred by or under this Act.

62. Appeal to High Court. Any person aggrieved by any decision or


order of the Cyber Appellate Tribunal may file an appeal to the High
Court within sixty days from the date of communication of the decision
or order of the Cyber Appellate Tribunal to him on any question of fact
or law arising out of such order: Provided that the High Court may, if it is
satisfied that the appellant was prevented by sufficient cause from filing
the appeal within the said period, allow it to be filed within a further
period not exceeding sixty days.

63. Compounding of contraventions.

(1) Any contravention under this Chapter may, either before or after the
institution of adjudication proceedings, be compounded by the
Controller or such other officer as may be specially authorised by him in
this behalf or by the adjudicating officer, as the case may be, subject to
such conditions as the Controller or such other officer or the
adjudicating officer may specify: Provided that such sum shall not, in
any case, exceed the maximum amount of the penalty which may be
imposed under this Act for the contravention so compounded.

(2) Nothing in sub-section (1) shall apply to a person who commits the
same or similar contravention within a period of three years from the
date on which the first contravention, committed by him, was
compounded. Explanation.—For the purposes of this sub-section, any
second or subsequent contravention committed after the expiry of a
period of three years from the date on which the contravention was
previously compounded shall be deemed to be a first contravention.

(3) Where any contravention has been compounded under sub-section


(1), no proceeding or further proceeding, as the case may be, shall be
taken against the person guilty of such contravention in respect of the
contravention so compounded.

64. Recovery of penalty A penalty imposed under this Act, if it is not


paid, shall be recovered as an arrear of land revenue and the licence or
the Digital Signature Certificate, as the case may be, shall be suspended
till the penalty is paid.
MODULE-VIII:

Liabilities

A major amendment was made in 2008. It introduced Section 66A


which penalized sending "offensive messages". It also introduced
Section 69, which gave authorities the power of "interception or
monitoring or decryption of any information through any computer
resource". Additionally, it introduced provisions addressing
pornography, child pornography, cyber terrorism and voyeurism. The
amendment was passed on 22 December 2008 without any debate in Lok
Sabha. The next day it was passed by the Rajya Sabha. It was signed
into law by President Pratibha Patil, on 5 February 2009.

Offences

List of offences and the corresponding penalties:

Section - Offence - Penalty
Section 65 - Tampering with computer source documents - Imprisonment up to three years, or/and with fine up to ₹200,000
Section 66 - Hacking with computer system - Imprisonment up to three years, or/and with fine up to ₹500,000
Section 66B - Receiving stolen computer or communication device - Imprisonment up to three years, or/and with fine up to ₹100,000
Section 66C - Using password of another person - Imprisonment up to three years, or/and with fine up to ₹100,000
Section 66D - Cheating using computer resource - Imprisonment up to three years, or/and with fine up to ₹100,000
Section 66E - Publishing private images of others - Imprisonment up to three years, or/and with fine up to ₹200,000
Section 66F - Acts of cyberterrorism - Imprisonment up to life
Section 67 - Publishing information which is obscene in electronic form - Imprisonment up to five years, or/and with fine up to ₹1,000,000
Section 67A - Publishing images containing sexual acts - Imprisonment up to seven years, or/and with fine up to ₹1,000,000
Section 67C - Failure to maintain records - Imprisonment up to three years, or/and with fine
Section 68 - Failure/refusal to comply with orders - Imprisonment up to two years, or/and with fine up to ₹100,000
Section 69 - Failure/refusal to decrypt data - Imprisonment up to seven years and possible fine
Section 70 - Securing access or attempting to secure access to a protected system - Imprisonment up to ten years, or/and with fine
Section 71 - Misrepresentation - Imprisonment up to two years, or/and with fine up to ₹100,000
Section 72 - Breach of confidentiality and privacy - Imprisonment up to two years, or/and with fine up to ₹100,000
Section 72A - Disclosure of information in breach of lawful contract - Imprisonment up to three years, or/and with fine up to ₹500,000
Section 73 - Publishing electronic signature certificate false in certain particulars - Imprisonment up to two years, or/and with fine up to ₹100,000
Section 74 - Publication for fraudulent purpose - Imprisonment up to two years, or/and with fine up to ₹100,000

Notable cases

Section 66

 In February 2001, in one of the first cases, the Delhi police arrested
two men running a web-hosting company. The company had shut
down a website over non-payment of dues. The owner of the site
had claimed that he had already paid and complained to the police.
The Delhi police had charged the men for hacking under Section
66 of the IT Act and breach of trust under Section 408 of
the Indian Penal Code. The two men had to spend 6 days in Tihar
jail waiting for bail.
 In February 2017, a Delhi-based e-commerce portal filed a
complaint with the Hauz Khas police station against hackers from
different cities, accusing them of offences under the IT Act
including theft, cheating, misappropriation, criminal conspiracy,
criminal breach of trust, hacking, snooping and tampering with
computer source documents and the website, as well as of
threatening employees with dire consequences; as a result, four
hackers were arrested by the South Delhi Police for digital
shoplifting.[9]

Section 66A
 In September 2012, freelance cartoonist Aseem Trivedi was
arrested under Section 66A of the IT Act, Section 2 of the
Prevention of Insults to National Honour Act, 1971, and
for sedition under Section 124A of the Indian Penal Code.[10] His
cartoons depicting widespread corruption in India were considered
offensive.
 On 12 April 2012, a Chemistry professor from Jadavpur
University, Ambikesh Mahapatra, was arrested for sharing a
cartoon of West Bengal Chief Minister Mamata Banerjee and
then Railway Minister Mukul Roy.[13] The email was sent from the
email address of a housing society. Subrata Sengupta, the secretary
of the housing society, was also arrested. They were charged under
Sections 66A and 66B of the IT Act, for defamation under Section
500, for insulting the modesty of a woman under Section 509, and
abetting a crime under Section 114 of the Indian Penal Code.[14]
 On 30 October 2012, a Puducherry businessman Ravi Srinivasan
was arrested under Section 66A. He had sent a tweet accusing Karti
Chidambaram, son of then Finance Minister P. Chidambaram, of
corruption. Karti Chidambaram had complained to the police.[15]
 On 19 November 2012, a 21-year-old girl was arrested
from Palghar for posting a message on Facebook criticising the
shutdown in Mumbai for the funeral of Bal Thackeray. Another
20-year-old girl was arrested for "liking" the post. They were
initially charged under Section 295A of the Indian Penal Code
(hurting religious sentiments) and Section 66A of the IT Act.
Later, Section 295A was replaced by Section 505(2) (promoting
enmity between classes).[16] A group of Shiv Sena workers
vandalised a hospital run by the uncle of one of the girls.[17] On 31
January 2013, a local court dropped all charges against the girls.[18]
 On 18 March 2015, a teenaged boy was arrested
from Bareilly, Uttar Pradesh, for making a post on Facebook
insulting politician Azam Khan. The post allegedly contained hate
speech against a community and was falsely attributed to Azam
Khan by the boy. He was charged under Section 66A of the IT Act,
and Sections 153A (promoting enmity between different religions),
504 (intentional insult with intent to provoke breach of peace) and
505 (public mischief) of the Indian Penal Code. After Section 66A
was struck down by the Supreme Court on 24 March 2015, the state government said that they
would continue the prosecution under the remaining charges.[19][20]
Section 69A

 On 29 June 2020, the Indian Government banned


59 Chinese mobile apps, most notably TikTok, supported by
Section 69A and citing national security interests.[21][22]
 On 24 November 2020, another 43 Chinese mobile apps were
banned, supported by the same reasoning, most notably
AliExpress.[23][24]
 54 more apps including popular video game Garena Free Fire were
banned on 14 February 2022 under the same section.[25]

Criticisms

Section 66A and restriction of free speech


From its establishment as an amendment to the original act in 2008,
Section 66A attracted controversy over its unconstitutional nature:

Section 66A - Publishing offensive, false or threatening information.
Description: Any person who sends, by means of a computer resource, any information that is grossly offensive or has a menacing character, or any information which he knows to be false but sends for the purpose of causing annoyance, inconvenience, danger, obstruction or insult, shall be punishable with imprisonment for a term which may extend to three years and with fine.
Penalty: Imprisonment up to three years, with fine.

In December 2012, P Rajeev, a Rajya Sabha member from Kerala, tried


to pass a resolution seeking to amend the Section 66A. He was
supported by D. Bandyopadhyay, Gyan Prakash Pilania, Basavaraj Patil
Sedam, Narendra Kumar Kashyap, Rama Chandra Khuntia
and Baishnab Charan Parida. P Rajeev pointed out that cartoons and
editorials allowed in traditional media were being censored in the new
media. He also said that the law was barely debated before being passed in
December 2008.[26]
Rajeev Chandrasekhar suggested that Section 66A should apply only to
person-to-person communication, pointing to a similar section under the
Indian Post Office Act, 1898. Shantaram Naik opposed any changes,
saying that misuse of the law was not sufficient reason to amend it. The
then Minister for Communications and Information Technology, Kapil
Sibal, defended the existing law, saying that similar laws existed in the
US and the UK. He also said that a similar provision existed under the
Indian Post Office Act, 1898. However, P Rajeev said that the UK law
dealt only with communication from person to person.[26]

Petitions challenging constitutionality

In November 2012, IPS officer Amitabh Thakur and his wife, social
activist Nutan Thakur, filed a petition in the Lucknow bench of
the Allahabad High Court claiming that Section 66A violated the
freedom of speech guaranteed by Article 19(1)(a) of the Constitution
of India. They said that the section was vague and frequently misused.[27]

Also in November 2012, a Delhi-based law student, Shreya Singhal,


filed a Public Interest Litigation (PIL) in the Supreme Court of India.
She argued that Section 66A was vaguely phrased and, as a result,
violated Articles 14, 19(1)(a) and 21 of the Constitution. The PIL
was accepted on 29 November 2012.[28][29]

In August 2014, the Supreme Court asked the central government to


respond to petitions filed by the Internet and Mobile Association of
India (IAMAI) which claimed that the IT Act gave the government
power to arbitrarily remove user-generated content.
