Global Business Law


Q.

How do AI, big data, and two-sided platform markets affect consumer welfare and business, and how should law and regulation respond?

Artificial intelligence has no single agreed definition, but it can be described as the activity devoted to making machines intelligent, where intelligence is the quality that enables an entity to function appropriately and with foresight in its environment.

When we think about AI, we mostly associate it with cyborgs, robots, and sci-fi thrillers in which machines control the world. The reality, however, is that AI is already here. It is in our cell phones, in health trackers that monitor our heartbeats, and in appliances that tell us when our stored items are about to expire. It can drive automobiles, trade stocks and bonds, speak various languages, detect human features more accurately than we ever could, and even generate hypotheses to aid in the discovery of new medicines to cure diseases. AI is closely related to consumer welfare. Consumer welfare refers to maximising consumer utility within the limits of the consumer's income, the supply of goods, accessibility, and so on. AI improves consumer welfare by reducing the impact of these limitations, enhancing efficiency and cutting the time spent on particular tasks (Rich, 2018).

There are numerous advantages. Consumers' experiences of the benefits derived from AI can be classified into three broad categories based on how they interact with AI-enabled services. Furthermore, governments and regulatory agencies are increasingly relying on this technology.

Voice recognition, face detection, and speech-to-text and text-to-speech conversion are examples of technologies and apps that can make people's day-to-day tasks easier and more efficient. They are already used in assistants such as Siri and Google Assistant. Beyond the benefits that consumers presently enjoy, these technologies are expected not just to act as facilitators but also to take over key roles such as those of financial advisers and tax accountants (Contissa, 2014).

In commerce, artificial intelligence (AI) helps to improve overall efficiency by matching supply and demand. Consumers can save money by using browsers, price-tracking, and price-comparison tools that cut down on search and transaction costs; they can evaluate complex offers quickly and pick the best deal. Consumers stand to benefit greatly from AI-powered applications, which help them discover new products tailored to their needs while saving time and effort.

According to a PwC analysis of the realities of AI in the United States, consumers feel AI systems would be far more effective for customer service. In the same way that AI helps consumers with individual tasks, it can also support consumer protection in general by boosting the work of consumer organisations, researchers, and governmental authorities. Improvements in accuracy could make market research and forecasting easier. AI systems could be used by law enforcement agencies, for example, to automatically analyse consumer contracts and alert consumers to potentially unfair conditions before they make an online purchase (Ladika, 2016).

AI isn't just for tech companies and commercial services; governments all around the world are developing policies to capitalise on it for better public service delivery. Access to education, healthcare, and agriculture are just a few of the areas where such projects have already begun. In Karnataka, for example, the Indian government and Microsoft are collaborating on a project under which AI software is being developed to advise farmers on the ideal date to sow crops in a given season. Several other countries, such as Colombia and Brazil, are working on similar ideas.

However, with the advantages come challenges and hazards. When we talk about AI and consumers, the first things that come to mind are, of course, issues like privacy and data protection. AI needs massive amounts of training data as inputs. To build big data sets, consumer data is continuously gathered by tracking online and offline consumer activity, stored and combined with other data sources, and processed through profiling to elicit additional information about customers. However, because of the novelty of issues such as discrimination, bias, and manipulation that may arise from the use of AI, the general public is largely unaware of the data industry's rapid growth and of the extent to which their personal data has become a commodity that is traded between private and public agencies. Even more concerning, the average person has little idea of how much personal information is collected by third parties or how private firms and the government have begun to use this information. Individual online behaviour has led to the creation of massive databases, which are continually being updated. Individual transactions, emails, videos, photos, clickstreams, logs, web searches, medical histories, and social media interactions are frequently included in these collections of data. Secondly, personal information about consumers can be obtained and compiled from a variety of offline sources, including public documents (such as criminal records, deeds, and business filings), retailer sales data, credit agencies, clinics and hospitals, and so on. According to the Dutch Data Protection Authority, Facebook allows advertisers to target people on the basis of sensitive attributes, such as sexual preferences and other characteristics, derived from the streams of data collected about them. Individual data is thus gathered in a variety of ways.

Finally, massive volumes of data are collected from an ever-increasing number of smart devices such as cell phones, security and traffic surveillance cameras, global positioning satellites, and other AI-driven gadgets such as Alexa and Google Assistant that can record and transfer data. Beyond the illegal or unauthorised collection of personal information from customers, another prevalent issue is the increased danger of this information being improperly accessed or passed to other parties. There are two scenarios in which data leaks can be harmful. The first is when a data-holding business wilfully shares personal data in a way that does not adequately protect individuals' privacy. The second is when the data-holding body fails to implement adequate controls, allowing a third party to gain access to the information it holds.

Identity theft is perhaps the most obvious harm that security breaches and hacks can do. Identity theft affects 17.6 million (7%) of all US residents aged 16 and over, according to the US Bureau of Justice Statistics. Identity theft has consistently been one of the most common consumer complaints in the United States, ranking first in 2014, second in 2015, and third in 2016. It accounted for 13% of consumer complaints in 2016, trailing debt collection (28%) and imposter scams (13%), all of which can be fuelled by stolen personal information.

Mobile applications have gained access to some of a customer's most personal information, putting their privacy at risk. Amazon's Alexa has been found to record private conversations and send them to unintended recipients; the same privacy concerns apply to Google Assistant. Furthermore, new types of hazards can emerge from unethical data use, such as the creation of deepfakes and unsolicited social network profiles. A deepfake video of the president, generated with AI technology, was recently shared in Nairobi, prompting a security advisory to stop the spread of dubious content.

In such circumstances, AI becomes a double-edged sword, since it raises the possibility of both
unethical data gathering and usage. Because consumers' lives are no longer private, various hazards
such as stalking, blackmailing, and bullying, both online and offline, arise.

While our smart devices are intended to make our lives easier and healthier, critics have pointed out that they are also capable of achieving a number of micro and macro objectives that benefit their manufacturers rather than the users, even though the users are the ones who own the devices in question. For example, abusers may use the "Find My iPhone" app to track a partner's location, or they may purchase a smartphone for a girlfriend or spouse and then restrict how and when it is used.

Furthermore, as more and more products (e.g. cars) rely on software, Intellectual Property Rights (IPRs) are becoming increasingly important. Today, important components of what makes a product work are licensed to consumers and are therefore protected by various conditions. These licences may impose time limits on product support, disable particular features without notice, and so on. Copyright rules and their application via Digital Rights Management (DRM) are going to become an important factor in consumers' daily lives in this environment, and they are already weakening consumers' traditional sense of "product ownership".

Companies can now precisely estimate a person's willingness to pay for a product using big data and AI. There have been claims that doing so will boost efficiency. But what if there aren't just economic motivations at work? What if a firm, such as an airline, wants to discriminate against members of a certain ethnic or religious group by raising prices selectively, because it believes that doing so will increase the value other customers place on its service and hence its profits?

Princeton Review, a firm based in the United States, charged different prices for its virtual tutoring sessions based on ethnicity and race: for similar services, Asians were charged more. This occurred because the training data fed into the system came from places with a higher concentration of Asian students who were willing to pay extra. As a result, there was price discrimination in the services supplied on the platform, despite the fact that it was not intended by the corporation (Anik, 2016).

Thanks to AI, big data, and tailored commercial practices, companies can know each customer's traits and willingness to pay, and present a price pitched at the maximum that customer will bear. Even in the best-case scenario, where businesses do not intend to discourage some of us from purchasing their goods or using their services by selectively displaying prohibitively high prices, consumers may still be made to pay the highest price they are willing to pay, because that is what dynamic pricing does at its core. This is already taking place online, and with developments in facial detection and emotion identification technologies, it may soon take place offline as well.
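The mechanism behind such personalised pricing can be illustrated with a small, entirely hypothetical sketch: a firm fits a simple model of willingness to pay from customer features it has observed, then quotes each customer a price close to that estimate. The data, coefficients, and function names below are invented for illustration and do not represent any company's actual system.

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic "customer profiles": income proxy, past spend, time spent browsing.
    n = 500
    X = np.column_stack([
        rng.normal(50, 15, n),   # income proxy (thousands)
        rng.normal(200, 60, n),  # past spend on the platform
        rng.normal(10, 3, n),    # minutes browsing the product page
    ])
    # Hypothetical "true" willingness to pay observed from past purchases.
    wtp = 20 + 0.4 * X[:, 0] + 0.1 * X[:, 1] + 1.5 * X[:, 2] + rng.normal(0, 5, n)

    # Fit a linear model of willingness to pay via least squares.
    A = np.column_stack([np.ones(n), X])
    coef, *_ = np.linalg.lstsq(A, wtp, rcond=None)

    def personalised_price(profile: np.ndarray, list_price: float, floor: float) -> float:
        """Quote a price near the predicted willingness to pay, bounded by a cost floor and the list price."""
        predicted_wtp = coef[0] + profile @ coef[1:]
        return float(np.clip(0.95 * predicted_wtp, floor, list_price))

    # Two otherwise identical customers with different income proxies get different quotes.
    print(personalised_price(np.array([40.0, 200.0, 10.0]), list_price=80.0, floor=30.0))
    print(personalised_price(np.array([70.0, 200.0, 10.0]), list_price=80.0, floor=30.0))

Run on the two example profiles, the sketch quotes a noticeably higher price to the customer with the higher income proxy, which illustrates why two consumers can see different prices for the same product.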

There is also an increased risk of prejudice in the application of AI technology in commerce and other social and economic activities. Machines are meant to make data-driven, objective judgments, as opposed to human decisions and actions, which are prone to bias and emotion. However, the quality of AI systems is determined by the data we feed them, and bad data can carry implicit racial, gender, and ideological biases. Many AI systems will continue to be trained on flawed data, making this a persistent issue. The algorithmic model itself is, of course, the other source of bias in an AI system. Race and gender discrimination has been documented in the delivery of advertisements: a study led by Anupam Datta of Carnegie Mellon University found that Google's online advertising showed high-paying positions to men far more frequently than to women, resulting in a gender bias. Governments should make sure that job advertisements are not shown in a discriminatory way to particular genders, so that there is equal opportunity for everyone (Gibbs, 2015).
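A toy illustration of how this happens: if the historical ad-delivery logs are skewed, a policy that simply learns from exposure in those logs reproduces the skew. The log below is synthetic and the numbers are invented; the point is only that the learned allocation mirrors the historical imbalance regardless of the audiences' actual interest.

    from collections import Counter

    # Hypothetical historical ad-delivery log: (gender, ad_type, clicked).
    # The skew is baked in: high-paying job ads were shown to men far more often.
    log = (
        [("male", "high_paying_job", True)] * 180
        + [("male", "high_paying_job", False)] * 620
        + [("female", "high_paying_job", True)] * 25
        + [("female", "high_paying_job", False)] * 75
    )

    def learned_delivery_share(log, ad_type):
        """Share of future impressions an exposure-weighted policy gives each gender."""
        exposures = Counter(g for g, a, _ in log if a == ad_type)
        total = sum(exposures.values())
        return {g: round(n / total, 2) for g, n in exposures.items()}

    # A naive policy that allocates impressions in proportion to past exposure
    # gives women only a small share, regardless of their actual interest.
    print(learned_delivery_share(log, "high_paying_job"))
    # -> {'male': 0.89, 'female': 0.11}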

Based on the data fed into them, AI advertising systems may show advertisements only to consumers who have already used a service or chosen comparable options, creating a bias against consumers who are new to such a service or product.

Discrimination could also emerge in AI-assisted decision-making. Individual creditworthiness has typically been determined in the financial services industry by a transparent process based on well-defined standards and a small amount of data. However, when AI is used to process huge volumes of data, this transparency may not always be possible. As artificial intelligence (AI) is integrated into financial
operations, financial institutions risk their algorithms making biased decisions or acting in ways that
discriminate against protected groups of people, resulting in financial institutions being held liable,
even if the alleged discrimination was unintentional. These worries are amplified in countries with a
diverse population of people of many religions, ethnicities, and socioeconomic backgrounds. Because
the AI training data may be tainted by a history of discrimination inside societal systems, technology-
driven services may be used to perpetuate inequality. The National Fair Housing Alliance of the United States and three other organisations filed a lawsuit against Facebook on March 27, 2018, alleging that Facebook's advertising platform allows landlords and real estate brokers to discriminate against certain groups of people by preventing them from receiving relevant housing ads. Facebook customises the audience for its millions of advertisers based on its massive database of individualised data on its nearly 2 billion users. Despite repeated warnings about its discriminatory advertising practices, Facebook continued to exploit this information to deny people access to rental housing and real estate sales ads based on their gender and family status. According to the lawsuit, Facebook built pre-populated lists that allowed its housing marketers to "exclude" (as Facebook puts it) home seekers from accessing or receiving rental or sales ads based on protected criteria such as family status and sex. The plaintiffs undertook investigations of Facebook's discriminatory tactics in each of their respective housing markets, and the findings substantiated the allegations. In November 2018, a settlement was reached between the civil rights groups and Facebook. Under this settlement, Facebook made significant modifications and took steps to eliminate discrimination in housing, employment, and credit advertising on its platforms, including Facebook, Instagram, and Messenger. These modifications, which were conditions of the settlement, show that considerable progress has been made. (Rosenfeld, 2018)

Businesses today understand customer tastes and behaviour better than consumers themselves, thanks to systems powered by big data and AI technologies. The capacity to present personalised commercial activities such as adverts, offers, and suggestions ("you might be interested in this product", "people like you also purchased this"), often known as recommendation systems, may be beneficial to consumers in some cases. But there is a possibility of unwanted influence. Big data can be used by AI systems to predict consumer behaviour and try to elicit desirable responses. As a result, customers can be tricked, manipulated, and coerced into making substandard purchases or other unintended decisions. Consumers are realising that targeted advertising causes them to over-shop, nudges them to buy products they don't need, and induces compulsive online shopping behaviours.
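As a rough sketch of how a "people like you also purchased this" suggestion can be produced, the snippet below computes recommendations from simple item co-occurrence counts in purchase baskets. The baskets and function names are hypothetical; real recommendation systems use far richer behavioural data, which is exactly what gives them the power to nudge.

    from collections import defaultdict

    # Hypothetical purchase baskets, one per customer.
    baskets = [
        {"running shoes", "water bottle", "fitness tracker"},
        {"running shoes", "fitness tracker"},
        {"running shoes", "water bottle"},
        {"laptop", "laptop bag"},
    ]

    # Count how often each pair of items is bought together.
    co_counts = defaultdict(lambda: defaultdict(int))
    for basket in baskets:
        for item in basket:
            for other in basket:
                if other != item:
                    co_counts[item][other] += 1

    def recommend(item: str, k: int = 2) -> list[str]:
        """Return the k items most often bought together with `item`."""
        neighbours = co_counts.get(item, {})
        return [other for other, _ in sorted(neighbours.items(), key=lambda kv: -kv[1])[:k]]

    print(recommend("running shoes"))  # e.g. ['water bottle', 'fitness tracker']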

Artificial intelligence and big data help businesses advertise their products or services to the right customers and reach the target markets where demand for the product will be highest. For example, Telenor analysed its system data and found that people in rural areas lacked access to phone calls and needed it; it therefore built the first towers in rural areas and became the first telecommunications company to target rural customers. Two-sided platform markets also help businesses reach additional customers; for example, people can buy and sell products on eBay.

Since 2005, the Privacy Rights Clearinghouse reports, 7,859 data breaches have been made public, potentially exposing billions of records containing personally identifiable information. Between 2016 and 2018, India had the second-highest number of cyber-attacks, according to research by the Data Security Council of India. Similar worries have been expressed in Africa, where hacks are wreaking havoc on e-commerce businesses and putting people's privacy at risk.

Users should be informed about how their data is utilised, whether AI is being used to make
judgements about them, and whether their data is being used to create AI. They should also be offered
the option of consenting to such data collection.
References
Anik, J. (2016). Artificial Intelligence in Delivery of Public Service. UNESCAP.

Anyoha, R. (2017). The Impact of Big Data and Artificial Intelligence on the Insurance Sector. 10.

Contissa, G. (2014). Towards Consumer-Empowering Artificial Intelligence. Harvard Business Review.

Gibbs, S. (2015, July 8). Women less likely to be shown ads for high-paid jobs on Google. Retrieved from The Guardian: https://www.theguardian.com/technology/2015/jul/08/women-less-likely-ads-high-paid-jobs-google-study

Houk, D. L. (2019, March 19). Summary of Settlements Between Civil Rights Advocates and Facebook. Retrieved from ACLU: https://www.aclu.org/other/summary-settlements-between-civil-rights-advocates-and-facebook

Ladika, S. (2016). Data Breaches Pose a Greater Risk. Fox Business.

Lopez-Neira, I. (2017, June 24). Safe – The Domestic Abuse Quarterly, p. 18.

National Fair Housing Alliance et al. v. Facebook, PA-NY-0004, No. 1:18-cv-02689 (S.D.N.Y. March 27, 2018).

Rich, B. (2018). How AI Is Changing Contracts. Harvard Business Review, 18.

Rosenfeld, K. (2018, March 27). National Fair Housing Alliance ("NFHA"). Retrieved from nationalfairhousing.org: https://nationalfairhousing.org/2018/03/27/facebook-sued-by-civil-rights-groups-for-discrimination-in-online-housing-advertisements/

Soni, Y. (2019, May 30). India Second Most Affected Country Due To Cyber Attacks: Report. Retrieved from Inc42: https://inc42.com/buzz/cyber-attacks-india/

