Risks of artificial intelligence
Abdelmalek El Kadoussi
07/12/2021
Job Automation
AI will put pressure on the labour market in the years ahead, and job automation is often
regarded as the most pressing issue. It is no longer a question of whether AI will replace
certain occupations, but rather to what extent. Disruption is well underway in many
industries, notably but not exclusively those in which people perform predictable and
repetitive jobs. Positions built on repetitive tasks are the most susceptible, but as machine
learning algorithms improve, jobs requiring degrees may become more vulnerable as well.
Experts concur that the biggest immediate risk of AI applications is job automation.
According to a 2019 Brookings Institution report, automation threatens around 25% of
American jobs, with low-wage employees most at risk, particularly those in food service and
office administration.
Data breaches
A data breach occurs when information is stolen or removed from a system without the
owner's knowledge or authority. A breach can affect firms of any size, and the stolen data
may contain sensitive, proprietary, or confidential information such as credit card numbers.
The consequences of a data breach can include damage to the target company's reputation
because of a perceived "betrayal of trust," and victims and their clients may also incur
financial losses.
The International Data Corporation projects that the global datasphere will grow from 33
zettabytes (33 trillion gigabytes) in 2018 to 175 zettabytes (175 trillion gigabytes) by 2025.
As this data universe expands at an exponential rate, so do the risks of exposing customer or
corporate data.
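The IDC projection quoted above implies a compound annual growth rate of roughly 27%, which can be checked in a few lines (the figures come from the projection; the formula is the standard CAGR calculation):

```python
# Implied compound annual growth rate (CAGR) of the global datasphere,
# from 33 ZB in 2018 to a projected 175 ZB in 2025.
start_zb, end_zb, years = 33, 175, 2025 - 2018

# CAGR = (end / start) ** (1 / years) - 1
cagr = (end_zb / start_zb) ** (1 / years) - 1
print(f"{cagr:.1%}")  # roughly 27% per year
```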
2021 data breach facts: an estimated 85% of data breaches in 2020 involved a human
element, phishing was the top threat action resulting in a breach, the number of breaches
involving ransomware doubled, and more than 60% of breaches involved credentials. Over 80%
Physical safety
If left unchecked, AI's imperfections can cause physical harm. According to June 2020
research by the Insurance Institute for Highway Safety (IIHS), driverless cars would still fail
to avoid almost two-thirds of collisions, especially those caused by behaviours based on
driver choice. In a statement about the study, IIHS research scientist Alexandra Mueller said,
“It will be crucial for designers to prioritize safety over rider preferences if autonomous
vehicles are to live up to their promise to be safer than human drivers.”
Autonomous weapons
Autonomous weapons have the potential to unleash widespread devastation if they fall into
the hands of the wrong people, and an AI arms race could inadvertently lead to an AI war
with catastrophic casualties. To avoid being thwarted by the adversary, these weapons could
be designed to be extremely difficult to turn off, so humans could plausibly lose control of
such a situation. This risk exists even with narrow AI, but it escalates as levels of AI
intelligence and autonomy increase.
Loss of skills
Smart software makes our lives simpler and reduces the number of monotonous chores we
have to perform, but in the process we shed small skills: memorizing phone numbers, being
able to anticipate rain by looking at the sky, and so on. None of this is immediately critical,
yet in everyday life we are steadily losing abilities and entrusting them to technology. This
has been going on for millennia; for example, hardly anyone knows how to start a fire by
hand today. In this context, I believe it is vital to ask: aren't we becoming overly reliant on
modern technology? How powerless do we want to be in the absence of digital technology?
Not to mention that sophisticated computer systems will progressively understand who
we are, what we do, and why we do it, and will provide us with tailored services. Isn't that a
significant risk to human autonomy?
Bias
An AI system is only as good as the data it is trained on. Google researcher Timnit Gebru
said the root of bias is social rather than technological, and called scientists like herself
“some of the most dangerous people in the world, because we have this illusion of
objectivity.” The scientific field, she noted, “has to be situated in trying to understand the
social dynamics of the world, because most of the radical change happens at the social
level.” If a certain demographic is underrepresented in the data used to train a machine
learning model, the model's output may be biased against that community. Facial recognition
technology is among the most recent applications to be scrutinized, although there have been
other instances of prejudice in recent years. In most cases, it is the underlying data that
causes the bias. According to the McKinsey Global Institute, “Models may be trained on data
containing inequities.” Bias can even result from the way the data was collected.
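To make the underrepresentation mechanism concrete, here is a small self-contained sketch (all numbers are invented for illustration): a classifier that learns a single decision threshold from data dominated by one group fits that group's score distribution, and the underrepresented group pays the accuracy cost.

```python
# Toy data: (score, true_label, group). Group A has 90 samples, group B only 10,
# and group B's positives sit at lower scores (assumed, purely illustrative).
group_a = ([(0.55 + 0.01 * i, 1, "A") for i in range(45)]
           + [(0.01 + 0.01 * i, 0, "A") for i in range(45)])
group_b = ([(s, 1, "B") for s in (0.35, 0.38, 0.41, 0.44, 0.47)]
           + [(s, 0, "B") for s in (0.10, 0.15, 0.20, 0.25, 0.30)])
data = group_a + group_b

def errors(threshold, rows):
    # An error occurs when (score >= threshold) disagrees with the true label.
    return sum((score >= threshold) != bool(label) for score, label, _ in rows)

# "Training": pick the threshold with the fewest total errors. Because group A
# has 9x more samples, its score geometry dominates the choice.
best_t = min((i / 100 for i in range(101)), key=lambda t: errors(t, data))

accuracy = {
    g: 1 - errors(best_t, rows) / len(rows)
    for g, rows in (("A", group_a), ("B", group_b))
}
print(accuracy)  # group B fares noticeably worse than group A
```

The learned threshold classifies group A perfectly while misclassifying most of group B's positives, even though nothing in the code mentions groups during training: the bias comes entirely from the data's composition.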
Various forms of AI bias are detrimental, too. Speaking recently to the New York
Times, Princeton computer science professor Olga Russakovsky said it goes well beyond
gender and race. In addition to data and algorithmic bias (the latter of which can
“amplify” the former), AI is developed by humans and humans are inherently biased.
Socioeconomic inequality
Widening socioeconomic inequality sparked by AI-driven job loss is another concern. Work,
along with education, has traditionally been a driver of social mobility, but when it is the
sort of work that is vulnerable to AI takeover, mobility suffers: research has shown that
people who are left out in the cold are considerably less likely to acquire or seek retraining
than those in more secure positions.
Sources:
https://futureoflife.org/background/benefits-risks-of-artificial-intelligence/
https://www.mckinsey.com/featured-insights/future-of-work/jobs-lost-jobs-gained-what-the-future-of-work-will-mean-for-jobs-skills-and-wages#part1