
Do the benefits of artificial intelligence outweigh the risks?

We need to develop AI that aligns with human values

Towards the end of the Second World War, a group of scientists in America working to develop an atomic bomb for the
Manhattan Project warned that using the weapon would inevitably lead to a geopolitical landscape characterized by a
nuclear arms race. This would force America, they said, to outpace other nations in building up nuclear armaments.
They recommended that if the military did choose to use the weapon, an international effort for nuclear non-
proliferation should promptly be established.

The committee’s warnings went unheeded. After the nuclear attacks on Hiroshima and Nagasaki, it turned out they
had been eerily prescient. The arms race between America and the Soviet Union escalated during the cold war and
today rogue states like North Korea threaten peace with their nuclear arsenals.

A potentially even more transformative technology is currently being developed: a technology which could easily be
distributed to rogue nations and terrorist groups without the need for expensive, specialized equipment. Prominent
scientists and technologists like the late Stephen Hawking and Elon Musk have voiced concern for the risks associated
with the accelerating development of artificial intelligence (AI).

Many experts in the field, like Stuart Russell, the founder of the Center for Human-Compatible AI at UC Berkeley,
believe that concerns about the misuse of AI should be taken seriously. More than 8,000 researchers, engineers,
executives, and investors have signed an open letter recommending a direction for responsible AI research that
recognizes social impact and seeks to build robust technology that aligns with human values.

To avoid repeating history, policymakers should begin to think about regulating AI development now that the
community itself is calling for policy action. As with past technologies, well-structured regulation can mitigate costly
externalities, while ill-informed regulatory measures can interfere with progress. Policymakers must cooperate closely
with researchers to implement protocols that align AI with human values without being overly burdensome to
developers.

The emerging field of AI safety has already begun discussing guidelines to tackle the potential dangers of the
technology. Sessions devoted to AI safety and ethics have taken place at major scientific conferences and several
books and articles on the topic have been published. By understanding researchers’ concerns, regulators can address
the dangers of AI so that the benefits of the technology greatly outweigh the risks.

AI is a general term for software that mimics human cognition or perception. Because AI encompasses a broad set of
algorithms, policymakers must take a nuanced approach to regulation, underscoring the need for technical
collaboration. At a high level, a distinction is made between narrow AI and artificial general intelligence (AGI).

Narrow AI is more intelligent, or at least faster, than humans at a specific task or set of tasks, like playing the board
game Go or finding patterns in large datasets. On the other hand, an AGI would beat humans at a number of cognitive
tasks, termed cognitive superpowers by Nick Bostrom, a philosopher at the University of Oxford. These include
intelligence amplification, strategizing, social manipulation, hacking, technology development and economic
productivity.

Narrow AI is responsible for many useful tools that have already become mainstream: speech and image recognition,
search engines, spam filters, product and movie recommendations. The list goes on. Narrow AI also has the potential
to enable promising technologies like driverless cars, tools for rapid scientific discovery and digital assistants for
medical image analysis.

In the near-term, some of these technologies have the potential to be abused by malicious groups. The cost of attacks
requiring human labor or expertise could be reduced, and new threats exploiting vulnerabilities in AI systems could
emerge. AI can automate labor-intensive cyberattacks, coordinate fleets of drones, allow for mass surveillance through
facial recognition and social data mining, or generate realistic fake videos for political propaganda.

Furthermore, increased automation gives more physical control to digital systems, making cyberattacks even more
dangerous. Regulation can ensure that AI engineers are employing best practices in cybersecurity and limiting
distribution of military technology. Considering the portability of AI, enforcing these rules will be difficult and
international cooperation will likely be necessary.

Some researchers are concerned that, since algorithms are only as good as the data they are fed, narrow AI can make
biased decisions. Biased or incomplete training data will be reflected in the output. One study with a machine learning
program trained on texts found that names associated with being European-American were significantly more likely to
be correlated with pleasant terminology than African-American names. AI that makes consequential decisions, like
hiring job candidates or predicting recidivism, should be screened before being adopted. Regulatory agencies will have
to decide if an AI makes fair decisions by combing through training data for stereotypes.

On the other hand, the possibility of AGI is uncertain but some futurists believe its unchecked consequences could be
apocalyptic. Some speculate that an AGI could appear within the next few decades in a so-called hard take-off, where
its capabilities increase very rapidly as the program undergoes a process of recursive self-improvement. At the same
time, others believe that intelligent agents have intrinsic limitations to augmenting their predictive capabilities
autonomously, and that doomsday scenarios are unlikely, if not outright impossible.

Nonetheless, researchers are already discussing the dangers that machine superintelligence might pose. One thesis
claims that an AGI with almost any programmed goal would develop a set of “basic AI drives,” such as self-
preservation, self-improvement and resource acquisition. In this model, the AGI would be motivated to spread itself
across computer networks and evade programmers. The AGI would leverage its cognitive superpowers to escape
containment and achieve self-determination.

For example, the AGI might train itself on psychology and economics textbooks and use personal information about its
developers to learn how to bribe its way to freedom. The AGI may then see humans as a threat to its self-preservation
and seek to extinguish the human species.

Researchers have suggested several ways to contain an AGI during testing, which policymakers can use as guidelines
for drafting regulations. Containment strategies range from filtering training data for sensitive information to
significantly handicapping the development process by, for example, limiting output to simple yes/no questions and
answers. Some researchers have suggested dividing containment procedures into light, medium, and heavy categories.
Regulations should avoid slowing progress when possible, so the weight of containment should vary with the maturity
of the AGI program.

Containment is a short-term solution for AGI testing. In the long run, regulations must ensure that an internet-enabled
AGI is indefinitely stable and has benevolent properties such as value learning and corrigibility before being deployed.
Value learning is an AGI’s ability to learn what humans value and act in accordance with those values. Corrigibility
refers to an AGI’s lack of resistance to bug-fixes or recoding.

One can imagine how an ideal AGI with a conception of justice and solidarity would be beneficial. Such an AGI could
replace corrupt governments and biased judicial systems, making decisions according to a democratically-determined
objective function. Moreover, a sufficiently sophisticated AGI could perform virtually any job done by a human. It is
conceivable that the economy would be restructured in such a way that humans are free to pursue their creative
passions while AGI drives productivity. As with past technologies, there will also be useful applications that we cannot
even foresee.

There are many unknowns in the progress of AI and concerns should be met with due caution. But a fear of the
unknown should not stop the advance of responsible AI development. Rather than ignoring researchers’ concerns until
the technology is mature, as with nuclear weapons, governments should open dialogue with AI researchers to design
regulations that balance practicality with security.

Sentences

● Inevitably (adverb)

○ As time passes and technology advances, change is inevitable; soon many people will have to change their jobs because of this constant evolution.

● Weapon (noun)

○ Normally, when science discovers or creates something new, the government takes it and transforms it into a weapon to use in a possible war with other countries.

● Threaten (verb)

○ Factory jobs are threatened by AI because it is faster, performs better, and can work for long periods.

● Technology (noun)

○ In the last 50 years, technology has advanced very fast. Before the ’90s, people thought that talking, chatting, or just communicating in real time with a person in another part of the world was a crazy dream; after 2000, it became a reality.

● Prominent (Adjective): The vote of the Hispanic community played a prominent role in the elections.

● Misuse (Noun): The misuse of social media platforms can have serious consequences for individuals and society.

● Mitigate (Verb): Planting trees can help mitigate the effects of climate change by absorbing carbon dioxide.

● Potential (Noun): The young artist showed great potential with their exceptional talent and unique style.

● Outweigh (Verb): Experience outweighs credentials for some potential employers.

● Distinction (Noun): There are a lot of distinctions between alligators and crocodiles.

● Mainstream (Adjective): The mainstream media plays a crucial role in shaping public opinion.

● Analysis (Noun): Financial analysis can look easy, but it is difficult.

● Malicious (Adjective): Some criminals use malicious software called malware to steal bank account information.

● Vulnerabilities (Noun): There are a lot of vulnerabilities in cyber security that have to be mitigated.

● Propaganda (Noun): The president’s propaganda campaign was designed to influence people.

● Cyberattacks (Noun): International Bank suffered many cyberattacks last week.

● Researchers (Noun): Scientific researchers want to study some environmental changes.

● Algorithms (Noun): The system engineer was creating a program that includes complex algorithms.

● Capabilities (Noun): He has the capability to make people feel comfortable because he is kind.

Sentences

To Convince (Verb): It was really complicated to convince the teacher, but in the end he said, “Ok, let’s take a break.”

To Influence (Verb): Nowadays, parents are trying to influence their children to take up extracurricular activities such as karate, drawing, and sports.


Authorities (Noun): We must respect the authorities in our lives.

Evidence (Noun): Detectives have the evidence to accuse him of the murder.

To Enhance (Verb): I believe that I can enhance my English skills by studying every day.

Accurate (adjective): She made an accurate calculation of her steps.

Credibility (Noun): He loses his credibility when he lies.

Persuasion (Noun): With a little persuasion, the boy got his mother to buy him an ice cream.

Appealing (adjective): We included music to make the meeting more appealing to young people.

Sympathy (Noun): I feel sympathy for the homeless.

Sadness (Noun): Her sadness was greater because of her grandfather's death.

WRITE PROS AND CONS ABOUT ARTIFICIAL INTELLIGENCE.

PROS:
- AI can increase efficiency and productivity.
- AI has 24/7 availability, and it makes people's lives easier.
- AI could create new jobs.

CONS:
- Security risks.
- AI has high costs of creation.
- AI lacks morality (consciousness and sensibility).

OUTLINE

POSITION: I disagree with it.

BODY 1: AI could create new jobs

-         AI strives to transform jobs, not eliminate them.

-         AI is created and controlled by humans and has been trained on the data produced
by humans

BODY 2: AI can increase efficiency and productivity

- Bring more innovation

- It works with efficient management practices

CONCLUSION:

- AI could create a brighter future for human beings because it brings more efficiency
and productivity in many activities, and it could transform and increase employment
by creating new jobs. I recommend people be prepared for the new era of everything
digitized.

Artificial Intelligence: Will It Help Us or Replace Us?

Introduction

No one expected the age of Artificial Intelligence (AI) to be upon us so quickly. Many of the
things that had been promised, like self-driving cars and full-service robot valets, always
seemed to meet delays and disappointment, but nowadays AI is all around us, bringing
technological benefits. AI is transforming the way humans live: it increases efficiency and
productivity in everyday tasks, and this can open the way to the creation of new jobs.

Body 1:

AI could create new jobs. Artificial intelligence will be a great job engine in the years to come.
This perception is difficult to accept because those jobs will look nothing like the ones that exist
today, but it is true: AI strives to transform jobs, not eliminate them. For instance, the
emergence of AI-powered digital assistants and smart home appliances has opened up new
career prospects for hardware engineers, data analysts, and software developers. Moreover, AI
is created and controlled by humans, who train it on data produced by humans through the
creation and application of algorithms built into dynamic computing environments.

Body 2:

AI can increase efficiency and productivity because it brings more innovation in the way its
systems function, making it possible for machines to learn from experience, adjust to new
inputs, and perform human-like tasks. It also supports efficient management practices by
automating repetitive and time-consuming tasks, reducing manual work, and improving
operational efficiency.

Conclusion:

AI could create a brighter future for human beings because it brings more efficiency and
productivity in many activities, and it could transform and increase employment by creating
new jobs. I recommend people be prepared for the new era of everything digitized.
