CCS345-Unit I


UNIT I

INTRODUCTION
Definition of morality and ethics in AI-Impact on society-Impact on human psychology-
Impact on the legal system-Impact on the environment and the planet-Impact on trust

What is AI?

Artificial intelligence is the study of how to make computers do things which, at the moment, people do better.

FUTURE OF ARTIFICIAL INTELLIGENCE

Cyber security:

Undoubtedly, cyber security is a priority for every organization seeking to keep its data secure. Some predictions for how AI will change cyber security:

o Security incidents will be monitored with AI tools.
o The origin of cyber-attacks will be identified with NLP.
o Rule-based tasks and processes will be automated with the help of RPA bots.

However, as with any powerful technology, AI can also be used as a threat by attackers. They can use AI unethically to mount automated attacks that may be difficult to defend against.
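As a toy illustration of AI-assisted incident monitoring, the sketch below flags event counts that deviate sharply from a learned baseline. The threshold and the failed-login figures are hypothetical and not drawn from any real security product.

```python
# A minimal sketch of AI-assisted security monitoring: flag event counts
# that deviate sharply from a learned baseline. Threshold and event data
# are illustrative assumptions, not from any real monitoring tool.
from statistics import mean, stdev

def flag_anomalies(baseline_counts, current_count, threshold=3.0):
    """Return True if current_count deviates from the baseline by more
    than `threshold` standard deviations (a simple z-score test)."""
    mu = mean(baseline_counts)
    sigma = stdev(baseline_counts)
    if sigma == 0:
        return current_count != mu
    z = abs(current_count - mu) / sigma
    return z > threshold

# Hypothetical hourly failed-login counts observed in the past.
history = [12, 9, 11, 10, 13, 12, 10, 11]
print(flag_anomalies(history, 11))   # within the normal range
print(flag_anomalies(history, 120))  # a likely brute-force spike
```

Real AI-based monitoring uses far richer features than a single count, but the same principle applies: learn what "normal" looks like, then surface deviations for review.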

AI in Education

The development of a country depends on the quality of education its youth receives.


Right now, many AI courses are available, but in the future AI is going to transform the classical way of education. The world no longer needs as many skilled labourers in manufacturing industries, where work is increasingly replaced by robots and automation. The education system could become far more effective by adapting to each individual's personality and ability. It would give brighter students a chance to shine and give struggling students a better way to cope.

Right Education can enhance the power of individuals/nations; on the other hand,
misuse of the same could lead to devastating results.
AI in Finance

Quantification of growth for any country is directly related to its economic and financial condition. As AI has enormous scope in almost every field, it has great potential to boost the economic health of individuals and nations. Nowadays, AI algorithms are already being used to manage equity funds.

An AI system could take a large number of parameters into account while figuring out the best way to manage funds, and it could perform better than a human manager. AI-driven strategies in the field of finance are going to change the classical way of trading and investing. This could be devastating for fund-management firms that cannot afford such facilities and could affect business on a large scale, as decisions would be quick and abrupt. The competition would be tough and on edge all the time.

AI in Military and Cyber security

AI-assisted military technologies have produced autonomous weapon systems that won't need humans at all, potentially offering the safest way to enhance the security of a nation. We could see robot soldiers in the near future, as intelligent as a human soldier or commando and able to perform some of the same tasks.

AI-assisted strategies would enhance mission effectiveness and provide the safest way to execute missions. The concerning part of an AI-assisted system is that how its algorithm reaches decisions is not easily explainable. Deep neural networks learn fast and keep learning continuously, so the main problem here is explainable AI. Such a system could produce devastating results if it falls into the wrong hands or makes wrong decisions on its own.

Transportation:

The fully autonomous vehicle has not yet been developed in the transportation sector, but researchers are making progress in this field. AI and machine learning are being applied in the cockpit to help reduce workload, handle pilot stress and fatigue, and improve on-time performance. There are several challenges to the adoption of AI in transportation, especially in public transportation, and there is a great risk of over-dependence on automatic and autonomous systems.

Although it could take a decade or more to perfect them, autonomous cars will one day ferry
us from place to place.

 Manufacturing: AI-powered robots work alongside humans to perform a limited range of tasks like assembly and stacking, while predictive-analysis sensors keep equipment running smoothly.
 Media: Journalism is harnessing AI, too, and will continue to benefit from it. Bloomberg uses Cyborg technology to help make quick sense of complex financial reports. The Associated Press employs the natural language abilities of Automated Insights to produce 3,700 earnings-report stories per year, nearly four times more than in the recent past.
 Customer Service: Last but hardly least, Google is working on an AI assistant that can place human-like calls to make appointments at, say, your neighborhood hair salon. In addition to words, the system understands context and nuance.

Employment:

Nowadays, hiring has become easier for job seekers and simpler for employers due to the use of Artificial Intelligence. AI is already used in the job market through strict rules and algorithms that automatically reject a candidate's resume if it does not fulfil the company's requirements. It is expected that in the future most of the employment process will be driven by AI-enabled applications, ranging from marking written interviews to conducting telephonic rounds.

For jobseekers, various AI applications such as Rezi and Jobseeker help build strong resumes and find the jobs that best match their skills.
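The automatic rejection rule described above can be sketched as a simple filter. The required skills, the experience threshold, and the resume fields below are invented for illustration; real screening systems are far more elaborate (and raise the fairness concerns discussed later in this unit).

```python
# A hypothetical sketch of rule-based resume screening: reject a resume
# automatically if it lacks any required skill or enough experience.
# Skill names and resume fields are illustrative assumptions.

REQUIRED_SKILLS = {"python", "sql"}   # assumed job requirements
MIN_YEARS_EXPERIENCE = 2

def screen_resume(resume):
    """Return 'accept' if the resume meets every rule, else 'reject'."""
    skills = {s.lower() for s in resume["skills"]}
    if not REQUIRED_SKILLS.issubset(skills):
        return "reject"
    if resume["years_experience"] < MIN_YEARS_EXPERIENCE:
        return "reject"
    return "accept"

candidate = {"skills": ["Python", "SQL", "Excel"], "years_experience": 3}
print(screen_resume(candidate))  # → accept
```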

APPLICATIONS OF AI

Game playing: IBM's Deep Blue became the first computer program to defeat the world
champion (Garry Kasparov) in a chess match. The value of IBM's stock increased by $18
billion.
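Chess programs like Deep Blue build on game-tree search. The minimax sketch below only illustrates the core idea on a toy tree; Deep Blue's actual search was far more sophisticated, with alpha-beta pruning, handcrafted evaluation, and specialized hardware.

```python
# A minimal minimax sketch on a toy game tree. Leaves are static
# evaluation scores; internal nodes are lists of child positions.

def minimax(node, maximizing):
    """Return the best achievable score from `node`, assuming both
    players play optimally."""
    if isinstance(node, (int, float)):   # leaf: static evaluation
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Toy tree: the maximizing player picks a move; the minimizing player
# then picks among the resulting leaf evaluations.
tree = [[3, 5], [2, 9]]
print(minimax(tree, True))  # → 3
```

The maximizer chooses the first branch: its guaranteed score there is min(3, 5) = 3, better than the min(2, 9) = 2 guaranteed by the second branch.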

Autonomous control: The ALVINN computer vision system was trained to steer a car
to keep it following a lane. The computer-controlled minivan navigated across the
United States for 2,850 miles and was in control of the steering 98% of the
time; a human took over the other 2%, mostly at exit ramps.

Diagnosis: Medical diagnosis programs based on probabilistic analysis have been able to
perform at the level of an expert physician in several areas of medicine.
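A probabilistic diagnosis program rests on Bayes' rule. The sketch below computes the probability of disease given a positive test; the prevalence, sensitivity, and specificity values are invented for illustration, not taken from any real diagnostic system.

```python
# Bayes' rule for diagnosis: P(disease | positive test).
# The numeric parameters below are hypothetical.

def posterior(prior, sensitivity, specificity):
    """P(disease | positive) = P(pos | disease) P(disease) / P(pos)."""
    p_pos = sensitivity * prior + (1 - specificity) * (1 - prior)
    return sensitivity * prior / p_pos

# Disease prevalence 1%, test sensitivity 90%, specificity 95%.
p = posterior(prior=0.01, sensitivity=0.90, specificity=0.95)
print(round(p, 3))  # → 0.154
```

The result illustrates why base rates matter: even with a fairly accurate test, a positive result for a rare disease still leaves the patient more likely healthy than ill.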

Logistics Planning: During the Gulf crisis of 1991, U.S. forces deployed the Dynamic
Analysis and Replanning Tool (DART) to do automated logistics planning and scheduling
for transportation. This involved up to 50,000 vehicles, cargo, and people at a time, and
had to account for starting points, destinations, routes, and conflict resolution.

Robotics: Many surgeons now use robot assistants in microsurgery.


Language understanding and problem solving: PROVERB is a computer program that solves
crossword puzzles better than most humans, using constraints on possible word fillers, a large
database of past puzzles, and a variety of information sources including dictionaries and
online databases such as a list of movies and the actors that appear in them.
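One kind of constraint a crossword solver applies is keeping only candidate words consistent with letters already placed. The sketch below shows that filtering step; the word list is illustrative, not PROVERB's actual database.

```python
# Constraint filtering for a crossword slot: keep candidate words that
# fit the pattern of letters already placed ('_' marks an open square).
# The word list is a toy example.

def matches(word, pattern):
    """True if `word` fits `pattern` position by position."""
    return len(word) == len(pattern) and all(
        p == "_" or p == w for p, w in zip(pattern, word)
    )

def candidates(words, pattern):
    return [w for w in words if matches(w, pattern)]

word_list = ["actor", "acted", "alarm", "arena"]
print(candidates(word_list, "a_t__"))  # → ['actor', 'acted']
```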

What is AI ethics?

AI ethics is a system of moral principles and techniques intended to inform the development
and responsible use of artificial intelligence technology.

A strong AI code of ethics can include avoiding bias, ensuring privacy of users and their data,
and mitigating environmental risks. Codes of ethics in companies and government-led
regulatory frameworks are two main ways that AI ethics can be implemented.

Principles of ethics
Respect the Law and Act with Integrity. We will employ AI in a manner that respects human
dignity, rights, and freedoms. Our use of AI will fully comply with applicable legal
authorities and with policies and procedures that protect privacy, civil rights, and civil
liberties.
Why AI ethics is important
 AI is a technology designed by humans to replicate, augment or replace human intelligence.
 Poorly designed projects built on data that is faulty, inadequate or biased can have
unintended, potentially harmful, consequences.
 Moreover, the rapid advancement of algorithmic systems means that in some cases it is not
clear to us how the AI reached its conclusions, so we are essentially relying on systems we
cannot fully explain, whose decisions can affect society.

Ethical challenges of AI

Enterprises face several ethical challenges in their use of AI technologies.

 Explainability. When AI systems go awry, teams need to be able to trace through a
complex chain of algorithmic systems and data processes to find out why. Organizations
using AI should be able to explain the source data, resulting data, what their algorithms
do and why they are doing that. "AI needs to have a strong degree of traceability to
ensure that if harms arise, they can be traced back to the cause," said Adam Wisniewski,
CTO and co-founder of AI Clearing.

 Responsibility. Society is still sorting out responsibility when decisions made by AI
systems have catastrophic consequences, including loss of capital, health or life. The
process of addressing accountability for the consequences of AI-based decisions should
involve a range of stakeholders, including lawyers, regulators, AI developers, ethics
bodies and citizens. One challenge is finding the appropriate balance in cases where an
AI system may be safer than the human activity it is duplicating but still causes problems,
such as weighing the merits of autonomous driving systems that cause fatalities but far
fewer than people do.

 Fairness. In data sets involving personally identifiable information, it is extremely
important to ensure that there are no biases in terms of race, gender or ethnicity.

 Misuse. AI algorithms may be used for purposes other than those for which they were
created. Wisniewski said these scenarios should be analyzed at the design stage to
minimize the risks and introduce safety measures to reduce the adverse effects in such
cases.
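The fairness challenge above can be probed, in a very simplified form, with a demographic-parity audit: compare the rate of favourable decisions across groups. The predictions and group labels below are invented for illustration, and real fairness auditing uses several complementary metrics.

```python
# A hedged sketch of a demographic-parity check: the largest gap in
# positive-outcome rate between any two groups. Data is illustrative.

def positive_rate(predictions, groups, group):
    picks = [p for p, g in zip(predictions, groups) if g == group]
    return sum(picks) / len(picks)

def parity_gap(predictions, groups):
    """Largest difference in positive-outcome rate between groups."""
    rates = [positive_rate(predictions, groups, g) for g in set(groups)]
    return max(rates) - min(rates)

preds  = [1, 0, 1, 1, 0, 1, 0, 0]               # 1 = favourable decision
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(parity_gap(preds, groups))  # → 0.5
```

A large gap does not by itself prove unfair treatment, but it is the kind of traceable signal an audit can flag for human investigation.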

Benefits of ethical AI

The rapid acceleration in AI adoption across businesses has coincided with -- and in many
cases helped fuel -- two major trends: the rise of customer-centricity and the rise in social
activism.

"Businesses are rewarded not only for providing personalized products and services but also
for upholding customer values and doing good for the society in which they operate," said
Sudhir Jha, CEO and executive vice president, Brighterion unit, Mastercard.

AI plays a huge role in how consumers interact with and perceive a brand. Responsible use is
necessary to ensure a positive impact. In addition to consumers, employees want to feel good
about the businesses they work for. "Responsible AI can go a long way in retaining talent and
ensuring smooth execution of a company's operations," Jha said.

What is an AI code of ethics?

A proactive approach to ensuring ethical AI requires addressing three key areas, according to
Jason Shepherd, CEO at Nubix.

 Policy. This includes developing the appropriate framework for driving standardization
and establishing regulations. Documents like the Asilomar AI Principles can be useful to
start the conversation. Government agencies in the United States, Europe and elsewhere
have launched efforts to ensure ethical AI, and a raft of standards, tools and techniques
from research bodies, vendors and academic institutions are available to help
organizations craft AI policy. See "Resources for developing ethical AI" (below). Ethical
AI policies will need to address how to deal with legal issues when something goes
wrong. Companies should consider incorporating AI policies into their own codes of
conduct. But effectiveness will depend on employees following the rules, which may not
always be realistic when money or prestige are on the line.

 Education. Executives, data scientists, front-line employees and consumers all need to
understand policies, key considerations and potential negative impacts of unethical AI
and fake data. One big concern is the tradeoff between ease of use around data sharing
and AI automation and the potential negative repercussions of oversharing or adverse
automations. "Ultimately, consumers' willingness to proactively take control of their data
and pay attention to potential threats enabled by AI is a complex equation based on a
combination of instant gratification, value, perception and risk," Shepherd said.

 Technology. Executives also need to architect AI systems to automatically detect fake
data and unethical behavior. This requires not just looking at a company's own AI but
vetting suppliers and partners for the malicious use of AI. Examples include the
deployment of deep fake videos and text to undermine a competitor, or the use of AI to
launch sophisticated cyberattacks. This will become more of an issue as AI tools become
commoditized. To combat this potential snowball effect, organizations need to invest in
defensive measures rooted in open, transparent and trusted AI infrastructure. Shepherd
believes this will give rise to the adoption of trust fabrics that provide a system-level
approach to automating privacy assurance, ensuring data confidence and detecting
unethical use of AI.

Examples of AI codes of ethics

An AI code of ethics can spell out the principles and provide the motivation that drives
appropriate behaviour. For example, Mastercard's Jha said he is currently working with the
following tenets to help develop the company's current AI code of ethics:

 An ethical AI system must be inclusive, explainable, have a positive purpose and use data
responsibly.

 An inclusive AI system is unbiased and works equally well across all spectra of society.
This requires full knowledge of each data source used to train the AI models in order to
ensure no inherent bias in the data set. It also requires a careful audit of the trained
model to filter any problematic attributes learned in the process. And the models need to
be closely monitored to ensure no corruption occurs in the future as well.

 An explainable AI system supports the governance required of companies to ensure the
ethical use of AI. It is hard to be confident in the actions of a system that cannot be
explained. Attaining confidence might entail a tradeoff in which a small compromise in
model performance is made in order to select an algorithm that can be explained.

 An AI system endowed with a positive purpose aims to, for example, reduce fraud,
eliminate waste, reward people, slow climate change, cure disease, etc. Any technology
can be used for doing harm, but it is imperative that we think of ways to safeguard AI
from being exploited for bad purposes. This will be a tough challenge, but given the wide
scope and scale of AI, the risk of not addressing this challenge and misusing this
technology is far greater than ever before.

 An AI system that uses data responsibly observes data privacy rights. Data is key to an AI
system, and often more data results in better models. However, it is critical that in the
race to collect more and more data, people's right to privacy and transparency isn't
sacrificed. Responsible collection, management and use of data is essential to creating an
AI system that can be trusted. In an ideal world, data should only be collected when
needed, not continuously, and the granularity of data should be as narrow as possible. For
example, if an application only needs zip code-level geolocation data to provide weather
prediction, it shouldn't collect the exact location of the consumer. And the system should
routinely delete data that is no longer required.
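The zip-code example above is an instance of data minimization: coarsen data to the granularity the application actually needs. The sketch below applies the idea to geolocation; the coordinates and rounding level are illustrative assumptions.

```python
# A sketch of data minimization for location data: keep only area-level
# coordinates rather than the exact position. Values are illustrative.

def coarsen_location(lat, lon, decimals=1):
    """Round coordinates so only area-level location is retained.
    One decimal degree is roughly an 11 km grid at the equator."""
    return round(lat, decimals), round(lon, decimals)

exact = (40.712776, -74.005974)   # a precise device location
print(coarsen_location(*exact))   # → (40.7, -74.0)
```

A weather application needs no more precision than this, so storing only the coarsened value honours the "as narrow as possible" granularity principle.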

Resources for developing ethical AI

Listed alphabetically, the following are a sampling of the growing number of organizations,
policymakers and regulatory standards focused on developing ethical AI practices:

1. AI Now Institute. Focuses on the social implications of AI and policy research in
responsible AI. Research areas include algorithmic accountability, antitrust concerns,
biometrics, worker data rights, large-scale AI models and privacy. The report "AI Now
2023 Landscape: Confronting Tech Power" provides a deep dive into many ethical issues
that can be helpful in developing ethical AI policies.

2. Berkman Klein Center for Internet & Society at Harvard University. Fosters
research into the big questions related to the ethics and governance of AI. Research
supported by the Berkman Klein Center has tackled topics that include information
quality, algorithms in criminal justice, development of AI governance frameworks and
algorithmic accountability.

3. CEN-CENELEC Joint Technical Committee on Artificial Intelligence (JTC 21). An
ongoing EU initiative for various responsible AI standards. The group plans to produce
standards for the European market and inform EU legislation, policies and values. It also
plans to specify technical requirements for characterizing transparency, robustness and
accuracy in AI systems.

4. Institute for Technology, Ethics and Culture (ITEC) Handbook. A collaborative
effort between Santa Clara University's Markkula Center for Applied Ethics and the
Vatican to develop a practical, incremental roadmap for technology ethics. The handbook
includes a five-stage maturity model, with specific measurable steps that enterprises can
take at each level of maturity. It also promotes an operational approach for implementing
ethics as an ongoing practice, akin to DevSecOps for ethics. The core idea is to bring
legal, technical and business teams together during ethical AI's early stages to root out the
bugs at a time when they're much cheaper to fix than after responsible AI deployment.

5. ISO/IEC 23894:2023 IT-AI-Guidance on risk management. The standard describes
how an organization can manage risks specifically related to AI. It can help standardize
the technical language characterizing underlying principles and how these principles
apply to developing, provisioning or offering AI systems. It also covers policies,
procedures and practices for assessing, treating, monitoring, reviewing and recording risk.
It's highly technical and oriented toward engineers rather than business experts.

6. NIST AI Risk Management Framework (AI RMF 1.0). A guide for government
agencies and the private sector on managing new AI risks and promoting responsible AI.
Abhishek Gupta, founder and principal researcher at the Montreal AI Ethics Institute,
pointed to the depth of the NIST framework, especially its specificity in implementing
controls and policies to better govern AI systems within different organizational contexts.

7. Nvidia NeMo Guardrails. Provides a flexible interface for defining specific behavioral
rails that bots need to follow. It supports the Colang modeling language. One chief data
scientist said his company uses the open source toolkit to prevent a support chatbot on a
lawyer's website from providing answers that might be construed as legal advice.

8. Stanford Institute for Human-Centered Artificial Intelligence (HAI). Provides
ongoing research and guidance into best practices for human-centered AI. One early
initiative in collaboration with Stanford Medicine is Responsible AI for Safe and
Equitable Health, which addresses ethical and safety issues surrounding AI in health and
medicine.

9. "Towards Unified Objectives for Self-Reflective AI." Authored by Matthias Samwald,
Robert Praas and Konstantin Hebenstreit, the paper takes a Socratic approach to identify
underlying assumptions, contradictions and errors through dialogue and questioning
about truthfulness, transparency, robustness and alignment of ethical principles. One goal
is to develop AI meta-systems in which two or more component AI models complement,
critique and improve their mutual performance.

10. World Economic Forum's "The Presidio Recommendations on Responsible
Generative AI." Provides 30 action-oriented recommendations to navigate AI
complexities and harness its potential ethically. This white paper also includes sections
on responsible development and release of generative AI, open innovation and
international collaboration, and social progress.

Impact on human psychology

Psychology is such a vast field that the potential benefits of AI are wide ranging: they could include researching mental health to help enhance wellbeing, better understanding the relationships we form, self-improvement, and battling addiction. There can also be benefits to our communication with, and understanding of, other people.

Factors affecting human psychology

Behaviour is affected by factors relating to the person, including:
 physical factors - age, health, illness, pain, influence of a substance or medication
 personal and emotional factors - personality, beliefs, expectations, emotions, mental health
 life experiences - family, culture, friends, life events

Perspectives of psychology

There are seven modern perspectives used by psychologists in the field today:
 Behaviorist Perspective
 Psychodynamic Perspective
 Humanistic Perspective
 Cognitive Perspective
 Biological Perspective
 Evolutionary Perspective
 Cross-Cultural Perspective
Goals of psychology
 The four major goals of psychology are to describe, explain, predict, and change or
control the mind and behaviour of others. As an interdisciplinary and multifaceted
science, psychology includes a wide range of subfields, such as social behaviour,
human development, and cognitive functions.
Impact of using AI
 AI has the potential to bring about numerous positive changes in society,
including enhanced productivity, improved healthcare, and increased access to
education. AI-powered technologies can also help solve complex problems and make
our daily lives easier and more convenient.

Why trust is important in AI

Building trust in AI is essential for scaling up innovation and ensuring widespread acceptance
and adoption of AI. The workshops organized by Now AI provided valuable insights into
industry-specific trust needs, challenges, and potential opportunities for innovation.

Teach AI the value of trust

To truly achieve and sustain trust in AI, an organization must understand, govern, fine-tune
and protect all of the components embedded within and around the AI system. These
components can include data sources, sensors, firmware, software, hardware, user interfaces,
networks as well as human operators and users.
