Unit 3


CCS345-ETHICS & AI-UNIT-III

UNIT III AI STANDARDS AND REGULATION

SYLLABUS

Model Process for Addressing Ethical Concerns During System Design - Transparency of
Autonomous Systems-Data Privacy Process- Algorithmic Bias Considerations - Ontological
Standard for Ethically Driven Robotics and Automation Systems

Introduction

British Standard BS 8611 assumes that physical hazards imply ethical hazards, and defines
ethical harm as affecting 'psychological and/or societal and environmental well-being.' It also
recognises that physical and emotional hazards need to be balanced against expected benefits
to the user.

The standard highlights the need to involve the public and stakeholders in development of
robots and provides a list of key design considerations including:

 Robots should not be designed primarily to kill humans;


 Humans remain responsible agents;
 It must be possible to find out who is responsible for any robot;
 Robots should be safe and fit for purpose;
 Robots should not be designed to be deceptive;
 The precautionary principle should be followed;
 Privacy should be built into the design;
 Users should not be discriminated against, nor forced to use a robot.

Particular guidelines are provided for roboticists, particularly those conducting research.
These include the need to engage the public, consider public concerns, work with experts
from other disciplines, correct misinformation and provide clear instructions. Specific
methods to ensure ethical use of robots include: user validation (to ensure the robot can be, and is, operated as expected), software verification (to ensure the software works as anticipated),
involvement of other experts in ethical assessment, economic and social assessment of
anticipated outcomes, assessment of any legal implications, compliance testing against
relevant standards. Where appropriate, other guidelines and ethical codes should be taken into
consideration in the design and operation of robots (e.g. medical or legal codes relevant in
specific contexts). The standard also makes the case that military application of robots does
not remove the responsibility and accountability of humans.

The IEEE Standards Association has also launched a standard via its global initiative on the
Ethics of Autonomous and Intelligent Systems. Positioning 'human well-being' as a central
precept, the IEEE initiative explicitly seeks to reposition robotics and AI as technologies for
improving the human condition rather than simply vehicles for economic growth (Winfield,
2019a). Its aim is to educate, train and empower AI/robot stakeholders to 'prioritise ethical
considerations so that these technologies are advanced for the benefit of humanity.'


There are currently 14 IEEE standards working groups drafting so-called 'human' standards that have implications for artificial intelligence (Table 2).

Table 2: IEEE 'human standards' with implications for AI


MODEL PROCESS FOR ADDRESSING ETHICAL CONCERNS DURING SYSTEM DESIGN

A set of processes by which organizations can include consideration of ethical values throughout the stages of concept exploration and development is established by this standard.

The Institute of Electrical and Electronics Engineers (IEEE) and the IEEE Standards Association have launched a new standard to address ethical concerns during the design of artificial intelligence (AI) and other technical systems. IEEE 7000™-2021 – IEEE Standard Model Process for Addressing Ethical Concerns During System Design provides a methodology to analyse human and social values relevant to an ethical system engineering effort.

The standard provides:

(a) a system engineering standard approach integrating human and social values into
traditional systems engineering and design;

(b) processes for engineers to translate values and ethical considerations into system
requirements and design practices;

(c) a systematic, transparent, and traceable approach to address ethically-oriented regulatory obligations in the design of autonomous intelligent systems.
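As an illustration of how such a process can be made traceable in practice, the sketch below records how one ethical value is translated into a system requirement and linked to design decisions. It is a minimal teaching sketch in Python; the field names and the example are assumptions, not terminology mandated by IEEE 7000.

```python
# Illustrative sketch only: a simple traceability record linking an ethical
# value to a derived requirement and design decisions, in the spirit of
# IEEE 7000. Field names are assumptions, not the standard's own terms.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ValueRequirement:
    value: str                  # e.g. "privacy", "transparency"
    concern: str                # the ethical concern raised by stakeholders
    requirement: str            # system requirement derived from the value
    design_decisions: List[str] = field(default_factory=list)

# Example: tracing the value "privacy" through to a concrete requirement.
trace = ValueRequirement(
    value="privacy",
    concern="Voice data from the home may reveal sensitive information",
    requirement="Speech audio shall be processed locally and never stored",
    design_decisions=["on-device speech-to-text", "no raw-audio telemetry"],
)
print(f"{trace.value} -> {trace.requirement}")
```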

As IEEE explains, the standards could help organisations that design, develop, or operate AI
or other technical systems to directly address ethical concerns upfront, leading to more trust
among the end-users and increased market acceptance of their products, services, or systems.

The target groups are policy makers, regulators, technical standards developers, pertinent
international and inter-governmental organizations and technology developers and users
around the globe.

The engagements would be aimed at:

 Addressing the fundamental ethical and societal issues and implications of A/IS in regulations, policies, standards and treaties. This would contribute to IEEE taking a leadership role in addressing ethically aligned design in A/IS technology development, standardization and use, with the aim of building confidence and trust in A/IS. By taking into account human rights, human and social well-being, accountability and transparency in A/IS technology development, standardization and use, the societal and economic benefits of A/IS would be realized more readily.

 Establishing and implementing practices and instruments that will allow innovative and
impactful A/IS, while ensuring that these technologies are developed and used responsibly
and with accountability.

 Being proactive in proposing adequate policies for the successful and safe development
and implementation of A/IS.

 Identifying opportunities to bring the IEEE Global Initiative's body of work -- Ethically Aligned Design: A Vision for Prioritizing Human Well-Being with Autonomous and Intelligent Systems and the associated IEEE 7000™ series of standards -- into practice.

 Collaborating with global partners to set precedents through standards and best practices.

IEEE Ethical Standards for System Design

IEEE P7000™ - Model Process for Addressing Ethical Concerns During System Design outlines an approach for identifying and analyzing potential ethical issues in a system or software program from the onset of the effort. The values-based system design method addresses ethical considerations at each stage of development to help avoid negative unintended consequences while increasing innovation.

IEEE P7001™ - Transparency of Autonomous Systems provides a standard for developing autonomous technologies that can assess their own actions and help users understand why a technology makes certain decisions in different situations. The project also offers ways to provide transparency and accountability for a system to help guide and improve it, such as incorporating an event data recorder in a self-driving car or accessing data from a device's sensors.
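The event data recorder idea mentioned here can be illustrated with a minimal logging sketch: the system writes each decision, the sensor snapshot behind it, and a human-readable reason to an append-only log that an investigator can replay later. This is an illustrative Python sketch, not the mechanism specified by IEEE P7001; the file name and fields are assumptions.

```python
# A minimal sketch (not the P7001 specification) of an event data recorder
# that logs an autonomous system's sensor inputs and decisions so that an
# investigator can later reconstruct why the system acted as it did.
import json
import time

class EventDataRecorder:
    def __init__(self, path="edr_log.jsonl"):
        self.path = path

    def record(self, sensors: dict, decision: str, reason: str) -> None:
        entry = {
            "timestamp": time.time(),   # when the decision was taken
            "sensors": sensors,         # snapshot of relevant sensor data
            "decision": decision,       # action the system selected
            "reason": reason,           # human-readable rationale
        }
        with open(self.path, "a") as f:
            f.write(json.dumps(entry) + "\n")

# Usage: a self-driving car logging an emergency braking decision.
edr = EventDataRecorder()
edr.record({"lidar_range_m": 4.2, "speed_kmh": 38},
           decision="emergency_brake",
           reason="obstacle detected within stopping distance")
```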

IEEE P7002™ - Data Privacy Process specifies how to manage privacy issues for systems or software that collect personal data. It will do so by defining requirements that cover corporate data collection policies and quality assurance. It also includes a use case and data model for organizations developing applications involving personal information. The standard will help designers by providing ways to identify and measure privacy controls in their systems utilizing privacy impact assessments.
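A privacy impact assessment of the kind referred to above can be approximated, for teaching purposes, by listing each item of personal data together with its purpose, lawful basis, retention period, and mitigations, then reviewing each entry against policy. The Python sketch below is a simplified assumption of such a record, not the P7002 data model.

```python
# Illustrative sketch of a very small privacy impact assessment (PIA) record;
# the fields and the review rule are assumptions, not P7002 requirements.
from dataclasses import dataclass

@dataclass
class PrivacyControl:
    data_item: str        # personal data being collected
    purpose: str          # why it is collected
    lawful_basis: str     # e.g. consent, contract
    retention_days: int   # how long it is kept
    mitigations: str      # controls applied (encryption, minimisation, ...)

controls = [
    PrivacyControl("email address", "account login", "contract", 365,
                   "hashed at rest, never shared with third parties"),
    PrivacyControl("location (zip code)", "delivery estimate", "consent", 30,
                   "stored coarsely, deleted after fulfilment"),
]

# A PIA report is then a review of each control against a retention policy.
for c in controls:
    flag = "REVIEW" if c.retention_days > 180 else "ok"
    print(f"{c.data_item}: retention {c.retention_days}d [{flag}]")
```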

IEEE P7003™ - Algorithmic Bias Considerations provides developers of algorithms for autonomous or intelligent systems with protocols to avoid negative bias in their code. Bias could include the use of subjective or incorrect interpretations of data, such as mistaking correlation for causation. The project offers specific steps to take for eliminating issues of negative bias in the creation of algorithms. The standard will also include benchmarking procedures and criteria for selecting validation data sets, establishing and communicating the application boundaries for which the algorithm has been designed, and guarding against unintended consequences.
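One concrete bias check of the kind such benchmarking procedures encourage is to compare selection rates across affected groups in a validation data set. The Python sketch below applies the common "four-fifths" heuristic; the threshold and the toy data are illustrative assumptions, not requirements of IEEE P7003.

```python
# A minimal sketch of one bias check a developer might run: compare selection
# rates across groups in validation data. The 0.8 threshold ("four-fifths
# rule") is one common heuristic, not a requirement of the standard.
from collections import defaultdict

def selection_rates(records):
    """records: iterable of (group, selected) pairs -> selection rate per group."""
    totals, hits = defaultdict(int), defaultdict(int)
    for group, selected in records:
        totals[group] += 1
        hits[group] += int(selected)
    return {g: hits[g] / totals[g] for g in totals}

validation = [("A", True), ("A", True), ("A", False), ("A", True),
              ("B", True), ("B", False), ("B", False), ("B", False)]

rates = selection_rates(validation)
worst, best = min(rates.values()), max(rates.values())
print(rates, "disparate impact ratio:", round(worst / best, 2))
if worst / best < 0.8:
    print("Potential negative bias: investigate the data and the model.")
```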

IEEE P7004™ - Standard on Child and Student Data Governance provides processes and certifications for transparency and accountability for educational institutions that handle data meant to ensure the safety of students. The standard defines how to access, collect, share and remove data related to children and students in any educational or institutional setting where their information will be accessed, stored or shared.

IEEE P7005™ - Standard on Employer Data Governance provides guidelines and certifications on storing, protecting and using employee data in an ethical and transparent way. The project recommends tools and services that help employees make informed decisions with their personal information. The standard will help provide clarity and recommendations both for how employees can share their information in a safe and trusted environment and for how employers can align with employees in this process while still utilizing information needed for regular work flows.

Examples

• Image processing systems and their ability to accurately recognise facial characteristics,
including facial recognition and tracking, and applications that, for example, detect theft or
suspicious behaviour, or flag to law enforcement that an individual should be detained.

• Marketing automation applications that calibrate offers, prices, or content to an individual's preferences and behaviour. Examples have occurred where well-remunerated executive job adverts are shown to men, but not to women with similar experience.

• Online purchasing systems can use location data (e.g. zip code) to make different products
and services available at different prices, to different groups of customers, who may be
predominantly from a specific ethnic group, age group or socio-economic class.

TRANSPARENCY OF AUTONOMOUS SYSTEMS

The purpose of this standard is to set out measurable, testable levels of transparency for autonomous systems. The general principle behind this standard is that it should always be possible to understand why and how the system behaved the way it did.

IEEE 7001™-2021 - Standard for Transparency of Autonomous Systems. This standard describes measurable, testable levels of transparency, so that autonomous systems can be objectively assessed and levels of compliance determined.

Transparency is one of the eight General Principles set out in IEEE Ethically Aligned Design [B21], stated as "The basis of a particular autonomous and intelligent system decision should always be discoverable." A working group tasked with drafting this standard was set up in direct response to a recommendation in the general principles section of IEEE Ethically Aligned Design.

Defining Transparency in P7001


The UK's Engineering and Physical Science Research Council (EPSRC) Principles of Robotics, the first national-level policy on AI, states, as principle four: "Robots are manufactured artifacts."

The EPSRC definition of transparency emphasises, through contrast, that transparency in robotics means that the end user is well aware of the manufactured and thus artificial nature of the robot.

Transparency Is Not the Same for Everyone


Transparency is not a singular property of systems that would meet the needs of all stakeholders. In this regard, transparency is like any other ethical or socio-legal value.

P7001 defines five distinct groups of stakeholders, and AIS must be transparent to each
group, in different ways and for different reasons. These stakeholders split into two groups:
non-expert end users of autonomous systems (and wider society), and experts including
safety certification engineers or agencies, accident investigators, and lawyers or expert
witnesses.

Transparency for End Users

For users, transparency (or explainability as defined in P7001) is important because it both
builds and calibrates confidence in the system, by providing a simple way for the user to
understand what the system is doing and why.

Taking a care robot as an example, transparency means the user can begin to predict what the
robot might do in different circumstances. A vulnerable person might feel very unsure about
robots, so it is important that the robot is helpful, predictable—never does anything that
frightens them—and above all safe. It should be easy to learn what the robot does and why, in different circumstances.

Transparency for the Wider Public and Bystanders

Robots and AIs are disruptive technologies likely to have significant societal impact. It is
very important therefore that the whole of society has a basic level of understanding of how
these systems work, so we can confidently share work or public spaces with them.

This kind of transparency needs public engagement, for example through panel debates and science cafés, supported by high quality documentaries targeted at distribution by mass media (e.g., YouTube and TV), which present emerging robotics and AI technologies and how they work in an interesting and understandable way.


Transparency for Safety Certifiers

For safety certification of an AIS, transparency is important because it exposes the system's decision-making processes for assurance and independent certification.

Example:

The type and level of evidence required to satisfy a certification agency or regulator that a
system is safe and fit for purpose depends on how critical the system is. An autonomous
vehicle autopilot requires a much higher standard of safety certification than, say, a music
recommendation AI, since a fault in the latter is unlikely to endanger life. Safe and correct
behaviour can be tested by verification, and fitness for purpose tested by validation.

Transparency for Incident/Accident Investigators

Robots and other AI systems can and do act in unexpected or undesired ways. When they do
it is important that we can find out why. Autonomous vehicles provide us with a topical
example of why transparency for accident investigation is so important.

One example of best practice is the aircraft Flight Data Recorder, or "black box"; a functionality we consider essential in autonomous systems.

Transparency for Lawyers and Expert Witnesses

Following an accident, lawyers or other expert witnesses who are obliged to give evidence in an inquiry or court case, or to determine insurance settlements, require transparency to inform their evidence. Both need to draw upon information available to the other stakeholder groups: safety certification agencies, accident investigators and users.

In addition, lawyers and expert witnesses may well draw upon additional information relating
to the general quality management processes of the company that designed and/or
manufactured the robot or AI system.

Transparency Levels for End Users


Transparency Levels for Accident Investigators

Application
System Transparency Assessment for a Robot Toy
In Winfield and Winkle (2020), an ethical risk assessment was presented for a fictional intelligent robot teddy bear called RoboTED. Let us now assess the transparency of the same robot.

RoboTED is an Internet (WiFi) connected device with cloud-based speech recognition and conversational AI (chatbot) with local speech synthesis; RoboTED's eyes are functional cameras allowing the robot to recognise faces; RoboTED has touch sensors, and motorised arms and legs to provide it with limited baby-like movement and locomotion—not walking but shuffling and crawling.

Our ethical risk assessment (ERA) exposed two physical (safety) hazards including tripping over the robot and batteries overheating. Psychological hazards include addiction to the robot by the child, deception (the child coming to believe the robot cares for them), over-trusting of the robot by the child, and over-trusting of the robot by the child's parents.


DATA PRIVACY PROCESS

Data privacy is one of the primary concerns of the information age. Data is no longer managed merely by humans; AI has an ever-increasing role in data processing.

Automated decision-making brings many opportunities for increased organizational efficiency, but limited human involvement can lead to misuse or mishandling of personal data.

Data privacy (or information privacy or data protection) is about the access, use and collection of data, and the data subject's legal right to the data. This refers to:

 Freedom from unauthorized access to private data
 Protection against inappropriate use of data
 Accuracy and completeness when collecting data about a person or persons (corporations included) by technology
 Availability of data content, and the data subject's legal right to access; ownership
 The rights to inspect, update or correct these data

Data privacy is also concerned with the costs if data privacy is breached, and such costs
include the so-called hard costs (e.g., financial penalties imposed by regulators, compensation
payments in lawsuits such as noncompliance with contractual principles) and the soft costs
(e.g., reputational damage, loss of client trust).

Under the US National Artificial Intelligence Initiative Act of 2020 (NAIIA), AI data systems use data inputs to (1) perceive real and virtual environments, (2) abstract those perceptions into models through automated analysis, and (3) use inference from those models to develop options for action. For example, predictive algorithms on social media observe users' activity, then offer them content (and advertising) that is relevant to their interests.
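The perceive-abstract-infer loop described above can be illustrated with a toy recommender: observe the topics a user engages with, abstract them into a frequency model, and infer which catalogue items to offer. This Python sketch is purely illustrative and is not any real platform's algorithm.

```python
# A toy sketch of the perceive -> model -> infer loop for a content
# recommender; every name and data structure here is an illustrative assumption.
from collections import Counter

def perceive(activity_log):
    """Observe user activity (topics of items the user engaged with)."""
    return [event["topic"] for event in activity_log]

def abstract_to_model(topics):
    """Abstract the observations into a simple interest model."""
    return Counter(topics)

def infer_recommendations(model, catalogue, k=2):
    """Use the model to rank candidate content by inferred interest."""
    return sorted(catalogue, key=lambda item: model[item["topic"]], reverse=True)[:k]

log = [{"topic": "cycling"}, {"topic": "cycling"}, {"topic": "cooking"}]
catalogue = [{"id": 1, "topic": "cycling"}, {"id": 2, "topic": "finance"},
             {"id": 3, "topic": "cooking"}]
print(infer_recommendations(abstract_to_model(perceive(log)), catalogue))
```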

AI data processing is a revolutionary technique in fields that must analyse large volumes of data, such as finance, healthcare, education, and marketing. Without it, such data would have to be processed with human labor, taking a significant amount of time and money. However, the lack of human involvement is also a major point of concern. The process of how an AI system comes to a decision is often unclear to ordinary users. Users will likely not understand how the process works, how it affects them, or even that they are being analyzed by AI at all.

AI can produce biased or incorrect conclusions. Biases of the AI's creators or the
data used to train the AI can be reflected in its output. For example, many facial
recognition algorithms have higher error rates when used to identify women and
racial minorities, likely due to being trained mostly on images of white men. The AI
can also draw mistaken conclusions--finding patterns and drawing conclusions from
correlations that are only coincidences without deeper meaning.

Bias can have serious consequences. AI decision-making is used in a number of serious processes, such as resume analysis, insurance approval, and predictions of future criminal activity. AI systems trained on biased input data create biased outcomes, preserving the human biases that objective computer analysis is meant to eliminate.

Ethical Codes and Restrictions on AI

Many governments and organizations are starting to understand the limitations and complications of AI data processing, in addition to its benefits. As a result, agencies and organizations are developing AI ethics codes to correct for these flaws and increase transparency and oversight of AI systems.

For example, in 2020 the US Department of Defense adopted its own set of AI ethics
principles. The DoD's AI principles are a good example of the concept, having been
developed with input from national AI experts in government, academia, and the
private sector. In the DoD's ethical code, use of AI must be:

1. Responsible: DoD personnel will exercise judgment and care over the Department's
use and development of AI.
2. Equitable: The Department will actively attempt to minimize bias in AI capabilities.
3. Traceable: AI will be developed in such a way that relevant personnel understand the
technology, design procedure, and operations of the Department's AI.
4. Reliable: AI will have specific, well-defined uses, and will be tested for safety, security, and effectiveness in those uses throughout its life-cycle.
5. Governable: AI will be designed and made to fulfill their intended functions, while
being able to detect and avoid unintended consequences and to deactivate systems
that have unintended behavior.

AI use in data management can also benefit from this ethical code. Since AI used in
data management is often handling personal data and can make very impactful
decisions (like whether or not to approve someone for insurance), it is important to
ensure that the AI's conclusions are unbiased and that the people overseeing its
processes understand how it works. Knowing how an AI's decision-making works (as
opposed to the "black box" of unknown processes) makes it easier to repair errors,
correct for bias, or determine how much weight to give the AI's recommendations.
While these five principles are an illustrative model of AI ethics, the DoD has
struggled to develop internal rules to actually implement them. Without clear rules or
detailed internal guidance to clarify implementation, having a set of ethical principles
is of limited effectiveness.

Singapore's Model AI Governance Framework

As an example of a complete ethical AI framework, Singapore's Model AI Governance Framework has both a set of broad AI ethical principles as well as detailed implementation guidance. The two primary principles for responsible AI are that (1) decisions made by AI should be explainable, transparent, and fair; and (2) AI solutions should be human-centric, protecting people's safety and interests. The simplified set of general ethics is augmented by guidance from Singapore's Personal Data Protection Commission (PDPC). The PDPC Framework sets out four areas that organizations must consider in their AI implementation:

1. Internal Governance: Organizations using AI need to be structured in a way that allows more effective oversight over their AI operations. For example, personnel responsible for overseeing AI processes should have clear guidance on their roles and duties, and should implement AI-specific risk-management controls.
2. Level of Human Involvement: Determine the appropriate level of human involvement based on the severity and probability of harm (a small decision sketch follows this list). If the severity and probability are low
(like content recommendation), AI can act without human involvement. If there is a
moderate risk (like GPS navigation), the user should have a supervisory role to take
over when the AI encounters problems. If there is a high risk and/or severe harm (like
medical diagnosis), AI should only provide recommendations and all decision-making
authority should rest with the user.
3. Operations Management: Implementation of the AI process must be reasonably
controlled. Organizations must take measures to mitigate bias in both the data and
the AI model. The AI's decision-making processes must be explainable, traceable,
reliable, and easily able to be audited and assessed for errors.
4. Stakeholder Interaction and Communication: An organization's AI policies must be
available and known to users, in a clear and easy-to-understand way. Users should
be allowed to provide feedback on the effectiveness of AI data management, if
possible.
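The human-involvement decision in point 2 above can be expressed as a simple lookup from the assessed severity and probability of harm to an oversight level. The Python sketch below is an illustrative reading of the PDPC guidance; the thresholds and labels are assumptions.

```python
# A sketch of the PDPC-style "level of human involvement" decision described
# in point 2 above; thresholds and labels are illustrative assumptions.
def human_involvement(severity: str, probability: str) -> str:
    """severity and probability take values in {"low", "moderate", "high"}."""
    if severity == "high":
        return "human-in-the-loop: AI only recommends, a human decides"
    if severity == "moderate" or probability == "high":
        return "human-over-the-loop: a human supervises and can take over"
    return "human-out-of-the-loop: AI may act autonomously"

print(human_involvement("low", "low"))        # e.g. content recommendation
print(human_involvement("moderate", "low"))   # e.g. GPS navigation
print(human_involvement("high", "moderate"))  # e.g. medical diagnosis
```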

Singapore's AI governance framework is accompanied by two volumes of use cases from government, financial, health, and tech organizations. The real-world examples of effectively implementing and benefiting from accountable AI use, as well as the detailed official guidance documents, make Singapore's Model AI Governance Framework an international model for effective AI ethical policy.

The EU AI Act

The EU, home of the flagship data privacy law GDPR, is preparing its own AI
framework. Unlike others, however, the EU AI Act will be a fully-enforceable law and
not an internal ethical code or guidance from an agency. First unveiled in 2021, the
AI Act could become the new standard for AI regulation.

The AI Act is based on the EU's ethics guidelines for trustworthy AI, written by the High-Level Expert Group on AI and the European AI Alliance. The bill draft covers a wide array of software and automated tools, and imposes four levels of regulation based on the risk they present to the public. These range from total bans to no regulation at all.

The Commission deems certain AI systems to be an unacceptable risk to the public by their very nature, and would ban these systems outright. There are four banned
technologies: social scoring, dark-pattern AI, manipulation, and real-time biometric
ID systems. The ban on social scoring means that public authorities cannot use AI to
calculate people's trustworthiness based on their behavior. The ban on dark patterns
prohibits subliminal techniques to manipulate people's behavior; for example, the Act would forbid using sounds at inaudible frequencies to force truck drivers to stay awake longer. Manipulation in this context means that AI systems may not be used to exploit people's age or disabilities to alter their behavior. Real-time biometric ID--i.e., facial recognition--is restricted for law enforcement. It may only be used with judicial approval
recognition--is limited for law enforcement. It may only be used with judicial approval
(1) as part of a specific investigation, (2) to prevent a specific, substantial, and
imminent threat to life, or (3) to identify and locate a suspect of a serious crime like
terrorism or human trafficking. Private entities may use real-time facial recognition,
but will be subject to the restrictions on high-risk AI.


High-risk AI includes AI that could pose a risk to human well-being if misused or poorly implemented. This includes AI used in educational systems (like exam scoring), employment (like automated resume sorting), biometric ID, critical infrastructure like water and power, emergency services, or immigration and law enforcement. AI in one of these categories must comply with five requirements before it can be implemented (a checklist sketch follows the list below):

1. Data Governance: The data used to train and test the AI must be relevant to its
purpose, representative of reality, error-free, and complete. Bias and data
shortcomings must be accounted for.
2. Transparency: Developers must disclose certain information about the system--the
AI's capabilities, limitations, intended use, and information necessary for
maintenance.
3. Human Oversight: Humans must be a part of the implementation--checking for bias
or dysfunction, and shutting the system down if it poses a risk to human safety or
rights.
4. Accuracy, Robustness, and Cybersecurity: High-risk AI systems must be accurate,
robust, and secure in proportion to their purpose. AI developers will have to provide
accuracy metrics to users and develop plans to ensure system robustness and
security.
5. Traceability and Auditability: AI developers must provide documentation on a very
long list of criteria to prove they are in compliance with the above requirements.
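A compliance review against these five requirements can be organised as a simple checklist that flags which obligations still lack supporting evidence, as sketched below in Python. The structure and wording are teaching assumptions, not a legal template from the AI Act itself.

```python
# Illustrative checklist sketch for the five high-risk requirements summarised
# above; an assumption for teaching, not legal or official guidance.
HIGH_RISK_REQUIREMENTS = {
    "data_governance":   "training/test data relevant, representative, error-free",
    "transparency":      "capabilities, limitations and intended use disclosed",
    "human_oversight":   "humans can monitor, intervene and shut the system down",
    "accuracy_security": "accuracy metrics provided; robustness and security plans",
    "traceability":      "technical documentation sufficient for audit",
}

def compliance_gaps(evidence: dict) -> list:
    """Return the requirements for which no evidence has been supplied."""
    return [req for req in HIGH_RISK_REQUIREMENTS if not evidence.get(req)]

evidence = {"data_governance": "dataset datasheet v1.2", "transparency": "user manual"}
print("Missing evidence for:", compliance_gaps(evidence))
```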

Limited-risk AI under the EU AI Act are those that do not pose a major risk to human
safety, but can still be abused to affect humans. This category includes deepfakes,
AIs designed to converse with humans, and AI-powered emotion-recognition
systems. The main issue with these is transparency--how does a user know if they
are interacting with an AI designed to emulate a human? The Act would grant EU
residents a right to know if they are looking at/hearing a deepfake, talking to a
chatbot, or being subject to AI emotion recognition.

The vast majority of AI pose a minimal risk to human well-being, and any type of AI
system not specifically mentioned in the Act falls into this category. The AI Act will
not regulate these AIs, but it does strongly encourage developing codes of conduct
to voluntarily apply good AI ethics to these systems.

ALGORITHMIC BIAS CONSIDERATIONS

The IEEE P7003 Standard for Algorithmic Bias Considerations is one of eleven IEEE ethics-related standards currently under development as part of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems.

The purpose of the IEEE P7003 standard is to provide individuals or organizations creating algorithmic systems with a development framework to avoid unintended, unjustified and inappropriately differential outcomes for users.

In recognition of the increasingly pervasive role of algorithmic decision-making systems in corporate and government service, and growing public concerns regarding the 'black box' nature of many of these systems, the IEEE Standards Association (IEEE-SA) launched the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems.


The 'Global Initiative' aims to provide "an incubation space for new standards and solutions, certifications and codes of conduct, and consensus building for ethical implementation of intelligent technologies". As of early 2018, the main pillars of the Global Initiative are:

 a public discussion document, "Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems", on establishing ethical and social implementations for intelligent and autonomous systems and technology aligned with values and ethical principles that prioritize human well-being in a given cultural context;
 a set of eleven working groups to create the IEEE P70xx series ethics standards, and
associated certification programs, for Intelligent and Autonomous systems.

Example
 Security camera applications that detect theft or suspicious behaviour.
 Marketing automation applications that calibrate offers, prices, or content to an individual's preferences and behaviour.
Since the standard aims to allow for the legitimate ends of different users, such as
businesses, it should assist them in assuring citizens that steps have been taken to
ensure fairness, as appropriate to the stated aims and practices of the sector where the
algorithmic system is applied. For example, it may help customers of insurance
companies to feel more assured that they are not getting a worse deal because of the
hidden operation of an algorithm.

The standard will describe specific methodologies that allow users of the standard to
assert how they worked to address and eliminate issues of unintended, unjustified and
inappropriate bias in the creation of their algorithmic system. This will help to design
systems that are more easily auditable by external parties (such as regulatory bodies).
 Elements include: a set of guidelines for what to do when designing or using such
algorithmic systems following a principled methodology (process), engaging with
stakeholders (people), determining and justifying the objectives of using the algorithm
(purpose), and validating the principles that are actually embedded in the algorithmic
system (product);
 a practical guideline for developers to identify when they should step back to evaluate
possible bias issues in their systems, and pointing to methods they can use to do this;
 benchmarking procedures and criteria for the selection of validation data sets for bias
quality control;
 methods for establishing and communicating the application boundaries for which the
system has been designed and validated, to guard against unintended consequences
arising from out-of-bound application of algorithms;
 methods for user expectation management to mitigate bias due to incorrect
interpretation of systems outputs by users (e.g. correlation vs. causation), such as
specific action points/guidelines on what to do if in doubt about how to interpret the
algorithm outputs;


 The 'algorithmic system design and implementation' orientated sections are currently envisaged to include sections on 'Algorithmic system design stages', 'Person categorizations and identifying of affected groups', 'Representativeness and balance of testing/training/validation data', 'System outcomes evaluation', 'Evaluation of algorithmic processing', 'Assessment of resilience against external biasing manipulation', 'Assessment of scope limits for safe system usage' and 'Transparent documentation', though it is anticipated that further sections will be added as work progresses.

ONTOLOGICAL STANDARD FOR ETHICALLY DRIVEN ROBOTICS AND AUTOMATION SYSTEMS

 Standards represent a consensual view of a particular subject, associated with technology solutions, human or environmental safety, good practices, etc.
 For IEEE, standards are essential to advance global prosperity through the promotion of technological innovation.
 Developing standards that define how robots can interact properly with humans gives end users some guarantee that the robot can interact safely and ethically, e.g. avoiding an elderly person's feeling of being treated like an object rather than a human.

Ontological standard development is important because ontologies allow us to capture and represent consensual knowledge in an explicit and formal way, independently of a particular programming language.

The IEEE 1872-2015 Standard Ontologies for Robotics and Automation was developed using the METHONTOLOGY approach; however, only concepts have been developed in the IEEE 1872-2015 standard.

The IEEE 1872-2015 standard establishes a series of ontologies about the Robotics and Automation (R&A) domain, e.g. the Core Ontology for Robotics and Automation (CORA).

CORA (Core Ontology for Robotics and Automation)

A core ontology specifies concepts that are general across a whole domain such as Robotics. In the case of CORA, it defines concepts such as Robot, Robot Group, and Robotic System. Its role is to serve as a basis for other, more specialized ontologies in R&A. The IEEE P1872.2 standard, being developed by the Autonomous Robotics (AuR) Ontology Working Group, aims to define standard ontologies for Autonomous Robotics systems.
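Ontology content of this kind is often easiest to grasp as subject-predicate-object triples. The Python sketch below expresses a few CORA-style concepts that way; the specific relation names are illustrative assumptions rather than the normative axioms of IEEE 1872-2015.

```python
# A tiny sketch of CORA-style core concepts expressed as subject-predicate-object
# triples; relation names are illustrative, not the standard's formal axioms.
triples = [
    ("Robot",         "is_a",        "Agent"),
    ("RobotGroup",    "composed_of", "Robot"),
    ("RoboticSystem", "includes",    "Robot"),
    ("RoboticSystem", "includes",    "RoboticEnvironment"),
]

def related(concept, triples):
    """List everything a concept is directly related to."""
    return [(p, o) for s, p, o in triples if s == concept]

print(related("RoboticSystem", triples))
```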


CORA was developed following the METHONTOLOGY approach, since it is a systematization of what has been done before it. The methodology involves five sets of activities, namely pre-development, development, post-development, management, and support.

The following attributes were considered, based on the Ontology Design Patterns:
 the ontology must be well designed for its purpose;
 it shall explicitly include stated requirements;
 it must meet all and, for the most part, only the intended requirements;
 it should not make unnecessary commitments or assumptions;
 it should be easy to extend to meet additional requirements;
 it reuses prior knowledge bases as much as possible;


 there is a core set of primitives that are used to build up more complex parts;
 it should be easy to understand and maintain;
 it must be well documented.

The four main phases of the ontology development are the specification, the conceptualization, the formalization, and the implementation.

 As pre-development activities, the methodology specifies the environment study and the feasibility study.
 The post-development activities are performed after the development of a
version of the ontology, and include the maintenance and the ontology reuse,
while the management activities are performed during the whole process of
ontology development and include scheduling, control, and quality assurance.
 The support activities can be performed during the development activities as
well, and usually include the knowledge acquisition, the evaluation, the
integration, the documentation, and the configuration management

Proposed Ontological Standard Development Life Cycle (RoSaDev)


Demerits
 Methodologies such as the one used for CORA have long-duration cycles and no longer address the needs of quickly expanding technological fields such as robotics.

Hence, the proposed life cycle (RoSaDev) to develop a robotic ontological standard is an Agile-inspired, iterative method which involves four steps.

Proposed Ontological Standard Development Life Cycle (RoSaDev).


1. Collaborative Approach
 The first step consists of identifying the ontological concepts for the standard, and is followed by their development and formalization.
 These steps are carried out in a collaborative way, through brainstorming and discussions, reaching consensus between the multiple stakeholders such as experts from Public Bodies, Academia, and Industry.
2. Middle-out Approach
Concept development follows a middle-out approach, driven by the potential use cases that are developed.

Use Case Template

• Name: The use case name, which ideally should implicitly express aspects of the use case purpose.
• Identifier (optional): A unique identifier that can be used by other project artifacts to reference the use case.
• Author(s): Name of person or persons composing the use case.
• References: References in the literature relevant to the use case.
• Context Description: A descriptive summary of the use case actors, its goals and purposes, when it applies, and relevant associated presuppositions and environmental context.
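For working purposes, the template above can be captured as a small data structure so that use cases are collected and reviewed in a consistent form. The Python sketch below mirrors the template fields; the example values and identifier format are assumed for illustration.

```python
# A sketch of the use-case template captured as a simple data structure;
# field names mirror the template, example values are illustrative only.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class UseCase:
    name: str
    authors: List[str]
    context: str
    references: List[str]
    identifier: Optional[str] = None

uc = UseCase(
    name="Robot Companion to Recognize Elderly's Behaviour and Suggest Actions",
    authors=["Working group member"],
    context="Care-home robot observes an elderly person and suggests activities",
    references=["IEEE P7007 working documents"],
    identifier="P7007-UC-01",
)
print(uc.name)
```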
3. Incremental Approach:
 For the identification of the necessary concepts and relations to be formalized in the standard, each use case constitutes the basis for a validation step, leading to an incremental integration of validated concepts into the standard being developed.
4. Iterative Approach
These ontologies aim to define a set of concepts and their relationships that will enable the development of Robotics and Automation Systems in accordance with worldwide Ethics and Moral theories, with a particular emphasis on aligning the ethics and engineering communities to understand how to pragmatically design and implement these systems. The resulting ontologies can serve as:
(i) a guide for teaching ethical design;
(ii) a reference for policy makers and governments to draft AI-related policies;
(iii) a common vocabulary to enable communication among government agencies and other professional bodies around the world;
(iv) a framework to create systems that can act ethically; and
(v) a foundation for the elaboration of other ethical compliance standards.
Example:

 if the elderly person is bored, the robot may suggest playing a board game;


 if the elderly person needs to talk to his family, the robot may suggest a Skype call;
 if the elderly person has fallen, the robot may call his/her caregiver for help.

Preconditions: The elderly person is at the care home. The robot is at the care home. The robot can move in the care home. The robot is always looking after the elderly persons.
Name: Robot Companion to Recognize Elderly's Behaviour and to Suggest Actions
• Identifier (optional): P7007 Use case
Intent/Purpose: The use case describes how the robot can analyse the elderly persons' behaviours and take the action to suggest activities.
Alternate Related Scenario (optional): The use case can also be applied or extended for care robots deployed at elders' homes.
Relevant Knowledge: {capability, behaviour, services, actions, recognition, Skype call, call for help, user's will, safety, ignore, interaction, pose recognition, voice recognition, play board game, emotion recognition, activate, deactivate, knowledge base, task}
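The behaviour-to-suggestion mapping in this use case can be mocked up as a tiny rule table, as sketched below in Python. It is a toy illustration only; a real care robot would need far richer perception, the knowledge base listed above, and safety checks before acting.

```python
# A toy rule-based sketch of the behaviours in this use case (bored -> board
# game, misses family -> video call, fallen -> call caregiver). Illustrative only.
RULES = {
    "bored":         "suggest playing a board game",
    "misses_family": "suggest a video (Skype) call",
    "has_fallen":    "call the caregiver for help",
}

def suggest_action(recognised_state: str) -> str:
    """Map a recognised behaviour to a suggested action, or keep observing."""
    return RULES.get(recognised_state, "continue observing; no action needed")

for state in ["bored", "has_fallen", "sleeping"]:
    print(state, "->", suggest_action(state))
```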
