Unit 3
SYLLABUS
Model Process for Addressing Ethical Concerns During System Design - Transparency of Autonomous Systems - Data Privacy Process - Algorithmic Bias Considerations - Ontological Standard for Ethically Driven Robotics and Automation Systems
Introduction
British Standard BS 8611 assumes that physical hazards imply ethical hazards, and defines ethical harm as affecting 'psychological and/or societal and environmental well-being.' It also recognises that physical and emotional hazards need to be balanced against the expected benefits to the user.
The standard highlights the need to involve the public and stakeholders in the development of robots, and provides a list of key design considerations.
Particular guidelines are provided for roboticists, especially those conducting research. These include the need to engage the public, consider public concerns, work with experts from other disciplines, correct misinformation and provide clear instructions. Specific methods to ensure the ethical use of robots include: user validation (to ensure the robot can be, and is, operated as expected), software verification (to ensure the software works as anticipated), involvement of other experts in ethical assessment, economic and social assessment of anticipated outcomes, assessment of any legal implications, and compliance testing against relevant standards. Where appropriate, other guidelines and ethical codes should be taken into consideration in the design and operation of robots (e.g. medical or legal codes relevant in specific contexts). The standard also makes the case that military application of robots does not remove the responsibility and accountability of humans.
The IEEE Standards Association has also launched a standard via its global initiative on the Ethics of Autonomous and Intelligent Systems. Positioning 'human well-being' as a central precept, the IEEE initiative explicitly seeks to reposition robotics and AI as technologies for improving the human condition rather than simply vehicles for economic growth (Winfield, 2019a). Its aim is to educate, train and empower AI/robot stakeholders to 'prioritise ethical considerations so that these technologies are advanced for the benefit of humanity.'
There are currently 14 IEEE standards working groups drafting so-called 'human standards', including the IEEE 7000 series described below.
The Institute of Electrical and Electronics Engineers (IEEE) and the IEEE Standards
Association have launched a new standard to address ethical concerns during the design of
artificial intelligence (AI) and other technical systems. The IEEE 7000™-2021 – IEEE
Standard Model Process for Addressing Ethical Concerns During System Design provides a
methodology to analyse human and social values relevant for an ethical system engineering
effort. The standard provides:
(a) a system engineering standard approach integrating human and social values into traditional systems engineering and design; and
(b) processes for engineers to translate values and ethical considerations into system requirements and design practices.
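To make this concrete, the sketch below shows how a stakeholder value such as privacy might be traced into a testable system requirement. It is a minimal illustration in Python; the class and field names are our own invention, since IEEE 7000 prescribes a process, not code.

    from dataclasses import dataclass, field

    @dataclass
    class EthicalValueRequirement:
        """Links a human/social value to a testable system requirement."""
        value: str        # e.g. "privacy"
        concern: str      # the ethical concern raised by stakeholders
        requirement: str  # the derived, testable system requirement

    @dataclass
    class ValueRegister:
        """Collects value requirements so they can be traced through design."""
        entries: list = field(default_factory=list)

        def add(self, value, concern, requirement):
            self.entries.append(EthicalValueRequirement(value, concern, requirement))

    register = ValueRegister()
    register.add(
        value="privacy",
        concern="voice data could reveal household conversations",
        requirement="speech audio is processed locally and never stored",
    )
    for entry in register.entries:
        print(f"{entry.value}: {entry.requirement}")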
As IEEE explains, the standards could help organisations that design, develop, or operate AI
or other technical systems to directly address ethical concerns upfront, leading to more trust
among the end-users and increased market acceptance of their products, services, or systems.
The target groups are policy makers, regulators, technical standards developers, pertinent
international and inter-governmental organizations and technology developers and users
around the globe.
The goals of this Global Initiative include:
• Addressing the fundamental ethical and societal issues and implications of A/IS in regulations, policies, standards and treaties. This would contribute to IEEE taking a leadership role in addressing ethically aligned design in A/IS technology development, standardization and use, with the aim of building confidence and trust in A/IS. By taking into account human rights, human and social well-being, accountability and transparency in A/IS technology development, standardization and use, the societal and economic benefits of A/IS would be realized more readily.
• Establishing and implementing practices and instruments that will allow innovative and impactful A/IS, while ensuring that these technologies are developed and used responsibly and with accountability.
• Being proactive in proposing adequate policies for the successful and safe development and implementation of A/IS.
• Identifying opportunities to bring the IEEE Global Initiative's body of work -- Ethically Aligned Design: A Vision for Prioritizing Human Well-Being with Autonomous and Intelligent Systems and the associated IEEE 7000™ series of standards -- into practice.
• Collaborating with global partners to set precedents through standards and best practices.
IEEE P7000™ - Model Process for Addressing Ethical Concerns During System Design outlines an approach for identifying and analyzing potential ethical issues in a system or software program from the onset of the effort. The values-based system design method addresses ethical considerations at each stage of development to help avoid negative unintended consequences while increasing innovation.
IEEE P7002™ - Data Privacy Process specifies how to manage privacy issues for systems or software that collect personal data. It does so by defining requirements that cover corporate data collection policies and quality assurance. It also includes a use case and data model for organizations developing applications involving personal information. The standard helps designers by providing ways to identify and measure privacy controls in their systems using privacy impact assessments.
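As an illustration, a privacy impact check of this kind might flag collected personal-data fields that lack a documented control. The sketch below is a hypothetical example; the field names and categories are invented, not taken from P7002.

    # Fields regarded as personal data in this toy example.
    PERSONAL_DATA = {"name", "email", "zip_code", "face_image"}

    def privacy_impact_report(collected_fields, controls):
        """Flag collected personal-data fields lacking a documented control."""
        report = {}
        for f in collected_fields:
            if f in PERSONAL_DATA:
                report[f] = controls.get(f, "NO CONTROL DOCUMENTED - review required")
        return report

    app_fields = ["name", "email", "session_id"]
    app_controls = {"name": "encrypted at rest", "email": "user consent + opt-out"}
    for field_name, control in privacy_impact_report(app_fields, app_controls).items():
        print(field_name, "->", control)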
IEEE P7003™ - Algorithmic Bias Considerations provides developers of algorithmic systems with methodologies for avoiding unjustified differential outcomes, including establishing and communicating the application boundaries for which the algorithm has been designed and guarding against unintended consequences.
IEEE P7004™ - Standard on Child and Student Data Governance provides processes and certifications for transparency and accountability for educational institutions that handle data, meant to ensure the safety of students. The standard defines how to access, collect, share and remove data related to children and students in any educational or institutional setting where their information will be accessed, stored or shared.
Examples
• Image processing systems and their ability to accurately recognise facial characteristics,
including facial recognition and tracking, and applications that, for example, detect theft or
suspicious behaviour, or flag to law enforcement that an individual should be detained.
• Online purchasing systems can use location data (e.g. zip code) to make different products
and services available at different prices, to different groups of customers, who may be
predominantly from a specific ethnic group, age group or socio-economic class.
The purpose of this standard is to set out measurable, testable levels of transparency for autonomous systems. The general principle behind this standard is that it should always be possible to understand why and how the system behaved the way it did. Transparency is one of the eight General Principles set out in IEEE Ethically Aligned Design [B21], stated as "The basis of a particular autonomous and intelligent system decision should always be discoverable." A working group tasked with drafting this standard was set up in direct response to this principle.
P7001 defines five distinct groups of stakeholders, and AIS must be transparent to each
group, in different ways and for different reasons. These stakeholders split into two groups:
non-expert end users of autonomous systems (and wider society), and experts including
safety certification engineers or agencies, accident investigators, and lawyers or expert
witnesses.
For users, transparency (or explainability as defined in P7001) is important because it both
builds and calibrates confidence in the system, by providing a simple way for the user to
understand what the system is doing and why.
Taking a care robot as an example, transparency means the user can begin to predict what the robot might do in different circumstances. A vulnerable person might feel very unsure about robots, so it is important that the robot is helpful, predictable -- never doing anything that frightens them -- and above all safe. It should be easy to learn what the robot does and why, in different circumstances.
Robots and AIs are disruptive technologies likely to have significant societal impact. It is therefore very important that the whole of society has a basic level of understanding of how these systems work, so that we can confidently share work or public spaces with them.
This kind of transparency needs public engagement, for example through panel debates and science cafés, supported by high-quality documentaries distributed by mass media (e.g., YouTube and TV), which present emerging robotics and AI technologies and how they work in an interesting and understandable way.
For safety certification of an AIS, transparency is important because it exposes the system's decision-making processes for assurance and independent certification.
Example:
The type and level of evidence required to satisfy a certification agency or regulator that a
system is safe and fit for purpose depends on how critical the system is. An autonomous
vehicle autopilot requires a much higher standard of safety certification than, say, a music
recommendation AI, since a fault in the latter is unlikely to endanger life. Safe and correct
behaviour can be tested by verification, and fitness for purpose tested by validation.
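The verification side of this distinction can be illustrated with a toy example: checking an implementation against its specification. The speed limiter below is deliberately simple and is not a real certification artefact; real safety assurance involves far more rigorous techniques (formal methods, simulation, field trials).

    def limit_speed(requested_kmh, max_kmh=50.0):
        """Clamp a requested speed to the certified maximum."""
        return min(max(requested_kmh, 0.0), max_kmh)

    # Verification: does the implementation meet its specification?
    assert limit_speed(30.0) == 30.0    # normal request passes through
    assert limit_speed(120.0) == 50.0   # over-limit request is clamped
    assert limit_speed(-5.0) == 0.0     # negative request is rejected
    print("verification checks passed")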
Robots and other AI systems can and do act in unexpected or undesired ways. When they do, it is important that we can find out why. Autonomous vehicles provide a topical example of why transparency for accident investigation is so important.
One example of best practice is the aircraft Flight Data Recorder, or "black box"; a functionality we consider essential in autonomous systems.
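A minimal sketch of such a recorder, assuming a simple timestamped event log (the design is illustrative only, not the aircraft standard or any published ethical black box specification):

    import json, time

    class EventRecorder:
        """Flight-data-recorder-style log for a robot: sensor readings and
        decisions are timestamped so that investigators can later
        reconstruct what the system did and why."""
        def __init__(self):
            self._records = []

        def record(self, kind, payload):
            self._records.append({"t": time.time(), "kind": kind, "data": payload})

        def dump(self):
            return json.dumps(self._records, indent=2)

    box = EventRecorder()
    box.record("sensor", {"obstacle_distance_m": 0.4})
    box.record("decision", {"action": "stop", "reason": "obstacle within 0.5 m"})
    print(box.dump())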
Following an accident, lawyers or other expert witnesses who are obliged to give evidence in an inquiry or court case, or to determine insurance settlements, require transparency to inform their evidence. Both need to draw upon information available to the other stakeholder groups: safety certification agencies, accident investigators and users.
In addition, lawyers and expert witnesses may well draw upon additional information relating
to the general quality management processes of the company that designed and/or
manufactured the robot or AI system.
Application
System Transparency Assessment for a Robot Toy
In Winfield and Winkle (2020), we presented an ethical risk assessment for a fictional intelligent robot teddy bear we called RoboTED. Let us now assess the transparency of the same robot. RoboTED is an Internet (WiFi) connected device with cloud-based speech recognition and conversational AI (chatbot) with local speech synthesis; RoboTED's eyes are functional cameras allowing the robot to recognise faces; RoboTED has touch sensors, and motorised arms and legs to provide it with limited baby-like movement and locomotion -- not walking but shuffling and crawling.
Our ethical risk assessment (ERA) exposed two physical (safety) hazards: tripping over the robot and batteries overheating. Psychological hazards include addiction to the robot by the child, deception (the child coming to believe the robot cares for them), over-trusting of the robot by the child, and over-trusting of the robot by the child's parents.
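Such an assessment can be summarised per stakeholder group. The sketch below is a simplified illustration: the stakeholder groups follow P7001, but the numeric levels assigned to RoboTED are invented for the example rather than taken from the standard.

    # Hypothetical transparency levels for RoboTED (0 = none).
    assessment = {
        "users": 2,                    # e.g. spoken "why" explanations
        "wider society": 1,            # public documentation only
        "certification agencies": 3,   # design documents and test evidence
        "accident investigators": 0,   # no event logging fitted
        "lawyers/expert witnesses": 1,
    }

    for group, level in assessment.items():
        flag = "  <- needs improvement" if level == 0 else ""
        print(f"{group}: level {level}{flag}")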
Data privacy is one of the primary concerns of the information age. Data is no longer managed merely by humans; AI has an ever-increasing role in data processing.
Data privacy (or information privacy, or data protection) concerns the access, use and collection of data, and the data subject's legal right to that data.
Data privacy is also concerned with the costs if data privacy is breached, and such costs
include the so-called hard costs (e.g., financial penalties imposed by regulators, compensation
payments in lawsuits such as noncompliance with contractual principles) and the soft costs
(e.g., reputational damage, loss of client trust).
According to the US National Artificial Intelligence Initiative Act of 2020 (NAIIA), AI data systems use data inputs to (1) perceive real and virtual environments, (2) abstract those perceptions into models through automated analysis, and (3) use inference from those models to develop options for action. For example, predictive algorithms on social media observe users' activity, then offer them content (and advertising) that is relevant to their interests.
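This perceive-abstract-infer loop can be sketched with a toy recommender. The example is our own illustration, not drawn from the NAIIA text.

    from collections import Counter

    clicks = ["sports", "sports", "music", "sports", "news"]  # (1) perceive activity

    model = Counter(clicks)  # (2) abstract perceptions into a model

    def recommend(model):
        """(3) Infer an action: offer content on the user's top interest."""
        topic, _ = model.most_common(1)[0]
        return f"show more {topic} content"

    print(recommend(model))  # -> show more sports content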
AI can produce biased or incorrect conclusions. Biases of the AI's creators, or of the data used to train the AI, can be reflected in its output. For example, many facial recognition algorithms have higher error rates when used to identify women and racial minorities, likely due to being trained mostly on images of white men. The AI can also draw mistaken conclusions -- finding patterns and drawing conclusions from correlations that are only coincidences without deeper meaning. Biased training data likewise produces biased outcomes, preserving the human biases that objective computer analysis is meant to eliminate.
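A first step towards detecting such bias is to audit error rates per demographic group, since a large gap between groups signals possible bias. A minimal sketch, using invented toy data:

    predictions = [
        {"group": "A", "correct": True},  {"group": "A", "correct": True},
        {"group": "A", "correct": True},  {"group": "A", "correct": False},
        {"group": "B", "correct": True},  {"group": "B", "correct": False},
        {"group": "B", "correct": False}, {"group": "B", "correct": False},
    ]

    def error_rates(results):
        """Compute the error rate for each demographic group."""
        totals, errors = {}, {}
        for r in results:
            g = r["group"]
            totals[g] = totals.get(g, 0) + 1
            errors[g] = errors.get(g, 0) + (0 if r["correct"] else 1)
        return {g: errors[g] / totals[g] for g in totals}

    print(error_rates(predictions))  # {'A': 0.25, 'B': 0.75} - a large gap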
Many governments and organizations are starting to understand the limitations and complications of AI data processing, in addition to its benefits. As a result, agencies and organizations are developing AI ethics codes to correct for these flaws and increase transparency and oversight of AI systems.
For example, in 2020 the US Department of Defense adopted its own set of AI ethics
principles. The DoD's AI principles are a good example of the concept, having been
developed with input from national AI experts in government, academia, and the
private sector. In the DoD's ethical code, use of AI must be:
1. Responsible: DoD personnel will exercise judgment and care over the Department's
use and development of AI.
2. Equitable: The Department will actively attempt to minimize bias in AI capabilities.
3. Traceable: AI will be developed in such a way that relevant personnel understand the
technology, design procedure, and operations of the Department's AI.
4. Reliable: AI will have specific, well-defined uses, and will be tested on the safety,
security, and effectiveness in those uses throughout its life-cycle.
5. Governable: AI systems will be designed and built to fulfill their intended functions, while being able to detect and avoid unintended consequences, and to deactivate systems that exhibit unintended behavior.
AI use in data management can also benefit from this ethical code. Since AI used in
data management is often handling personal data and can make very impactful
decisions (like whether or not to approve someone for insurance), it is important to
ensure that the AI's conclusions are unbiased and that the people overseeing its
processes understand how it works. Knowing how an AI's decision-making works (as
opposed to the "black box" of unknown processes) makes it easier to repair errors,
correct for bias, or determine how much weight to give the AI's recommendations.
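The contrast with a black box can be illustrated by a toy inspectable model, where each feature's contribution to the decision is visible. The weights, features and decision rule below are invented for the example.

    weights = {"claims_last_year": -2.0, "years_insured": 0.5, "age_of_car": -0.3}
    applicant = {"claims_last_year": 1, "years_insured": 6, "age_of_car": 10}

    # Each feature's contribution to the score is individually inspectable.
    contributions = {f: weights[f] * applicant[f] for f in weights}
    score = sum(contributions.values())

    for feature, value in sorted(contributions.items(), key=lambda kv: kv[1]):
        print(f"{feature}: {value:+.1f}")
    print(f"total score: {score:+.1f} -> {'approve' if score > 0 else 'refer to human'}")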
While these five principles are an illustrative model of AI ethics, the DoD has
struggled to develop internal rules to actually implement them. Without clear rules or
detailed internal guidance to clarify implementation, having a set of ethical principles
is of limited effectiveness.
Organizations can implement these principles through measures such as the following:
1. Governance: Personnel responsible for overseeing AI processes should have clear guidance on their roles and duties, and should implement AI-specific risk management controls.
2. Level of Human Involvement: Determine the appropriate level of human involvement
based on the severity and probability of harm. If the severity and probability are low
(like content recommendation), AI can act without human involvement. If there is a
moderate risk (like GPS navigation), the user should have a supervisory role to take
over when the AI encounters problems. If there is a high risk and/or severe harm (like
medical diagnosis), AI should only provide recommendations and all decision-making authority should rest with the user (see the sketch after this list).
3. Operations Management: Implementation of the AI process must be reasonably
controlled. Organizations must take measures to mitigate bias in both the data and
the AI model. The AI's decision-making processes must be explainable, traceable,
reliable, and easily able to be audited and assessed for errors.
4. Stakeholder Interaction and Communication: An organization's AI policies must be
available and known to users, in a clear and easy-to-understand way. Users should
be allowed to provide feedback on the effectiveness of AI data management, if
possible.
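The risk-based rule in item 2 can be sketched as a simple decision function. The category names and the mapping below are our own illustration of the idea, not a prescribed scheme.

    def human_involvement(severity, probability):
        """Map risk severity/probability (low/medium/high) to a level of
        human involvement, taking the worse of the two ratings."""
        risk = {"low": 0, "medium": 1, "high": 2}
        level = max(risk[severity], risk[probability])
        return ["autonomous operation",             # e.g. content recommendation
                "human supervises, can take over",  # e.g. GPS navigation
                "AI recommends, human decides"][level]  # e.g. medical diagnosis

    print(human_involvement("low", "low"))    # autonomous operation
    print(human_involvement("high", "low"))   # AI recommends, human decides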
The EU AI Act
The EU, home of the flagship data privacy law GDPR, is preparing its own AI
framework. Unlike others, however, the EU AI Act will be a fully-enforceable law and
not an internal ethical code or guidance from an agency. First unveiled in 2021, the
AI Act could become the new standard for AI regulation.
The AI Act is based on the EU's ethics guidelines for trustworthy AI, written by the High-Level Expert Group on AI and the European AI Alliance. The bill draft covers a wide array of software and automated tools, and imposes four levels of regulation based on the risk they present to the public. These range from total bans to no regulation at all. High-risk AI systems, for example, must meet the following requirements:
1. Data Governance: The data used to train and test the AI must be relevant to its
purpose, representative of reality, error-free, and complete. Bias and data
shortcomings must be accounted for.
2. Transparency: Developers must disclose certain information about the system--the
AI's capabilities, limitations, intended use, and information necessary for
maintenance.
3. Human Oversight: Humans must be a part of the implementation--checking for bias
or dysfunction, and shutting the system down if it poses a risk to human safety or
rights.
4. Accuracy, Robustness, and Cybersecurity: High-risk AI systems must be accurate,
robust, and secure in proportion to their purpose. AI developers will have to provide
accuracy metrics to users and develop plans to ensure system robustness and
security.
5. Traceability and Auditability: AI developers must provide documentation covering an extensive list of criteria to prove they are in compliance with the above requirements.
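In the spirit of the data governance requirement above, automated checks can flag incomplete training data before it reaches the model. The sketch below is our own illustration, not the Act's conformity assessment procedure.

    dataset = [
        {"age": 34, "income": 52000, "label": 1},
        {"age": 29, "income": None,  "label": 0},
        {"age": 61, "income": 48000, "label": 1},
    ]

    def governance_report(rows):
        """Report rows with missing values (data must be complete)."""
        issues = []
        for i, row in enumerate(rows):
            missing = [k for k, v in row.items() if v is None]
            if missing:
                issues.append(f"row {i}: missing {missing}")
        return issues or ["no completeness issues found"]

    for line in governance_report(dataset):
        print(line)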
Limited-risk AI systems under the EU AI Act are those that do not pose a major risk to human
safety, but can still be abused to affect humans. This category includes deepfakes,
AIs designed to converse with humans, and AI-powered emotion-recognition
systems. The main issue with these is transparency--how does a user know if they
are interacting with an AI designed to emulate a human? The Act would grant EU
residents a right to know if they are looking at/hearing a deepfake, talking to a
chatbot, or being subject to AI emotion recognition.
The vast majority of AI systems pose a minimal risk to human well-being, and any type of AI system not specifically mentioned in the Act falls into this category. The AI Act will not regulate these AIs, but it strongly encourages developing codes of conduct to voluntarily apply good AI ethics to these systems.
The IEEE P7003 Standard for Algorithmic Bias Considerations is one of eleven IEEE ethics-related standards currently under development as part of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems.
The purpose of the IEEE P7003 standard is to provide individuals or organizations creating algorithmic systems with a development framework to avoid unintended, unjustified and inappropriately differential outcomes for users.
The 'Global Initiative' aims to provide "an incubation space for new standards and solutions, certifications and codes of conduct, and consensus building for ethical implementation of intelligent technologies"; its main pillars as of early 2018 are those listed earlier in this unit.
Example
• Security camera applications that detect theft or suspicious behaviour.
• Marketing automation applications that calibrate offers, prices, or content to an individual's preferences and behaviour.
Since the standard aims to allow for the legitimate ends of different users, such as
businesses, it should assist them in assuring citizens that steps have been taken to
ensure fairness, as appropriate to the stated aims and practices of the sector where the
algorithmic system is applied. For example, it may help customers of insurance
companies to feel more assured that they are not getting a worse deal because of the
hidden operation of an algorithm.
The standard will describe specific methodologies that allow users of the standard to
assert how they worked to address and eliminate issues of unintended, unjustified and
inappropriate bias in the creation of their algorithmic system. This will help to design
systems that are more easily auditable by external parties (such as regulatory bodies).
Elements include:
• a set of guidelines for what to do when designing or using such algorithmic systems, following a principled methodology (process), engaging with stakeholders (people), determining and justifying the objectives of using the algorithm (purpose), and validating the principles that are actually embedded in the algorithmic system (product);
• a practical guideline for developers to identify when they should step back to evaluate possible bias issues in their systems, pointing to methods they can use to do this;
• benchmarking procedures and criteria for the selection of validation data sets for bias quality control;
• methods for establishing and communicating the application boundaries for which the system has been designed and validated, to guard against unintended consequences arising from out-of-bound application of algorithms (see the sketch below);
• methods for user expectation management to mitigate bias due to incorrect interpretation of system outputs by users (e.g. correlation vs. causation), such as specific action points/guidelines on what to do if in doubt about how to interpret the algorithm outputs.
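The application-boundary idea can be sketched as a guard that refuses inputs outside the validated domain rather than silently mis-scoring them. The boundary values and placeholder scoring logic below are invented for the example.

    # The model in this toy example was only validated for ages 18-75.
    VALIDATED_AGE_RANGE = (18, 75)

    def score_applicant(age):
        lo, hi = VALIDATED_AGE_RANGE
        if not (lo <= age <= hi):
            raise ValueError(
                f"age {age} is outside the validated range {lo}-{hi}; "
                "the output would be unreliable (out-of-bound application)")
        return 0.5 + (age - lo) / (hi - lo) * 0.1  # placeholder scoring logic

    print(score_applicant(40))
    # score_applicant(16) raises ValueError instead of silently mis-scoring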
The 'algorithmic system design and implementation' orientated sections are currently envisaged to include sections on 'Algorithmic system design stages', 'Person categorizations and identifying of affected groups', 'Representativeness and balance of testing/training/validation data', 'System outcomes evaluation', 'Evaluation of algorithmic processing', 'Assessment of resilience against external biasing manipulation', 'Assessment of scope limits for safe system usage' and 'Transparent documentation', though it is anticipated that further sections will be added as work progresses.
The IEEE 1872-2015 Standard Ontologies for Robotics and Automation has been
developed using the METHONTOLOGY approach as described in Section II.
However, only concepts have been developed in the IEEE 1872-2015 standard.
The IEEE 1872-2015 Standard Ontologies for Robotics and Automation establishes a series of ontologies about the Robotics and Automation (R&A) domain, e.g. the Core Ontology for Robotics and Automation (CORA). METHONTOLOGY was chosen because it is a systematisation of the approaches that preceded it. It involves five sets of activities, namely pre-development, development, post-development, management, and support.
The following attributes were considered, based on Ontology Design Patterns:
• the ontology must be well designed for its purpose;
• it shall explicitly include stated requirements;
• it must meet all, and for the most part only, the intended requirements;
• it should not make unnecessary commitments or assumptions;
• it should be easy to extend to meet additional requirements;
• it reuses prior knowledge bases as much as possible;
• there is a core set of primitives that are used to build up more complex parts (see the sketch below);
• it should be easy to understand and maintain;
• it must be well documented.
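The 'core set of primitives' attribute can be illustrated with a tiny CORA-inspired class hierarchy, in which a complex concept is built from simpler ones. This is a simplified, informal reading, not the normative IEEE 1872 ontology.

    class PhysicalObject: pass
    class Agent(PhysicalObject): pass    # primitive: something that can act
    class Device(PhysicalObject): pass   # primitive: an engineered artefact

    class Robot(Agent, Device):
        """A complex concept built from the primitives: informally,
        CORA treats a robot as an agentive device."""

    r = Robot()
    print(isinstance(r, Agent), isinstance(r, Device))  # True True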
Hence, the proposed life cycle (RoSaDev) to develop a robotic ontological standard is an Agile-inspired, iterative method which involves four steps.
1. Collaborative Approach
The first step consists of identifying the ontological concepts for the standard, followed by their development and formalization.
These steps are carried out collaboratively, through brainstorming and discussion, reaching consensus between multiple stakeholders such as experts from Public Bodies, Academia, and Industry.
2. Middle-out Approach
Concept development follows a middle-out approach, addressing the potential use cases that have been developed.
The related IEEE P7007 standard (Ontological Standard for Ethically Driven Robotics and Automation Systems) covers ethics and moral theories, with a particular emphasis on aligning the ethics and engineering communities to understand how to pragmatically design and implement these systems. It is intended to serve as:
(i) a guide for teaching ethical design
(ii) a reference by policy makers and governments to draft AI related policies;
(iii) a common vocabulary to enable the communication among government
agencies and other professional bodies around the world;
(iv) a framework to create systems that can act ethically; and
(v) a foundation for the elaboration of other ethical compliance standards.
Example:
Name: Robot Companion to Recognise the Elderly's Behaviour and to Suggest Actions
Identifier (optional): P7007 Use case
Intent/Purpose: The use case describes how the robot can analyse an elderly person's behaviour and suggest suitable activities, for example:
• if the elderly person is bored, the robot may suggest playing a board game;
• if the elderly person needs to talk to their family, the robot may suggest a Skype call;
• if the elderly person has fallen, the robot may call their caregiver for help.
Preconditions: The elderly person is at the care home. The robot is at the care home. The robot can move around the care home. The robot is always looking after the elderly persons.
Alternate Related Scenario (optional): The use case can also be applied or extended for care robots deployed at elders' homes.
Relevant Knowledge: {capability, behaviour, services, actions, recognition, Skype call, call for help, user's will, safety, ignore, interaction, pose recognition, voice recognition, play board game, emotion recognition, activate, deactivate, knowledge base, task}
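The behaviour rules in this use case amount to a condition-action table, which can be sketched as a simple lookup. The state names and default action are our own illustration.

    def suggest_action(observed_state):
        """Map a recognised state of the elderly person to a suggested action."""
        rules = {
            "bored": "suggest playing a board game",
            "lonely": "suggest a Skype call with family",
            "fallen": "call the caregiver for help",
        }
        return rules.get(observed_state, "keep observing")

    for state in ["bored", "fallen", "sleeping"]:
        print(state, "->", suggest_action(state))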