
AI in Robotics: Ethical, Legal, Professional and Social Issues

1. Ethical Issues
Human-Robot Interaction
The study of human-robot interaction (HRI) is concerned with understanding how individuals and
machines communicate and work together. Ethical issues can arise when examining the effects of
HRI on individuals, on society, and on the way people and machines interact. In elderly care homes,
for example, social robots are increasingly used to give older people companionship and support:
they hold conversations, help with tasks around the house, and offer emotional support. Several
ethical concerns arise from human-robot interaction in this setting. From a deontological
perspective, the loss of human connection is one of the main concerns raised by the use of social
robots. Elderly people may experience isolation and loneliness if social robots entirely replace
human contact, and because it diminishes the value of interpersonal relationships and empathy, the
absence of genuine human connection and emotional understanding can be considered unethical.
Another ethical issue concerns privacy and surveillance. Residents' right to privacy may be violated
if social robots are given sophisticated monitoring capabilities without adequate authorization or
transparency, and if those capabilities are misused, privacy violations and surveillance concerns
follow. Consequentialist theories, on the other hand, focus on the positive outcomes that social
robots can bring, particularly emotional care and companionship. By offering residents company
and emotional support, social robots can supplement human care; if they are ethically designed to
enhance human connections rather than replace them, they can reduce loneliness and improve
wellbeing. Additionally, ensuring personalized and consent-based interactions is crucial. By
incorporating personalization and obtaining informed consent from elderly residents, social robots
can respect individual autonomy and preferences, and ethical HRI practice ensures that residents
retain control over their interactions with the robots, deciding for themselves the level of
engagement and privacy they are comfortable with, as sketched in the example below. To maintain
an ethical and respectful approach to HRI in such sensitive contexts, the benefits of robotic
assistance must be balanced against the preservation of human connection and privacy rights.

(plato.stanford.edu/, 2023)
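
As a minimal sketch of how consent-based interaction might look in practice, the hypothetical
Python example below checks a resident's stated preferences before the robot starts a conversation
or records audio. The ResidentPreferences class, its fields, and the behaviour are illustrative
assumptions, not taken from any cited system.

from dataclasses import dataclass

@dataclass
class ResidentPreferences:
    """Illustrative consent settings a resident (or carer) could configure."""
    allow_conversation: bool = True      # may the robot initiate small talk?
    allow_audio_recording: bool = False  # may interactions be recorded?
    max_daily_interactions: int = 5      # cap on unsolicited engagements

def may_start_conversation(prefs: ResidentPreferences, interactions_today: int) -> bool:
    """Return True only if the resident has consented and the daily cap is not reached."""
    return prefs.allow_conversation and interactions_today < prefs.max_daily_interactions

def may_record_audio(prefs: ResidentPreferences) -> bool:
    """Recording requires an explicit opt-in; the default is no recording."""
    return prefs.allow_audio_recording

# Example: a resident who enjoys conversation but has not consented to recording.
prefs = ResidentPreferences(allow_conversation=True, allow_audio_recording=False)
print(may_start_conversation(prefs, interactions_today=2))  # True
print(may_record_audio(prefs))                              # False

The point of the sketch is simply that the robot consults the resident's choices before acting,
rather than treating engagement and recording as defaults.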
2. Legal Issues
Data privacy and security are significant concerns in the use of AI and robotics, because these
technologies frequently involve the collection, processing, and storage of large amounts of
personal data. Collecting and processing sensitive data about individuals raises concerns about
privacy and security. AI and robotics often rely on data to make informed decisions, learn, and
improve their performance, but this reliance raises questions about individuals' right to privacy
and the responsible handling of their personal information. To protect people's rights and maintain
confidence in robotic and AI systems, it is essential that this data is handled appropriately. The
Data Protection Act 2018 is the legal framework that governs how organizations gather, store,
process, and handle personal data, and it applies to AI-driven robotics systems operating in the
UK. Consider a robotic assistant with AI being developed to support elderly people at home, which,
without the residents' knowledge, unintentionally records and retains their private movements and
conversations. Under Article 5(1)(c), the data gathered must be adequate, relevant, and limited to
what is necessary for the intended purpose, so unintended data collection must be kept to a
minimum, as sketched in the example below.
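
A minimal sketch of data minimisation, assuming a hypothetical care-robot event record: only the
fields needed for the declared purpose are kept, and everything else is discarded before storage.
The field names and purposes are illustrative assumptions, not requirements taken from the Act.

# Hypothetical raw event captured by a care robot's sensors.
raw_event = {
    "timestamp": "2024-03-01T09:15:00",
    "room": "kitchen",
    "activity": "standing",
    "audio_transcript": "private conversation ...",  # not needed for the stated purpose
    "heart_rate": 72,
}

# Fields that are adequate, relevant and limited to what each declared purpose requires.
PURPOSE_FIELDS = {
    "fall_detection": {"timestamp", "room", "activity"},
    "medication_reminder": {"timestamp"},
}

def minimise(event: dict, purpose: str) -> dict:
    """Keep only the fields required for the declared purpose, in the spirit of Article 5(1)(c)."""
    allowed = PURPOSE_FIELDS[purpose]
    return {key: value for key, value in event.items() if key in allowed}

stored = minimise(raw_event, "fall_detection")
print(stored)  # {'timestamp': '2024-03-01T09:15:00', 'room': 'kitchen', 'activity': 'standing'}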
In another scenario, a robotics company is developing an AI-powered robot that can recognise
people using face recognition and communicate with them in public places; to provide specialised
services, the robot records and processes users' facial data. Under Article 6, the company must
have a valid lawful basis for collecting facial data, such as obtaining individuals' explicit
consent or demonstrating a genuine business need. Facial data may also be categorised as special
category biometric data, and processing such data requires additional conditions to be met, such
as explicit consent or another specific legal justification. The Human Rights Act is also essential
in ensuring that individuals' rights are protected where AI in robotics affects security and
privacy. Consider an autonomous delivery robot designed to move parcels and items around a city,
which gathers and stores customer data such as delivery addresses and payment information. A
security failure in the robot's system allows hackers to access and steal this private information.
The principles of data protection are closely tied to the right to privacy under Article 8, so it
is crucial to ensure the security and confidentiality of personal data whenever AI-driven robots
process and store it, for example by encrypting stored records, as sketched below.
(robohub.org, 2021)
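
The breach scenario above turns on personal data being stored in a readable form. As a hedged
illustration only, the sketch below encrypts a delivery record before it is stored, using the
third-party cryptography package (an assumption of this example, not a requirement of the Act);
in a real system the key would live in a secure key store rather than in the program.

import json
from cryptography.fernet import Fernet  # assumes the `cryptography` package is installed

# In practice the key would come from a secrets manager or hardware security module,
# never from source code.
key = Fernet.generate_key()
cipher = Fernet(key)

delivery_record = {
    "customer": "A. Example",
    "address": "1 Example Street",
    "payment_token": "tok_123",
}

# Encrypt before storing, so a stolen file alone does not expose personal data.
ciphertext = cipher.encrypt(json.dumps(delivery_record).encode("utf-8"))

# Only holders of the key can recover the plaintext.
plaintext = json.loads(cipher.decrypt(ciphertext).decode("utf-8"))
print(plaintext["customer"])  # A. Example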
3. Professional Issues
The safety of individuals and the environment is one of the most important professional issues in
AI and robotics. Autonomous robots operating in dynamic and unpredictable environments, such as
cities, factories, and hospitals, need robust safety procedures to prevent accidents and reduce
potential risks. Consider a business that uses small autonomous delivery robots to carry packages
from a distribution centre to customers' doorsteps. These robots move along roads and pedestrian
paths, interacting with people and other vehicles as they go. Although this technology may increase
productivity and convenience, it also raises a number of professional challenges that need to be
addressed. The Public Interest section of the BCS Code of Conduct places significant weight on
professionals' responsibility to act in the public interest, which includes ensuring the
appropriate use of technology and protecting public health and safety. In addition, under the Duty
to Relevant Authority section of the code (section 3d), professionals must not disclose clients'
confidential information except with the permission of the relevant authority or as required by
law. For AI-powered robots to identify and avoid collisions with people, other robots, and objects
in their environment, they need capable sensor systems and algorithms, and real-time responsiveness
must be maintained to prevent accidents. Autonomous robots also need to be programmed to respond
effectively in emergency situations: a delivery robot that encounters an obstacle, for instance,
must avoid dangerous movements that could endanger the safety of pedestrians or vehicles, as
illustrated in the sketch below.
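
A minimal sketch of such an emergency-stop check, assuming a hypothetical delivery robot that
receives distance readings (in metres) from its obstacle sensors; the threshold values and function
names are illustrative, not taken from any real product.

# Hypothetical safety thresholds (metres) for a small pavement delivery robot.
STOP_DISTANCE = 0.5   # anything closer triggers an immediate stop
SLOW_DISTANCE = 2.0   # anything closer triggers reduced speed

def choose_action(sensor_readings_m: list[float]) -> str:
    """Pick the most cautious action suggested by the nearest detected obstacle."""
    nearest = min(sensor_readings_m, default=float("inf"))
    if nearest < STOP_DISTANCE:
        return "emergency_stop"
    if nearest < SLOW_DISTANCE:
        return "slow_down"
    return "proceed"

# Example: a pedestrian detected 0.4 m ahead forces an emergency stop.
print(choose_action([3.2, 0.4, 5.1]))  # emergency_stop
print(choose_action([3.2, 1.5]))       # slow_down
print(choose_action([]))               # proceed (no obstacles detected)

The design point is that the most conservative response always wins, so a single close reading is
enough to halt the robot regardless of what the other sensors report.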
The field of robotics is also still developing, and most people who would use domestic robots in
their homes are unfamiliar with how to operate them and cannot assess their safety and quality. It
is therefore the manufacturers' responsibility to inform their customers clearly about how to
operate these robots and about the terms and conditions that apply if a robot causes damage in a
household. An IT professional can affect the life, health, privacy, and future of a client through
the services and work they provide, so dishonesty or incompetence on the professional's part can
cause real damage or harm. It is therefore the professional's responsibility to notify a customer
before releasing any sensitive information that may come into their possession through monitoring
and maintenance of the domestic robot's system. To increase public confidence in and acceptance of
AI-powered robots, it is crucial to implement strong safety safeguards in their AI algorithms.

(themanufacturer.com, 2023)
4. Social Issues
Autonomy and accountability are two major social issues in robotics and artificial intelligence.
As AI-powered robots grow more independent, the question of how to hold them accountable for their
actions and decisions becomes pressing, especially when robots are used in critical or high-stakes
situations. Robots powered by AI are increasingly built to function autonomously, meaning they can
make decisions and complete tasks without direct human input. While autonomy can increase
effectiveness and performance in some situations, it also raises concerns about responsibility and
accountability for the actions and effects of AI robots. The development of autonomous vehicles is
a prime example of this social issue. Autonomous vehicles use AI algorithms and sensors to navigate
the road and make rapid judgments about when to brake, accelerate, or change lanes. While their
promise includes potential safety improvements and a reduction in traffic accidents, incidents
involving these vehicles have raised concerns about responsibility. Determining accountability and
blame when an autonomous vehicle is involved in an incident that results in injuries or fatalities
can be difficult: it is often unclear whether the car's AI system, the manufacturer, the driver, or
other external, uncontrollable factors are to blame. This poses legal, ethical, and regulatory
problems for establishing responsibility and ensuring that victims receive fair compensation and
justice. If the issue of autonomy and responsibility in AI and robotics is addressed proactively,
society can enjoy the benefits of autonomous systems while reducing potential risks and assuring
responsible and safe deployment.
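
The difficulty described above stems partly from the decision chain being opaque after the fact.
One commonly discussed practical aid, offered here only as a hedged sketch with made-up field
names, is an append-only audit trail that records each autonomous decision together with the
sensor evidence behind it, so investigators can later reconstruct what the system believed when it
acted.

import json
import time

AUDIT_LOG = "decisions.log"  # in practice, tamper-evident storage would be used

def log_decision(decision: str, sensor_summary: dict, confidence: float) -> None:
    """Append one decision record; each line is an independent JSON document."""
    record = {
        "timestamp": time.time(),
        "decision": decision,            # e.g. "brake", "change_lane"
        "sensor_summary": sensor_summary,
        "confidence": confidence,
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as log_file:
        log_file.write(json.dumps(record) + "\n")

# Example: the vehicle brakes because an object was detected 8 m ahead at 42 km/h.
log_decision("brake", {"object_ahead_m": 8.0, "speed_kmh": 42}, confidence=0.93)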
