
Test: Ethics & Robotics

Daniel Campos Chávez


UPA MTR 2 A

Abstract - This paper deals with the birth of Roboethics, the ethics inspiring the design, development and employment of Intelligent Machines. Roboethics shares many 'sensitive areas' with Computer Ethics, Information Ethics and Bioethics. It investigates the social and ethical problems arising from the effects of the Second and Third Industrial Revolutions on the domain of human/machine interaction. Urged by the responsibilities involved in their professions, an increasing number of roboticists from all over the world have started - in cross-cultural collaboration with scholars of the Humanities - to thoroughly develop Roboethics, the applied ethics that should inspire the design, manufacturing and use of robots. The result is the Roboethics Roadmap. Some common questions: How far can we go in embodying ethics in a robot? What kind of ethics would a robotic one be? How contradictory are, on one side, the need to implement an ethics in robots and, on the other, the development of robot autonomy? Although far-sighted and forewarning, could Asimov's Three Laws really become the Ethics of Robots? Is it right to talk about the consciousness, emotions, and personality of robots?

I. ETHICS AND ROBOTICS. Robotics is rapidly becoming one of the leading fields of science and technology. Figures released in the IFR/UNECE Report 2004 show double-digit growth in many subsectors of Robotics, making it one of the fastest-developing technological fields. We can forecast that in the XXI century humanity will coexist with the first alien intelligence we have ever come into contact with - robots. All these developments have important social, ethical, and economic effects. As with other technologies and applications of scientific discoveries, the public is already asking questions such as: Could a robot do "good" and "evil"? Could robots be dangerous for humankind? Like Nuclear Physics, Chemistry or Bioengineering, Robotics too could soon be placed under scrutiny from an ethical standpoint by the public and by public institutions (governments, ethics committees, and supranational institutions). Feeling the responsibilities involved in their practices, an increasing number of roboticists from all over the world, in cross-cultural collaboration with scholars of the Humanities, have started deep discussions aimed at laying down Roboethics, the ethics that should inspire the design, manufacturing and use of robots. The Three Laws of Robotics (often shortened to The Three Laws or Three Laws) are a set of rules devised by the science fiction author Isaac Asimov and later added to. The rules were introduced in his 1942 short story "Runaround", although they were foreshadowed in a few earlier stories. The Three Laws are:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

The Three Laws form an organizing principle and unifying theme for Asimov's robot-based fiction, appearing in his Robot series, the stories linked to it, and his Lucky Starr series of young-adult fiction. The Laws are incorporated into almost all of the positronic robots appearing in his fiction, and cannot be bypassed, being intended as a safety feature. Many of Asimov's robot-focused stories involve robots behaving in unusual and counter-intuitive ways as an unintended consequence of how the robot applies the Three Laws to the situation it finds itself in. Other authors working in Asimov's fictional universe have adopted them, and references, often parodic, appear throughout science fiction as well as in other genres. The original laws have been altered and elaborated on by Asimov and other authors. Asimov himself made slight modifications to the first three in various books and short stories to further develop how robots would interact with humans and each other; he also added a fourth, or zeroth, law to precede the others:

0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

The Three Laws, and the zeroth, have pervaded science fiction and are referred to in many books, films, and other media. It is recognized that they are inadequate to constrain the behavior of robots (see friendly artificial intelligence), but it is hoped that the basic premise underlying them, preventing harm to humans, will ensure that robots are acceptable to the general public.
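In Asimov's fiction the Laws work as a strict priority ordering: a lower-numbered law always overrides a higher-numbered one. As a toy illustration only (every name and flag below is hypothetical, and, as noted above, such rules are inadequate for real robots), that ordering can be sketched as a sequence of checks where the first violated law decides:

```python
# Hypothetical sketch: Asimov's Laws as a lexicographic priority check.
# None of this models a real system; the flags are illustrative stand-ins
# for judgments a robot could not actually make this simply.
from dataclasses import dataclass


@dataclass
class Action:
    harms_human: bool = False        # would violate the First Law by action
    allows_human_harm: bool = False  # would violate the First Law by inaction
    disobeys_order: bool = False     # would violate the Second Law
    endangers_self: bool = False     # would violate the Third Law


def permitted(action: Action) -> bool:
    """Check the Laws in strict priority order; the first violated law decides."""
    # First Law: no injury to a human, by action or inaction.
    if action.harms_human or action.allows_human_harm:
        return False
    # Second Law: obey orders (conflicts with the First Law were excluded above).
    if action.disobeys_order:
        return False
    # Third Law: self-preservation, subordinate to the first two Laws.
    if action.endangers_self:
        return False
    return True


# An ordered harmful act is rejected by the First Law before obedience
# (the Second Law) is ever considered.
print(permitted(Action(harms_human=True)))    # False
print(permitted(Action(endangers_self=True))) # False
print(permitted(Action()))                    # True
```

The sketch makes the essay's point concrete: all the ethical difficulty is hidden inside the Boolean flags, i.e., in deciding what counts as "harm" or "inaction" in the first place.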

From which standpoint do we as ethicists speak? And for whom? What are the consequences, and what is the (potential) field of application, of an ethics of human interaction with communication, bionic and robotic systems (in the following: techno-ethics)? An important part of it should be an ethics of technology design and production. Techno-ethics should support strong, contestatory democratic practice and citizen activity involved in the creation of technoscientific artifacts. The leading question is how to design an interdisciplinary process that also involves engineers and technology designers in the ongoing discussion.

A second question is whether or how it is possible (and desirable) to develop a general ethics for any kind of robots and agents. In which case(s) do we need a differentiation of fields of application and types of robots/agents with regard to ethical concerns? On a socio-technical level, robots are described as sensorimotor machines which expand the human ability to move. They consist of mechatronic components, sensors, and computer-based control and steering functions. "The complexity of a robot is bigger than that of other machines because of its higher degree of freedom and its multiplicity and amount of behaviours." (Christaller et al. 2001, transl. Jutta Weber). Relevant questions to discuss are whether there is a qualitative difference between classical machines, trans-classical machines, and autonomous systems. A third question should also be: cui bono? For whom and by whom are robots developed? Who fits the standards that robots and robotic devices like AIBO, Pino, Paro, Kismet etc. embody? Do they contribute to deeper equality, keener appreciation of heterogeneous multiplicity, and stronger accountability for livable worlds? Besides that, a reflection on the socio-cultural context of the debate on robots and agents is needed. What kinds of societal conflicts and power relations are intertwined in the production and usage of agents and robots? How does the fusion of science, technology, industry and politics come into play? What about the military interest in robotics and agents? Last but not least, a central task for techno-ethics is to learn the lessons from the discussion on bioethics. For example: we should avoid abstract discussions of the agency or intentionality of agents and robots, and reflect on whether such discussions are helpful to work out the contest over the future development and use of agents and robots.

The massive use of robots will probably change society in a similar way as cars and airplanes (and, in former times, ships etc.) did - and it has already changed society: think of industrial robots in the workplace, which are an important factor with regard to growing unemployment in Europe. This broad view of societal changes, and consequently of the view(s) of ourselves, including our (moral) values, is fundamental. There may be a re-definition of what it means to be human. For instance, the EU Charter of Human Rights is human-centered. The massive use of robots may challenge this anthropocentric perspective. Why do we want to live with robots? What do we live with robots for? There are different levels of reflection when answering these questions, starting with the trivial one that robots can be very useful and indeed indispensable, for instance in today's industrial production or when dealing with situations in which the dangers for humans are great. But before reflecting in this direction, let us take the perspective of what René Girard calls mimetic desire (Girard 1972).

Robotics and Ethics. Is Robotics a new science, or is it a branch or field of application of Engineering? Actually, Robotics is a discipline born from Mechanics, Physics/Mathematics, Automation and Control, Electronics, Computer Science, Cybernetics and Artificial Intelligence. Robotics is a unique combination of many scientific disciplines, whose fields of application are broadening more and more, in step with scientific and technological achievements.

Specificity of Robotics. It is the first time in history that humanity is approaching the challenge of replicating an intelligent and autonomous entity. This compels the scientific community to examine closely the very concept of intelligence - in humans, animals, and machines - from a cybernetic standpoint. In fact, complex concepts like autonomy, learning, consciousness, evaluation, free will, decision making, freedom, emotions, and many others shall be analyzed, taking into account that the same concept does not have the same semantic meaning in humans, animals, and machines. From this standpoint, it can be seen as natural and necessary that Robotics draws on several other disciplines, like Logic, Linguistics, Neuroscience, Psychology, Biology, Physiology, Philosophy, Literature, Natural History, Anthropology, Art, and Design. Robotics de facto combines the so-called two cultures, Science and Humanities. The effort to design Roboethics should take this specificity into account. This means that experts shall consider Robotics as a whole - in spite of its current early stage, which recalls a melting pot - so they can achieve the vision of Robotics' future.

1. Epistemological, Ontological, and Psychoanalytic Implications. The relation between humans and robots can be conceived as an envy relation, in which humans either envy robots for what they are, or envy other humans for having robots that they do not have. In the first case, envy can be positive, if the robot is considered a model to be imitated, or negative, if the relationship degenerates into rivalry. This last possibility is exemplified in many science fiction movies and novels in which robots and humans are supposed to compete. Robots are then often represented as emotion-free androids, lacking moral sense and therefore of less worth than humans. Counter-examples are, for instance, 2001: A Space Odyssey (Stanley Kubrick 1968) or Stanisław Lem's novel Golem XIV (Lem 1981). The mimetic conflict arises not only from imitating what a robot can do but, more basically, from imitating what it is supposed to desire. But a robot's desires are paradoxically our own, since we are the creators. The positive and negative views of robots shine back into human self-understanding, leading to the idea of enhancing human capabilities, for instance by implanting artificial devices in the human body. When robots are used by humans for different tasks, this creates a situation in which the mimetic desire is articulated either as a question of justice (a future robot divide) or as a new kind of envy. This time the object of envy is not the robot itself but the other human using/having it. The foundational ethical dilemma with regard to robots is thus not just the question of their good or bad use but the question of our relation to our own desire, with all its creative and destructive mimetic dynamism, which includes not only strategies such as envy, rivalry and modeling but also their trivial use as a tool that eventually turns out to be a question of social justice.

Robots can be seen as masks of human desire. Our mimetic desire might influence (but how far?) the exchange value they get in the marketplace. Our love affair with them opens a double-bind relationship that includes the whole range of human passions, from indifference through idealization to rivalry and violence - although this might not be the case with regard to the contemporary state of the art in robotics, as robots still lack intelligence and unpredictable behaviour. It is the task of ethical reflection to go beyond the economic dimension in order to discover the mechanisms that make possible the invention, production, and use of robots of all kinds. These mechanisms are based on the human mimetic passion(s) on an individual as well as on a societal scale. In a mythical sense, robots are experienced by our secularized and technological society as scapegoats for what is conceived as the humanness of humanity, whose highest and most global expression is the Universal Declaration of Human Rights. From this mythical perspective, robots are the bad and the good conscience of ourselves. They give us the possibility of a moral discourse on ourselves, but at the same time they take our attention away from the intolerable situation of infringement of these rights with regard to real human beings. In other words, an ethical reflection on robots must take care of these pitfalls, particularly when considering the dangers of the mimetic desire with regard to human dignity, autonomy or data protection. It must reflect the double-bind relationship between humans and robots. If robots mirror our mimetic desire, we should develop individual and social strategies in order to unmask the unattainable object we strive for, which turns into a danger when it looks like a fulfillment in view of which everything, including ourselves, should be regarded as a means to an end. The concept of human dignity is a hallmark above and beyond our own desire. It is a hallmark of self-transcendence, independent of technological and/or religious promises. It allows us to avoid ideological or fundamentalist blockades while regulating at the same time the dynamic of mimetic desire.

The concept of robot is ambiguous. According to Karel Čapek, who introduced the term, a robot is a human-like artificial device, an android, that is able to perform autonomously, i.e., without permanent human guidance, different kinds of tasks, particularly in the field of industrial production. Anthropomorphic robots, but also artificial devices imitating different kinds of living beings, have a long tradition. Today's industrial robots are often not human-like. There is a tension between technoid and naturoid artificial products (Negrotti 1995, 1999, 2002). The concept of artificiality itself is related to something produced by nature and imitated by man. Creating something similar but not identical to a natural product points to the fact that anything to be qualified as artificial should make a difference with regard to the natural or the original (Negrotti). Robots are mostly conceived as physical agents. With the rise of information technology, softbots or software agents have been developed that also have an impact in the physical world, so that it is difficult to draw a clear border. This is also the case with regard to the hybridization between humans and robots (cyborgs). In fact, not only individuals but society as a whole is concerned with a process of cyborgization.

What are robots? They are products of human dreams (Brun 1992, Capurro 1995). Every robotic idea entails the hidden object of our desire. Robots are thus like the images of the gods (Greek: agalma) inside the mask of a satyr. According to Jacques Lacan's psychoanalytic interpretation (Lacan 1991), following the Platonic narrative of the love encounter between Socrates and Alcibiades in the Symposium (Symp. 222), such small objects are the unattainable and impossible goal of human desire. Plato describes in the Timaeus the work of the demiurge shaping the world as a resemblance (agalma) of the divine - a work of joy and therefore an incentive to make the copy more similar to the original (paradeigma). In sum, our values, or the goal of our desire, are embedded in all our technological devices, and particularly in the kind of products that mimic our human identity. Therefore, the question is not only which values we are trying to realize through them, but why we are doing this. Robots are a mirror of shared cultural values that show to us and to others who we want to be. We redefine ourselves in comparison with robots in a similar way as we redefine ourselves in comparison with animals or with gods. These redefinitions have far-reaching economic and cultural implications. But who is the "we" of this kind of psychoanalytic discourse? What about the engineering culture that is mostly involved in the development and design of robots? Gender approaches claim that there is a masculine culture of technology production. So do all people have the same kind of double-bind relationship to robots? And what about cultural differences?

From Myth to Science Fiction. The issue of the relationship between humankind and autonomous machines - or automata - appeared early in world literature, developed first through legends and myths, and more recently in scientific and moral essays. The topic of the rebellion of automata recurs in classic European literature, as does the misuse or evil use of the products of ingenuity. This is not so in all the world's cultures: for instance, the mythology of Japanese culture does not include such a paradigm. On the contrary, machines (and, in general, human products) are always beneficial and friendly to humanity. This difference in seeing machines is a subject we should take into account and analyse.

2. Ethical Aspects of Man-Machine Relations. How do we live in a technological environment? What is the impact of robots on society? How do we (as users) handle robots? What methods and means are used today to model the interface between man and machine? What should we think about the mimicry of emotions and stereotypes of social norms? What kind of language/rhetoric is used in describing the problems of agents and bots, and which one do we want to use? In AI and robotics we often find a sloppy usage of language which supports the anthropomorphising of agents. This language often implies the intentionality and autonomy of agents, for example when researchers speak of the learning, experience, emotion, decision making (and so on) of agents. How are we going to handle this problem in science and in our social practices? Robots are not ready-made products of engineers and computer scientists but devices and emerging technologies in the making:

What are the consequences of the fact that today ICT devices are developed by computer scientists and engineers only? What is the meaning of the master-slave relation with regard to robots? What is the meaning of the robot as a partner in different settings? Recent research on social robots is focusing on the creation of interactive systems that are able to recognise others, interpret gestures and verbal expressions, recognise and express emotions, and that are capable of social learning. A central question concerning social robotics is how building such technologies shapes our self-understanding, and how these technologies impact society. To understand the implications of these developments it is important to analyse central concepts of social robotics like the social, sociality, human nature and human-style interactions. The main questions are: What concepts of sociality are translated into action by social robotics? How is social behaviour conceptualised, shaped, or instantiated in software implementation processes? And what kinds of social behaviours do we want to shape and implement in artefacts?

There is a tendency to develop robots modeling some aspects of human behavior instead of developing a full android (Arnall 2003). Relative autonomy is a goal for physical robots as well as for softbots. What is the meaning of the concept of autonomy in robotics? What are the affinities and differences between the robotic discourse and the philosophical discourse? Obviously, we can observe a strong bidirectional travel of the concept of autonomy (as well as those of sociality, emotion and intelligence) between very diverse discourses and disciplines. How does this transfer of concepts between disciplines - and especially the strong impact of robotics - change the traditional meanings of concepts like autonomy, sociality, emotion and intelligence?

Having regard to the EU Charter of Fundamental Rights (Art. 1, 3, 6, 8, 25, 26), the following questions arise: (a) Who is responsible for undesired results of actions carried out by human-robot hybrid teams? (b) How is the monitoring and processing of personal data by AI agents to be regulated? (c) Can bionic implants be used to enhance, rather than restore, physical and intellectual capabilities?

All three questions address possibilities that have an immediate impact on single human beings, since responsibility is traditionally attributed to single actors (which include individuals), the human right to privacy protects the ability to live autonomously, and enhancements are for the benefit of a single person. But the importance of robot-human integration goes beyond the level of the single individual, and raises the question of how a society or community in which bots are integrated could and should look. Probably only certain members of a society or community will interact with certain kinds of bots - for instance, entertainment bots for rich people, service bots for elderly or ill people, etc. This kind of interaction with bots may also build new forms of communities. Close attention should be paid to which groups of individuals are likely to interact with certain kinds of bots in a certain context, while at the same time keeping the perspective on the impact of the specific interactions on the communities and societies in which these specific forms of interaction take place. All three forms of human-bot integration may include aspects of violation as well as fostering of human rights and dignity. It cannot even be ruled out that one and the same technology may have both positive and negative effects. Surveillance infrastructures may be considered harmful with regard to privacy, but they may also enable us to create new kinds of communities.

Potential benefits or harm may be caused by certain forms of human-bot integration. How do we resolve arising conflicts, especially if there is a conflict between the individual perspective and the perspective of a society or community? Such enhancements might be considered a benefit to an individual, but they also raise new questions, such as whether only an elite might be able to transform themselves into cyborgs or - in a worst-case scenario - whether the unemployed would be forced to have some sort of implant to enable them to do certain jobs. When addressing the question of responsibility, we should take into account that there are different levels of responsibility even when ascribing responsibility to an individual, who might be held responsible for something with regard to her/his personal well-being, to the social environment (friends, family, community), to his/her specific (professional or private) role, as a citizen who is responsible to the society or the state someone lives in, or as a human being at all. Furthermore, this includes the question of whether and how responsibility might be delegated, and whether institutions might be morally responsible with regard to robots. Robots are less our slaves - a projection of the mimetic desire of societies in which slavery was permitted and/or promoted - than a tool for human interaction. This raises questions of privacy and trust, but also of the way we define ourselves as workers in industry, service and entertainment. This concerns different kinds of cultural approaches to robots in Europe and in other cultures, which may have a different impact in a global world. Different cultures have different views on autonomy and human dignity.
