
I think that currently, and in the very near future, it is not yet possible for robots to
replicate human emotions. Firstly, to replicate something, the expression being copied
must be well understood. However, at this level of science, we humans are not at
the stage of being able to express human emotions in terms of data or numbers. As
discussed in the lecture, laughing can be distinguished clearly from coughing, but fake
laughter may be very difficult to differentiate from real, heartfelt laughter. This
example is particularly enlightening about the situation because in real human
interactions fake laughter is ubiquitous. We sometimes fake a laugh to be polite,
sometimes to mask our actual feelings. As such, with current technology, I do not
think it is possible for us to program a robot so that it can understand and
replicate human emotions after observing many samples.
Secondly, there is one more big hurdle for robots to understand and replicate
human emotions: the differences among humans in culture and background. For
example, in some cultures it is perfectly acceptable to be very expressive of one's
emotions, while in others it is better to be less expressive about one's
feelings. I think that with current AI technology, it is not yet possible for robots to
understand these specific and detailed interactions. In contrast, I am
more optimistic about robots being able to replicate human emotions.
While it is undeniable that it is very difficult for engineers to imitate the very
complex nature of skin, I think this is one part that can be achieved in the near
future. In other words, while I think it is close to impossible for robots to
understand us, once they do understand, I think it will be fairly easy for them to
re-apply and replicate human emotions.
The three elements are having multimodal communication, understanding
conscious and unconscious communication, and being adaptive. There are many
reasons why the robot needs to be multimodal. Firstly, the interaction will not feel
real or convincing if the robot can only understand commands from a switch. To
simulate the feeling of a real conversation, the robot must be able to receive
communication cues through sound as well as sight, because it is very common
for humans to convey certain meanings using hand gestures. Secondly, the robot
must be able to understand conscious and unconscious communication. It is well
known that humans may not necessarily say what they really think or feel. For
example, an angry person may try to hide his or her anger in public, but a
perceptive observer might still discern the anger from eye movements, tone, or
speaking rhythm. The field of body language is dedicated to studying exactly this.
It is not very realistic to teach or program robots to pick up these
hidden cues the way a human being would. Instead, equipped with infrared and
other sensors, robots are better suited to detecting the physiological
changes that accompany a change in emotion. For example, an angry person
will have a higher heart rate, a higher core temperature, and a faster breathing rate. The
last of the three elements is being adaptive. It is definitely impossible for
engineers or programmers to code and instruct replies for every possible social
interaction. Currently, there are several AI bots that have the
ability to give you a fixed set of replies. However, everyone who has come into contact
with such bots will agree that they feel unnatural and that the fixed responses are very
far from actual human-human interaction. The ideal situation would be a bot that
has the ability to adapt according to the flow of the conversation and give
appropriate replies. Additionally, the bot should have the ability to rectify the
situation if there is any error. For example, if what the robot said previously
offended the other party, it should be able to detect this and apologize.
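Two of the elements above can be sketched together in code: detecting a likely emotional state from physiological readings, and adapting the reply when the conversation may have gone wrong. This is a minimal illustrative sketch, not a real emotion-recognition system; the baseline values, thresholds, and replies are all assumptions made up for the example.

```python
# Illustrative sketch only: thresholds and replies are assumed values,
# not clinically validated or drawn from the lecture.

def looks_angry(heart_rate, core_temp, breaths_per_min,
                baseline_hr=70, baseline_temp=36.8, baseline_breaths=14):
    """Flag likely anger when all three signals rise above a resting baseline."""
    return (heart_rate > baseline_hr * 1.2 and        # ~20% above resting pulse
            core_temp > baseline_temp + 0.3 and       # slight temperature rise
            breaths_per_min > baseline_breaths * 1.3)  # noticeably faster breathing

def choose_reply(heart_rate, core_temp, breaths_per_min):
    """Adapt the reply to the detected state instead of using a fixed script."""
    if looks_angry(heart_rate, core_temp, breaths_per_min):
        # Rectify the situation: the previous remark may have offended.
        return "I am sorry if what I said upset you."
    return "Could you tell me more about that?"

print(choose_reply(95, 37.3, 20))  # all signals elevated -> apology
print(choose_reply(72, 36.9, 15))  # near baseline -> normal reply
```

A real system would of course need learned models rather than hand-picked thresholds, but the sketch shows the shape of the idea: the sensors supply the hidden cues, and the reply logic adapts to them.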
A very valuable thing I learned from this lecture is the current stage of AI
technology. Occasionally, we see exaggerated news about how AI is going to replace
jobs or how AI makes artists obsolete. However, from the lecture I learned that AI is
far from being able to cause such effects. Also, this kind of news makes people
mistrust AI, while in reality I feel that the development of AI technology will lead to
more benefits for humankind. Additionally, it was very interesting to see how we
can calculate and formulate things that are seemingly abstract such as music or
dance.
