
Neurocomputing 209 (2016) 14–24


Automatic agent generation for IoT-based smart house simulator


Wonsik Lee a, Seoungjae Cho b, Phuong Chu b, Hoang Vu b, Sumi Helal c, Wei Song d, Young-Sik Jeong b, Kyungeun Cho b,*

a Mobile R&D Center, Mobile Communication Company, LG Electronics, Seoul 150-721, Republic of Korea
b Department of Multimedia Engineering, Dongguk University-Seoul, Seoul 04620, Republic of Korea
c Mobile and Pervasive Computing Laboratory, University of Florida, Gainesville, Florida 32611-6120, USA
d College of Computer, North China University of Technology, Beijing 100144, China

Article history: Received 7 February 2015; Received in revised form 1 April 2015; Accepted 27 April 2015; Available online 8 June 2016

Keywords: Virtual environment; Autonomous agent; Ubiquitous computing; GUI tool; Behavior planning

Abstract

In order to evaluate the quality of Internet of Things (IoT) environments in smart houses, large datasets containing interactions between people and ubiquitous environments are essential for hardware and software testing. Both testing and simulation require a substantial amount of time and volunteer resources. Consequently, the ability to simulate these ubiquitous environments has recently increased in importance. In order to create an easy-to-use simulator for designing ubiquitous environments, we propose a simulator and autonomous agent generator that simulates human activity in smart houses. The simulator provides a three-dimensional (3D) graphical user interface (GUI) that enables spatial configuration, along with virtual sensors that simulate actual sensors. In addition, the simulator provides an artificial intelligence agent that automatically interacts with virtual smart houses using a motivation-driven behavior planning method. The virtual sensors are designed to detect the states of the smart house and its living agents. The sensed datasets simulate long-term interaction results for ubiquitous computing researchers, reducing the testing costs associated with smart house architecture evaluation.

© 2016 Elsevier B.V. All rights reserved.

1. Introduction

A smart house provides an intelligent home management interface and a comfortable living environment. Smart houses have recently become important research topics in the Internet of Things (IoT) [1–3]. A variety of IoT-based sensors connected by wireless networks are installed in smart houses, to enhance the life of the home's residents [4–8]. Smart house architectural engineers desire the ability to intuitively configure their smart houses and meet user needs before construction begins. Therefore, reliable, low-cost test beds are required in order to examine the architecture design.

In addition, smart houses must monitor the interaction between users and house components in order to provide appropriate services [9]. Sensors detect various kinds of environmental datasets [10]. However, individual sensors work independently and report simple information. In order to apply reliable ubiquitous computing and detect the living situation, multiple sensors must be mounted [11]. However, hardware reconstruction for this type of interface installation substantially increases the cost.

Therefore, in order to provide a low-cost, effective test bed, we present a simulator that creates a virtual smart house and simulates action recognition in the virtual environment. In the simulator, a virtual smart house is created with independent virtual sensors, which record environmental information, including user location and house temperature [12]. The simulator provides designers with a graphical user interface (GUI) in order to help them arrange house components and sensors. The smart house situation and user state are then estimated using these sensed datasets [13].

In addition, an autonomous agent generator is provided for defining virtual agent behaviors and action effects. Virtual simulation must mimic the real world. Therefore, in order to simulate an actual person within the smart house, we create an intelligent agent, which autonomously executes behavior planning based on various motivations [14]. The interactions between virtual agents and the house are recorded and visualized. After a long-term test, the recorded information is provided and used as a reference by smart house researchers in order to ensure convenient, desirable services.

The rest of this paper is organized as follows. In Section 2, we discuss works related to smart house simulations and behavior planning. Section 3 describes the simulator's structure, sensor-based simulation techniques, and the autonomous agent generator. The proposed simulator's performance is analyzed in Section 4, and Section 5 presents our conclusions.

* Corresponding author. E-mail address: cke@dongguk.edu (K. Cho).

http://dx.doi.org/10.1016/j.neucom.2015.04.130

Fig. 1. Architecture of the proposed simulator.

Table 1. Sensors and functions.

Location-based sensors:
- Motion detection sensor. Smart house application: detects movements in its detection scope while attached to a wall or ceiling. Simulation function: returns 'true' if a movement is detected.
- Pressure sensor. Smart house application: detects pressure in its detection scope while attached to the floor. Simulation function: returns 'true' if a character is inside the detection scope.
- Vibration detection sensor. Smart house application: detects vibration while attached to furniture, such as a bed or chair. Simulation function: returns 'true' if vibration is made by a character in the detection scope.

Physical sensor:
- Temperature sensor. Smart house application: measures room temperature. Simulation function: calculates and returns temperature depending on external environmental variables.

Object sensors:
- RFID tag. Smart house application: sends a signal to an RFID receiver while attached to an object or wall. Simulation function: serves as the target point used to calculate the distance between an RFID tag and receiver.
- RFID receiver. Smart house application: identifies whether an object is being used by receiving signals from an RFID tag, measuring signal strength, and estimating the distance to the tag. Simulation function: measures the distance from an RFID tag to an RFID receiver and returns signal strength.
- Contact detection sensor. Smart house application: verifies opening or closing actions while attached to objects that can be opened and closed, such as a door or window. Simulation function: comprised of a pair of contact detection sensors; returns 'true' when the pair loses contact.

2. Related work

Smart house simulation research aims at generating long-term testing data in order to verify a self-adaptive house architecture. Helal et al. [15] proposed an event-driven simulator, which presents the common elements of a smart house, including house components, sensors, agents, and their interactions. Given the sensing datasets, agents enact behavior planning and interact with the smart house using an event list. Park et al. [16] designed a context-aware simulation system, which allows smart house designers to determine the optimal arrangement of house components and sensors. In this system, context information was generated in order to report the interaction between virtual sensors and users. Using the rule conflicts that are detected in the context information, developers can refine the smart house architecture. However, these studies have executed simulations without using intuitive user interfaces and should be extended to the GUI level.

To reduce the substantial cost of developing smart house systems, Jahromi et al. [17] proposed a multi-purpose smart house simulation system. The system's design enabled virtual sensors and home appliances to operate in two-dimensional (2D) environments. However, the system required a manually defined agent scenario for testing the smart house after the design. Yang et al. [18] executed a simulation that provided a location-based service in a smart house using the time-based Markov model. However, because agents repeat the predefined schedule in this simulation, it is difficult to obtain simulation results from diverse environments.

Nishikawa et al. [19] proposed a ubiquitous application simulator, which provided an intuitive test bed for ubiquitous applications in a virtual environment. The simulator rendered several invisible physical quantities, such as temperature, humidity, electricity, and radio, simply using the GUI module. Lertlakkhanakul et al. [20] proposed an interactive virtual reality platform that simulates the spatial context-aware building data model, living agents, and web services of a smart house. A smart home user can control an avatar agent, which interacts with the virtual smart house, using the context-aware services. These studies allowed house owners to easily comprehend the smart house architectural model, decreasing the design failure ratio before the construction period.

Reliable house construction requires a sufficient long-term interaction testing process. Because a human-controlled avatar is difficult to apply, time consuming, and taxing on volunteer resources, smart house simulation using these simulators requires a scenario that defines basic actions.
In order to generate an automatic virtual agent, behavior-planning research has been implemented based on agent motivation. A motivation is a reason that causes a certain event or action [21]. Motivation has previously been employed in order to prompt virtual agents to select goals [22]. Andriamasinoro [23] applied motivation hierarchy theory and created a social hybrid agent, which is able to execute social activities. According to pyramid motivation layer theory, possible actions are defined as means for achieving multiple goals. However, this mixture of motivations may complicate detailed goals in a specific situation. Munroe and Luck [24] proposed a motivational taxonomy classification system to divide motivation into domain, constraint, and social types. According to neuroscience theory, an autonomous agent can use intrinsic motivation to make decisions. Sevin and Thalmann [25] proposed a motivational model of behavior planning and a hierarchical classifier system in order to group motivations into levels. Singh et al. [26] presented a study of intrinsically motivated reinforcement learning, which generates skill hierarchies using a type of Markov decision process. Merrick [27] extended reinforcement learning to multi-task applications, in which motivation serves as a reward estimation parameter.

Song et al. [28] proposed a multiple sequential learning and prediction system in order to help autonomous agents interact with unknown environments. In the sequence learning process, sensed states are classified according to a set of proposed motivation filters in an effort to reduce learning computation. In the prediction process, the learning agent makes decisions based on each state's estimated cost in attaining a high payoff from the environment [29]. As an extension of this work, we propose an autonomous agent generator that can be used in ubiquitous computing simulations.

3. Simulation system

In this section, we describe the architecture of the proposed simulator, the applied sensor-based simulation techniques, the developed sensor models, and the autonomous agent generator.

3.1. Structure and function

In order to generate smart house environmental data while maintaining low costs, we propose a GUI-based simulator, which allows architectural designers to structure a house using an intuitive user interface. For easy control, the designer arranges house components and sensors using only a mouse. Virtual sensors are similar to actual sensors when detecting simulation information. If a sensor is activated, a special warning message is reported to the observer. In our system, we apply special colors to the detection areas of the stimulated sensors. In addition, we create an autonomous agent equipped with artificial intelligence in order to realize automatic scenario generation. Similar to humans, the artificial intelligence agent plans behavior based on intrinsic motivation. To implement the artificial intelligence of the agent described above, this paper applies a motivation-driven behavior planning system.


Fig. 1 illustrates the simulator's structure, which is separated into the virtual environment editor, the autonomous agent generator, and the simulation core. The virtual environment editor is comprised of the smart house module and simulation environment components. The simulator provides designers with multiple basic device models, including house components, furniture, and sensors. Simulation environment components record information from the house during the simulation, such as spatial and sensor information. The autonomous agent generator automatically generates a simulation scenario based on the character motivation and the information collected by the multiple sensors. Scenario generation is implemented using the simulation core.

The simulator produces two types of outputs: smart house storage and sensed information. Smart house storage includes spatial information concerning the walls, doors, windows, and furniture, along with multiple-sensor spatial information. The user can save a smart house file from the designed model and load it into a 3D environment. The sensed information is a file that records the operation of multiple sensors, including activation states, values, and the operation time measured during agent interaction.
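The paper does not give the exact layout of this file; purely as an illustration (the system itself is built on the Unity3D engine), a minimal Python sketch of such a record stream, with hypothetical field names, might look as follows:

```python
import csv

# Hypothetical record layout for the "sensed information" output:
# one row per sensor reading, with activation state, value, and time.
FIELDS = ["sim_time", "sensor_id", "sensor_type", "activated", "value"]

def log_reading(writer, sim_time, sensor_id, sensor_type, activated, value):
    """Append one sensor observation to the sensed-information file."""
    writer.writerow({"sim_time": sim_time, "sensor_id": sensor_id,
                     "sensor_type": sensor_type, "activated": activated,
                     "value": value})

with open("sensed_information.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    # Example: a motion sensor fires while a temperature sensor idles.
    log_reading(writer, "01:15:06", "motion_03", "motion", True, 1.0)
    log_reading(writer, "01:15:06", "temp_01", "temperature", False, 21.5)
```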
3.2. Multiple-sensor design

In order to provide an accurate test, action detection sensors must be arranged and executed using basic operations within the simulation tool. Therefore, multiple virtual sensors are developed as a means for perceiving environmental events, including those for detecting motion, pressure, vibration, temperature, and contact, along with radio frequency identification (RFID) tags and receivers.

Motion detection sensors identify agent movement. When motion is detected, this indicates that the agent is moving in the detection region. Pressure sensors are arranged in a grid pattern on the floor and identify the agent's location. Vibration detection sensors detect subtle movements, such as tossing or turning in bed. Temperature sensors report the smart house temperature. Contact detection sensors are attached to house components, such as doors, windows, and drawers, and identify whether objects are open or closed. RFID tags are attached to household objects, and the receiver is attached to the agent's hand. These determine whether certain objects are being held by the agent. Table 1 summarizes the types, applications, and functions of the sensors used in the simulator.

Our first sensor, the motion detection sensor, scans a certain region and determines whether an agent is moving inside the sensing area. If agent motion is detected, a motion detection signal is stimulated. This motion detection method is described in Fig. 2. Here, agent component meshes in the sensing range are saved to a motion mesh list. If the current list differs from the previous one, then an agent motion has taken place.

Fig. 2. Motion detection method.
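As an illustration of this mesh-list comparison, consider the following sketch. The per-frame update hook and the query returning mesh identifiers in range are hypothetical stand-ins for the engine's scene queries, and comparing identifier sets is a simplification of the saved mesh lists:

```python
class MotionDetectionSensor:
    """Detects motion by diffing the agent meshes seen in the sensing
    range between consecutive simulation frames (after Fig. 2)."""

    def __init__(self, in_range_fn):
        # in_range_fn() -> iterable of mesh ids currently inside the range
        # (assumed to be supplied by the simulation engine).
        self.in_range_fn = in_range_fn
        self.previous_meshes = set()

    def update(self):
        """Call once per frame; returns 'true' if a movement is detected."""
        current_meshes = set(self.in_range_fn())
        # A change in the in-range mesh list implies agent motion.
        moved = current_meshes != self.previous_meshes
        self.previous_meshes = current_meshes
        return moved
```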
The pressure sensor records the agent's navigational path. It is arranged in a grid on the floor and accurately reports the agent's coordinates using the collision detection method. This method is illustrated in Fig. 3. Pressure activation indicates that the agent is located on the sensor, which reports the agent's coordinates. In our application, a transparent sphere signifies that a pressure sensor is activated.

Fig. 3. Pressure sensor process.
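A minimal sketch of the grid arrangement, assuming square cells of a hypothetical size and a 2D agent position in meters (neither is specified in the paper), is:

```python
# Map an agent floor position to the activated pressure-sensor cell,
# i.e. the collision-based coordinate report of Fig. 3.
CELL_SIZE = 0.5  # assumed meters per pressure-sensor cell

def active_pressure_cell(agent_x, agent_y):
    """Return the (row, col) grid cell whose pressure sensor the agent
    is currently standing on."""
    col = int(agent_x // CELL_SIZE)
    row = int(agent_y // CELL_SIZE)
    return (row, col)

# The activated cell reports the agent's coordinates:
print(active_pressure_cell(2.3, 1.1))  # -> (2, 4)
```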
Vibration detection sensors detect vibration signals and perceive collisions between the agent and an object in the house. In our project, we attach these sensors to objects, such as beds, sofas, and chairs, in order to report the agent's location. The vibration detection sensor process is displayed in Fig. 4.

Fig. 4. Pressure-sensor vibration-triggering process.
RFID detectors measure the distance between two objects. In our study, we use them to detect objects being held by the agent. The RFID tag is attached to an object, and the RFID receiver is attached to the agent's hand. The RFID receiver identifies the RFID tag's location after receiving the signal intensity from the RFID tag.
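Table 1 states that the virtual receiver measures the tag-to-receiver distance and returns a signal strength. A plausible sketch, assuming an inverse-square strength model and a grasp-distance threshold (both are assumptions, not the paper's model), is:

```python
import math

DETECTION_RANGE = 0.3     # assumed grasp distance in meters
REFERENCE_STRENGTH = 1.0  # assumed signal strength at 1 m

def signal_strength(tag_pos, receiver_pos):
    """Distance between a tag and the hand-mounted receiver, converted
    to a synthetic strength (inverse-square model is an assumption)."""
    d = math.dist(tag_pos, receiver_pos)
    return REFERENCE_STRENGTH / max(d, 1e-6) ** 2

def object_held(tag_pos, receiver_pos):
    """'true' when the tagged object is close enough to count as held."""
    return math.dist(tag_pos, receiver_pos) <= DETECTION_RANGE
```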
The smart house temperature sensor continuously records the house temperature in degrees Celsius. Temperature changes as the agent interacts with the smart house. For example, when the agent implements some behavior, such as cooking or sports, the heat source is calculated and the house temperature changes. Three elements affect temperature calculation: an object's heat, environmental variables, and the agent's behavior. This is shown in Fig. 5. For example, the temperature increases when a heater is turned on, the window is closed, or the agent engages in sports.

Fig. 5. Temperature sensor heat calculation model.
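The paper does not publish the heat equation behind Fig. 5; a hedged additive sketch of a per-step update covering the three contributions, with purely illustrative coefficients, is:

```python
def update_temperature(temp, object_heat, window_open, agent_heat,
                       outside_temp=10.0, dt=1.0):
    """One simulation step of a Fig. 5-style heat model (sketch only).

    object_heat: heat from active devices, e.g. a running heater or stove.
    window_open: an open window pulls the room toward outside_temp faster.
    agent_heat:  heat emitted by agent behavior, e.g. sports or cooking.
    The leak rates and magnitudes are assumptions, not the paper's values.
    """
    leak_rate = 0.05 if window_open else 0.01
    temp += dt * (object_heat + agent_heat)         # heat sources
    temp += dt * leak_rate * (outside_temp - temp)  # exchange with outside
    return temp

# Heater on, window closed, agent exercising: the temperature rises.
t = update_temperature(21.5, object_heat=0.3, window_open=False, agent_heat=0.1)
```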
3.3. Autonomous agent generator

In order to provide long-term testing data without the use of an avatar, we develop an autonomous agent generator for the simulator. Each scenario contains various behavior sequences for the virtual agent. This mimics human behavior, in which actions satisfy specific motivations. Therefore, we propose a motivation-driven behavior planning system, which can be seen in Fig. 6.

Fig. 6. Motivation-driven behavior planning system.

The motivation condition table defines the conditions to be satisfied by each invoked motivation. The motivation value calculator updates the values of only those motivations that satisfy their conditions, by referring to the motivation condition table. The motivation extractor extracts one final motivation on the basis of the pyramid motivation layer theory. After the final motivation is identified, the action manipulator determines the action to relieve the relevant motivation by referring to the motivation-action DB.
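Assuming simple dictionary-based tables, one pass of this pipeline (condition table, value calculator, extractor, action manipulator) could be sketched as follows; selection by pyramid level is elaborated in the extraction sketch given with Fig. 8 below:

```python
def plan_next_action(motivations, condition_table, motivation_action_db, state):
    """One pass of the motivation-driven planning pipeline (after Fig. 6).

    motivations:          {name: current value}
    condition_table:      {name: predicate over the sensed state}
    motivation_action_db: {name: action}, standing in for the
                          motivation-action DB
    All data structures here are assumptions for illustration.
    """
    # Motivation value calculator: update only motivations whose
    # condition in the motivation condition table is satisfied.
    for name, condition in condition_table.items():
        if condition(state):
            motivations[name] += 1.0  # assumed growth rate

    # Motivation extractor: choose one final motivation.
    final = max(motivations, key=motivations.get)

    # Action manipulator: look up the action that relieves it.
    return final, motivation_action_db[final]
```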
An instance of the prior motivation extraction process is depicted in Fig. 7. When a motivation condition is activated, a relevant motivation is extracted. To maintain the balance of motivation variables, the agent must respond using an adaptive action in order to satisfy the extracted motivation. Agent motivations are initialized as set M, and the motivation value of each motivation in M is updated during simulation. A motivation list, represented by set MS, is then extracted based on the activated motivation conditions, and the motivations are classified by level. For example, the initial motivation set includes hunger, thirst, and the need to study. The hunger and thirst motivations are grouped in level 5, and the need-to-study motivation resides in level 2. The physiological motivations in level 5 have higher priority than the motivations contained in level 2. Finally, the prior motivation FM is selected according to the motivation value. Fig. 8 represents the prior motivation selection algorithm, which formalizes the above process.

Fig. 7. Prior motivation extraction process example.

Fig. 8. Prior motivation extraction algorithm.
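One hedged reading of this algorithm, using the set M, the activated list MS, and level-based priority from the hunger/thirst example (assuming that higher level numbers outrank lower ones), is:

```python
def extract_prior_motivation(M, levels, activated):
    """Select the prior motivation FM (sketch of the Fig. 8 algorithm).

    M:         {motivation: value}, the agent's motivation set
    levels:    {motivation: pyramid level}, e.g. physiological needs = 5
    activated: names whose motivation condition is currently active
    """
    MS = [m for m in M if m in activated]  # activated motivation list
    if not MS:
        return None
    top_level = max(levels[m] for m in MS)           # highest-priority level
    candidates = [m for m in MS if levels[m] == top_level]
    return max(candidates, key=lambda m: M[m])       # FM by motivation value

# Hunger and thirst (level 5) outrank the need to study (level 2):
M = {"hunger": 0.7, "thirst": 0.4, "study": 0.9}
levels = {"hunger": 5, "thirst": 5, "study": 2}
print(extract_prior_motivation(M, levels, {"hunger", "thirst", "study"}))
# -> hunger
```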
4. Experiments and analysis

In this section, we analyze the performance of the multiple-sensor operation and the automatic scenario generator in a developed virtual smart house. The proposed methods were implemented with the Unity3D engine on a computer with an Intel i7 CPU, 3 GB RAM, and an NVidia GTX 275 GPU. The methods provide real-time processing performance at a minimum of 59.7 fps (frames per second), or 65.1 fps on average.

4.1. Smart house configuration experiment

Our simulator provides multiple GUI operation functions that can be used in the smart house configuration and are controlled by a mouse. Fig. 9 exhibits the GUI functions and the house display views. The left window shows a top view of the constructed house, and the right window captures a free view. Using the GUI buttons in the right window, the designer can select an object, such as a door or window, and position it in the house by clicking and dragging the mouse.

Fig. 9. GUI design for house element selection.

The simulator provides basic home furnishings, including a bed, desk, table, stove, and refrigerator, along with household objects, such as a telephone, spoons, cups, and bowls. Fig. 10 illustrates furniture and object arrangement. After component selection is complete, the designer specifies component parameters. For example, a chair can be defined as round or square.

Fig. 10. Furniture and object arrangement.

Virtual sensors recognize agent actions and states, and the structure and operation of the sensors inside the smart house are visualized, as in Fig. 11. Sensors are also arranged using a mouse. When the designer clicks on an empty space, a pop-up menu, which includes "arrange a new sensor," is displayed along with a list of sensors. The designer selects a sensor and specifies its attributes, such as its name and detection range. The size and color of the sensing range change in order to indicate these detection results.

Fig. 11. Sensor arrangement.

4.2. Performance of multiple sensor operation

The simulator provides six types of virtual sensors, including sensors that detect motion, pressure, vibration, temperature, and contact, along with RFID tags and receivers. These sensors work independently during the simulation, and this section analyzes the working performance of each sensor type.
The motion detection sensor activates when an agent is present and takes some action. To assess the motion detection results, we used green to represent a sensing region that was not activated. This can be seen in Fig. 12(a). If the agent moved, the sensor detection region was colored red, as shown in Fig. 12(b).

Fig. 12. Action detection sensor states: (a) static state, (b) active state.

Pressure detection sensors were arranged on the floor in order to determine when an agent is walking. Fig. 13 illustrates the activation of these sensors. If the agent moved onto a pressure detection sensor, the detection region changed from green to red.

Fig. 13. Pressure detection using pressure detection sensors.

In order to detect collisions between the agent and house components, we attached vibration detection sensors to objects in the house. When the agent touched one of these objects, the sensor color became red, signifying the collision signal. Fig. 14 exhibits the simulation results when vibration detection sensors were applied.

Fig. 14. Sofa and bed vibration detection using vibration detection sensors.
An RFID receiver was set on the agent's right hand, and RFID tags were attached to objects, such as a spoon, bowl, or stove, in order to recognize touching actions between the agent and objects. When the agent's hand entered an RFID tag detection region, the receiver returned a touching signal and the tag's identification number. Fig. 15 displays the RFID tag and receiver activation results.

Fig. 15. Heat detection using RFID tags and receivers.

In order to determine whether a door was open, a pair of contact detection sensors was set on the door and doorframe. Fig. 16 shows the contact detection sensor activation results. If the door was open, the sensors appeared red; if closed, they were green.

Fig. 16. Contact detection using contact detection sensors.

The environmental temperature was detected using a heat sensor. Updating the house temperature is initiated by the agent when certain actions are taken. These actions include heating objects and changing the states of environmental elements.

The system described in this paper uses the recognition scope of each sensor for managing multiple sensor operation. CASS [13], a similar system, can apply a variety of sensors in the same manner as this system. However, it does not consider the recognition scope of each sensor. Furthermore, UbiREAL [16] and V-PlaceLab [17] perform modeling only when sensors are embedded in specific devices; thus, they cannot simulate individual sensor operations. Unlike the previous research presented above, this system can execute a simulation that is similar to the actual environment, using a variety of sensors.

4.3. Performance of autonomous agent generator

In addition to the proposed simulator, we present a motivation-driven agent, which can autonomously plan behavior in the simulator. If the agent extracts a prior motivation, the resulting solution is autonomously decided. Table 2 presents the motivation-action mapping of motive-based characteristics used in the experiment. Four motivations were applied: sleepiness, hunger, fatigue, and self-realization. Table 2 also provides the solution actions, which independently satisfy the motivations. The motivation value of Sleepiness executes "Climb onto the bed" when it is lower than a specific threshold and "Sleep on the bed" when it is higher than the threshold. Other motivations are mapped to one action each.

Table 2. Motivation-action mapping table.

- Sleepiness (low value): climb onto the bed
- Sleepiness (high value): sleep on the bed
- Hunger: dine at the table
- Fatigue: rest on the sofa
- Self-realization: cook; engage in sports
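Table 2's mapping, including the two-way threshold on sleepiness, can be expressed directly; the threshold value below is an assumption, as the paper states only that a specific threshold exists:

```python
SLEEPINESS_THRESHOLD = 0.5  # assumed; the paper does not give the value

def action_for(motivation, value):
    """Resolve a prior motivation to its Table 2 action."""
    if motivation == "sleepiness":
        if value < SLEEPINESS_THRESHOLD:
            return "climb onto the bed"   # low value
        return "sleep on the bed"         # high value
    table = {
        "hunger": "dine at the table",
        "fatigue": "rest on the sofa",
        # Self-realization is listed with two actions in Table 2;
        # either may be chosen when it becomes the prior motivation.
        "self-realization": ["cook", "engage in sports"],
    }
    return table[motivation]
```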
Motivation condition activation increases the motivation value. Fig. 17 is a motivation variation graph of data recorded in a one-day simulation. For example, the motivation value of hunger increased at 08:00. When the agent dined at the table, the motivation was satisfied and the value decreased. When motivation conditions activated other motivations, the relevant motivation values increased. If the agent executed a relative action in order to satisfy the motivation, the motivation value decreased.

Fig. 17. Motivation value variation produced by simulation implementation.

Table 3 lists the prior motivation selections and scenario generation at differing times, which were triggered when the motivation-driven agent interacted with the smart house. Fig. 18 shows the simulation results of the behavior generated by the agent in the smart house.

Table 3. Motivation selection and behavior generation.

- 01:15:06 / Sleepiness (low value) / Climb on the bed
- 07:00:06 / Self-realization / Engage in sports at the bed
- 08:00:00 / Hunger / Dine at the table
- 08:32:30 / Self-realization / Cook
- 12:40:00 / Fatigue / Rest on the sofa
- 13:09:00 / Hunger / Dine at the table
- 13:44:30 / Self-realization / Engage in sports
- 17:50:00 / Fatigue / Rest on the sofa
- 18:19:00 / Self-realization / Cook
- 20:00:00 / Hunger / Dine at the table
- 20:21:30 / Self-realization / Engage in sports
- 23:00:00 / Sleepiness (high value) / Sleep on the bed

Fig. 18. Scenario representation of motivation-driven agent.

This system automatically plans the actions of the agent. UbiREAL [16] executes the simulation according to a test sequence predefined by the user, and an agent performs the actions defined in the sequence. For V-PlaceSims [17], a user directly selects the target actions of an agent. Unlike the systems described in previous research, the system proposed in this paper does not require processes that define the actions of an agent, because it automatically plans the agent's actions using motivation.

5. Conclusions

In this paper, we presented a simulator that provides a 3D smart house configuration tool and an autonomous agent generator. Using the configuration tool and intuitive GUI operations, smart house designers arrange house components and sensors. In order to monitor environmental information and a virtual agent's states, we proposed multiple virtual sensors, which report information similar to that sensed in an actual environment. The entire simulation process was monitored, and the simulator was used to verify the smart house structure and confirm that the sensors were properly arranged. Furthermore, in order to simulate a human-like virtual agent, we proposed a motivation-based behavior planning method, which ensures safe and effective interaction with the smart house. We simulated the proposed methods in a virtual smart house, and experimentation verified the multiple-sensor performance and illustrated the autonomous agent generator performance. Moreover, the proposed simulator reduces smart house testing costs and enables autonomous agent generation. Smart house researchers can then use long-term simulated information to provide convenient services for the smart house, using the sensors connected within the IoT environment.
Acknowledgments

This research was supported by the MSIP (Ministry of Science, ICT and Future Planning), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2016-H8501-16-1014) supervised by the IITP (Institute for Information & communications Technology Promotion).

References

[1] J.C. Augusto, V. Callaghan, D. Cook, A. Kameas, I. Satoh, Intelligent environments: a manifesto, Human-centric Comput. Inf. Sci. 3 (12) (2013) 1–18.
[2] J. Vanus, P. Kucera, R. Martinek, J. Koziorek, Development and testing of a visualization application software, implemented with wireless control system in smart home care, Human-centric Comput. Inf. Sci. 4 (18) (2014) 1–19.
[3] E.-J. Lee, C.-H. Kim, I.Y. Jung, An intelligent green service in Internet of Things, J. Converg. 5 (3) (2014) 4–8.
[4] S.-K. Bae, Power consumption analysis of prominent time synchronization protocols for wireless sensor networks, J. Inf. Process. Syst. 10 (2) (2014) 300–313.
[5] Q. Zhu, R. Wang, Q. Chen, Y. Liu, W. Qin, IOT gateway: bridging wireless sensor networks into Internet of Things, Embedded and Ubiquitous Computing (EUC), 2010 IEEE/IFIP 8th International Conference on, 2010, pp. 347–352.
[6] S.D.T. Kelly, N.K. Suryadevara, S.C. Mukhopadhyay, Towards the implementation of IoT for environmental condition monitoring in homes, Sens. J. IEEE 13 (10) (2013) 3846–3853.
[7] M.V. Moreno, J. Santa, M.A. Zamora, A.F. Skarmeta, A holistic IoT-based management platform for smart environments, Communications (ICC), 2014 IEEE International Conference on, 2014, pp. 3823–3828.
[8] T. Perumal, M.N. Sulaiman, N. Mustapha, A. Shahi, R. Thinaharan, Proactive architecture for Internet of Things (IoT) management in smart homes, Consumer Electronics (GCCE), 2014 IEEE 3rd Global Conference on, 2014, pp. 16–17.
[9] K. Bouchard, A. Ajroud, B. Bouchard, A. Bouzouane, SIMACT: a 3D open source smart home simulator for activity recognition, Lect. Notes Comput. Sci. 6059 (2010) 524–533.
[10] P. Nurmi, P. Floréen, M. Przybilski, G. Lindén, A framework for distributed activity recognition in ubiquitous systems, Proceedings of the International Conference on Artificial Intelligence, 2005, pp. 650–655.
[11] H. Lieberman, T. Selker, Out of context: computer systems that adapt to, and learn from, context, IBM Syst. J. 39 (2000) 617–632.
[12] S. Jang, C. Shin, Y. Oh, W. Woo, Introduction of 'ubiHome' testbed, Technical Reports, IPSJ SIG, 2005, pp. 215–218.
[13] H. Prendinger, B. Brandherm, S. Ullrich, Simulation framework for sensor-based systems in Second Life, MIT Press J. 18 (2009) 468–477.
[14] W.J. Clancey, Simulating activities: relating motives, deliberation, and attentive coordination, Cogn. Syst. Res. 3 (3) (2002) 471–499.
[15] S. Helal, J.W. Lee, S. Hossain, E. Kim, H. Hagras, D. Cook, Persim – simulator for human activities in pervasive spaces, Proceedings of the 7th International Conference on Intelligent Environments, 2011, pp. 192–199.
[16] J. Park, M. Moon, S. Hwang, K. Yeom, CASS: a context-aware simulation system for smart home, 5th ACIS International Conference on Software Engineering Research, Management & Applications, 2007, pp. 461–467.
[17] Z.F. Jahromi, A. Rajabzadeh, A.R. Manashty, A multi-purpose scenario-based simulator for smart house environments, (IJCSIS) Int. J. Comput. Sci. Inf. Secur. 9 (1) (2011) 13–18.
[18] Y. Yang, Z. Wang, Q. Zhang, Y. Yang, A time based Markov model for automatic position-dependent services in smart home, Control Decis. Conf. (CCDC) (2010) 2771–2776.
[19] H. Nishikawa, S. Yamamoto, M. Tamai, K. Nishigaki, T. Kitani, N. Shibata, K. Yasumoto, M. Ito, UbiREAL: realistic smartspace simulator for systematic testing, Lect. Notes Comput. Sci. 4206 (2006) 459–476.
[20] J. Lertlakkhanakul, J.W. Choi, M.Y. Kim, Building data model and simulation platform for spatial interaction management in smart home, Autom. Constr. 17 (2008) 948–957.
[21] W. Song, K. Um, K. Cho, Motivation based behavior sequence learning for an autonomous agent in virtual reality, J. Korea Multimed. Soc. 12 (12) (2009) 1819–1826.
[22] H. Lee, H. Kim, J. Seo, An integrated neural network model for domain action determination in goal-oriented dialogues, J. Inf. Process. Syst. 9 (2) (2013) 259–270.
[23] F. Andriamasinoro, R. Courdier, Integration of generic motivations in social hybrid agents, Lect. Notes Comput. Sci. 2934 (2004) 281–300.
[24] S. Munroe, M. Luck, Agent autonomy through the 3M motivational taxonomy, Lect. Notes Comput. Sci. 2969 (2004) 55–67.
[25] D. Sevin, D. Thalmann, A motivational model of action selection for virtual humans, Comput. Graph. Int. (2005) 213–220.
[26] S. Singh, A.G. Barto, N. Chentanez, Intrinsically motivated reinforcement learning, Proc. Adv. Neural Inf. Process. Syst. (2005).
[27] K. Merrick, Modelling Motivation for Experience-Based Attention Focus in Reinforcement Learning, PhD Thesis, University of Sydney, 2007.
[28] W. Song, K. Cho, K. Um, Multiple behaviors learning and prediction in unknown environment, J. Korea Game Soc. 13 (12) (2010) 1820–1831.
[29] J. Kim, J. Byun, H. Jeong, Cloud AEHS: advanced learning system using user preferences, J. Converg. 4 (3) (2013) 31–36.
Wonsik Lee received his B. Eng. degree in Multimedia Engineering in 2010 and his M. Eng. degree in Multimedia Engineering in 2012, both from Dongguk University, Seoul, Republic of Korea. He has done many works mainly associated with 3D simulation applications in the military and artificial intelligence areas, such as remote terrain visualization and sensor simulation. He is currently affiliated with LG Electronics Inc.

Seoungjae Cho received his B. Eng. degree in Multimedia Engineering in 2012 from Dongguk University, Seoul, Republic of Korea. Since March 2012, he has been in the integrated M. Eng.–Ph.D. Eng. course at the Department of Multimedia Engineering, Dongguk University, Seoul, Republic of Korea. He has done many works mainly associated with 3D simulation applications in the military, human-computer interaction, and artificial intelligence areas, such as remote terrain visualization and sensor simulation, brain-computer interface games, and robot learning. His current research interests are focused on the areas of sensor simulation applications using 3D technology and NUI (Natural User Interface) utilizing various NUI devices.

Phuong Chu received his Bachelor degree in Information Technology in 2011 from Le Quy Don Technical University, Hanoi, Vietnam. After that, he worked at the Institute of Simulation Technology at the same university. Since September 2014, he has been in the M. Eng. course at the Department of Multimedia Engineering, Dongguk University, Seoul, Republic of Korea. He has done many works mainly associated with 3D simulation applications in the artificial intelligence and human-computer interaction areas, such as Q-learning for virtual robots.

Hoang Vu received his Bachelor degree in Mathematics-Applied Informatics in 2005 from Vietnam National University, Hanoi, Vietnam. After that, he worked at the Institute of Simulation Technology at Le Quy Don Technical University, Hanoi, Vietnam. Since September 2014, he has been in the M. Eng. course at the Department of Multimedia Engineering, Dongguk University, Seoul, Republic of Korea. He has done many works mainly associated with 3D simulation applications in artificial intelligence areas.

Sumi Helal is a Professor at the Computer and Information Science and Engineering Department (CISE) at the University of Florida (UF), USA, and a Finland Distinguished Professor (FiDiPro) at Aalto University and the EIT ICT Labs, Finland. He is a pioneer and a recognized leader in the fields of Mobile, Pervasive and Ubiquitous Computing. He is well known for his interdisciplinary research on smart spaces and health telematics in support of Health Care and Aging, Disabilities and Independence (ADI). He directs the Mobile and Pervasive Computing Laboratory in the CISE department at UF. He is co-founder and director of the Gator Tech Smart House, an experimental facility for applied research development and validation in the domains of elder care and health telematics.

Wei Song has been a full lecturer at the Department of Digital Media at the North China University of Technology (NCUT), Beijing, China, since July 2013. He received his B. Eng. degree in Software Engineering from Northeastern University, Shenyang, China, in 2005, and his M. Eng. and Dr. Eng. degrees in Multimedia (Major of Computer Game Production) from Dongguk University, Seoul, Korea, in 2008 and 2013, respectively. Since September 2013, he has been the director of the interactive studio of NCUT. His current research interests are focused on the areas of mixed reality, NUI, interactive information visualization, image processing, computer graphics, pattern recognition, 3D reconstruction, mobile robots, network games, and other multimedia technologies.

Young-Sik Jeong is a professor in the Department of Multimedia Engineering at Dongguk University in Korea. His research interests include multimedia cloud computing, mobile computing, ubiquitous sensor networks (USN), and USN middleware. He received his B.S. degree in Mathematics and his M.S. and Ph.D. degrees in Computer Science and Engineering from Korea University in Seoul, Korea in 1987, 1989, and 1993, respectively.

Kyungeun Cho has been a full professor at the Department of Multimedia Engineering at Dongguk University in Seoul, Republic of Korea, since September 2003. She received her B. Eng. degree in Computer Science in 1993, and her M. Eng. and Dr. Eng. degrees in Computer Engineering in 1995 and 2001, respectively, all from Dongguk University, Seoul, Korea. During 1997–1998 she was a research assistant at the Institute for Social Medicine at Regensburg University, Germany, and a visiting researcher at the FORWISS Institute at TU-Muenchen University, Germany. Her current research interests are focused on the areas of intelligence of robots and virtual characters and real-time computer graphics technologies. She has led a number of projects on robotics and game engines and has also published many technical papers in these areas.
