
Human-Robot Interaction in Unstructured Environments
Progress Report
May 13, 2013
Kevin DeMarco

Pattern Recognition and Machine Learning

As I've been reading through the literature on Hidden Markov Models and pattern recognition, I've found that I needed more background in the topic. Thus, I began making my way through Christopher Bishop's book, Pattern Recognition and Machine Learning. I'm maintaining a LaTeX document that I use for deriving equations that are not completely described in his book. I've included my notes for the first chapter in this progress report. I'm going to be moving through this book in parallel with my other research throughout the summer.
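As a representative example of the flavor of those notes (this particular identity is standard chapter 1 material, written out here for illustration rather than copied from the notes), Bayes' theorem follows directly from the sum and product rules of probability:

    \[
      p(Y \mid X) = \frac{p(X \mid Y)\, p(Y)}{p(X)},
      \qquad
      p(X) = \sum_{Y} p(X \mid Y)\, p(Y)
    \]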

MORSE Simulation
I've managed to import the Turtlebot / Turtlebot 2 3D model into Blender and equip it with a Kinect sensor model for object detection. Over the next week I will be going through the ROS Navigation Tutorials in order to become better acquainted with the ROS autonomy module. Figure 1 shows the Turtlebot 2 in the MORSE simulation.
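For reference, a minimal MORSE builder script for this kind of setup might look like the sketch below. The 'turtlebot2' model name and the mounting offset are placeholders for my hand-imported model, and the exact builder method names can vary between MORSE versions:

    from morse.builder import *

    # Load the robot model (placeholder name for the hand-imported
    # Turtlebot 2 blend file; MORSE's stock robots could stand in).
    robot = Robot('turtlebot2')

    # Velocity-based motion actuator, commanded over ROS.
    motion = MotionVW()
    robot.append(motion)
    motion.add_interface('ros')

    # Simulated Kinect (color + depth composite sensor).
    kinect = Kinect()
    kinect.translate(x=0.1, z=0.3)  # illustrative mounting offset
    robot.append(kinect)
    kinect.add_interface('ros')

    # Indoor test scene shipped with MORSE.
    env = Environment('indoors-1/indoor-1')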

Figure 1: Turtlebot 2 in MORSE Simulator

The output of the simulated Kinect sensor is sent to ROS, where it can be visualized in RVIZ. Figure 2 shows the output of the Kinect sensor's PointCloud. In the PointCloud, the chair and desk are the most recognizable objects. I did notice a significant decrease in performance when I enabled the simulated Kinect sensor. My main goal in this simulation is showing that I can autonomously predict the plan of a human agent based on the human's trajectory and environmental markers (doors, roads, obstacles, etc.). I believe that I will be able to simulate this process without having to use the full Kinect model in order to improve run-time performance. After the basic algorithms are proven in MORSE, I will be able to test the full system on an actual Turtlebot 2.

Figure 2: Kinect PointCloud in RVIZ
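Since this prediction problem ties directly into the HMM reading above, a minimal sketch of the intended inference step is shown below, using the standard HMM forward algorithm to filter a belief over the human's goal. All state names, matrices, and probabilities here are illustrative placeholders, not values from the actual system:

    import numpy as np

    # Hypothetical setup: hidden states are candidate goals; observations
    # are discretized trajectory cues relative to environmental markers.
    states = ['door', 'desk', 'exit']
    obs_symbols = ['toward_door', 'toward_desk', 'other']

    # Transition matrix A[i, j] = p(goal j at t | goal i at t-1);
    # goals are sticky, so the diagonal dominates.
    A = np.array([[0.90, 0.05, 0.05],
                  [0.05, 0.90, 0.05],
                  [0.05, 0.05, 0.90]])

    # Emission matrix B[i, k] = p(observation k | goal i).
    B = np.array([[0.70, 0.15, 0.15],
                  [0.15, 0.70, 0.15],
                  [0.20, 0.20, 0.60]])

    pi = np.array([1/3, 1/3, 1/3])  # uniform prior over goals

    def filter_goal(observations):
        """HMM forward algorithm: filtered belief over the human's goal."""
        alpha = pi * B[:, observations[0]]
        alpha /= alpha.sum()
        for o in observations[1:]:
            alpha = (alpha @ A) * B[:, o]
            alpha /= alpha.sum()  # normalize to keep a proper distribution
        return alpha

    # Example: trajectory cues consistently point toward the door.
    obs = [obs_symbols.index(o) for o in
           ['toward_door', 'toward_door', 'other', 'toward_door']]
    print(dict(zip(states, filter_goal(obs).round(3))))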

Other
I also attached a PowerPoint presentation that I delivered to DARPA and SAIC last week in order to address a major issue with testing autonomy modules. The main issue is that most autonomy modules are composed of independent processes that are not synchronized and that communicate through a publish-and-subscribe architecture, so the simulator needs to throttle the data input to the autonomy module. If the simulator floods the autonomy module with data, it will not behave deterministically due to CPU load. The solution is to define a standard interface that allows the simulator to control the autonomy module in pseudo-lockstep.
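The slides describe the protocol rather than an implementation, but a minimal sketch of such a pseudo-lockstep interface might look like this (class and method names are hypothetical):

    import queue
    import threading

    # Hypothetical pseudo-lockstep channel: the simulator hands the
    # autonomy module exactly one batch of sensor data per step and
    # blocks until the module acknowledges it has finished processing,
    # so CPU load can no longer change what data the module sees.
    class LockstepChannel:
        def __init__(self):
            self._data = queue.Queue(maxsize=1)  # one in-flight batch
            self._ack = threading.Event()

        # --- simulator side ---
        def step(self, sensor_batch):
            self._ack.clear()
            self._data.put(sensor_batch)  # deliver this step's data
            self._ack.wait()              # throttle: wait for the ack

        # --- autonomy-module side ---
        def receive(self):
            return self._data.get()

        def acknowledge(self):
            self._ack.set()

    def autonomy_loop(channel, process):
        while True:
            batch = channel.receive()
            process(batch)             # run one deliberation cycle
            channel.acknowledge()      # let the simulator advance

The simulator thread then calls channel.step(batch) once per simulation tick, so the autonomy module advances in lockstep with the simulated sensor data regardless of how long each deliberation cycle takes.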
