Mobile Robot Lab 2


ME 598A: Introduction to Robotics Spring 2009

Stevens Institute of Technology Mobile Robot Lab 2

ME 598A: Mobile Robot Lab 2


Due: Monday, April 13, 2009 @ beginning of class (6:15pm)

Part 1. Color Detection Code and Camera Setup

a) Download and Install Color Detection files

Download colorDetection.zip from the class website. Extract the files to the computer hard
drive and place the following files into the CreateToolbox directory:

colorDetect.m
getHSVColorFromDirectory.m
selectPixelsAndGetHSV.m

Create a directory called TrainingImages on the computer and place the balls.jpg file in this
directory. Note this directory location.

b) Modify colorDetect.m to output image

Open up the colorDetect.m function in the Matlab Editor. Modify it so it will output the image I
back to the main program that calls it by making the following changes:

Original:
function colorDetectHSV(fileName, hsvVal, tol)

Modified:
function I = colorDetectHSVimage(fileName, hsvVal, tol)

Save the modified code as a new function called colorDetectHSVimage.m

Note: you can modify this function so that it does not display the images every time it is
called by commenting out the last two lines. This will help speed up processing for
vision-based navigation tasks.

c) Test Code

Read the documentation.html file included in the zip file for instructions, and run the
example.m code that you also downloaded. You will need to make the following changes in
order to run it:


Replace 'train' with the directory path for the TrainingImages directory:
HSV = getHSVColorFromDirectory('train');

Modify the function call to use the newly created function:


Original:
colorDetectHSV('test/face01.jpg', median(HSV), [0.05 0.05 0.2]);

Modified:
CD = colorDetectHSVimage('balls.jpg', median(HSV), [0.05]);

Note: You need to include the proper directory path for the balls.jpg file or you will get an error.

Experiment with changing the dimension and values of the tolerance vector, as well as
searching for the different colored balls.
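For example, a minimal sketch (assuming HSV has already been built from the TrainingImages directory and balls.jpg is on the current path) comparing a single shared tolerance against per-channel tolerances:

```matlab
% One tolerance value applied to all three HSV channels:
CD1 = colorDetectHSVimage('balls.jpg', median(HSV), [0.05]);

% Separate tolerances for H, S, and V (looser on V):
CD2 = colorDetectHSVimage('balls.jpg', median(HSV), [0.05 0.05 0.2]);
```

A looser V tolerance can help when lighting varies across the image.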

d) Camera Setup

Plug your Communications Box into the computer.

Configure TCP/IP communications:

Go to Control Panel → Network Connections
Right click on Hawking wireless network/wireless network 2
Click Properties
Click on TCP/IP
Click the Properties button on the window after selecting TCP/IP
Select Use The Following IP Address


Type in your IP address 192.168.2.XX (use any number from 15 to 255 for XX and do not
reuse numbers):
o Robot 1: XX = 21
o Robot 2: XX = 22
o Robot 3: XX = 23
o Robot 4: XX = 24
o Robot 5: XX = 25
o Robot 6: XX = 26
o Robot 7: XX = 27
o Robot 8: XX = 28
The subnet mask must be 255.255.255.0
The gateway should be 0.0.0.0
Click OK on everything and close all windows

Connect to the wireless internet camera:

Plug in the power cord from the camera battery to power up your camera.
Under Network Connections, right-click on Hawking wireless network/wireless network 2
and select View Available Wireless Networks
Click on the wireless network for your camera (i.e., Robot 1 = Camera 1, etc.) and select
Connect.


e) Grab Image in Matlab

To grab an image and display it in Matlab, use the following code with the appropriate
camera IP address. The IP address will be 192.168.2.Y, with Y = camera number. This
code reads in an image from camera 7 and displays it in a figure window.

I = imread('http://192.168.2.7/image.jpg');
figure(1)
imshow(I)

Part 2. Train Color Detector

a) Create training data for vision-based navigation tasks

In this lab you will have to execute a series of vision-based navigation tasks. To do this you need
to detect orange cones in the workspace of the robot. The first step for this is to create a set of
training images so that you can get HSV values for the orange cones to use in the color detection
program from Part 1.

Place some orange cones in the field of view (FOV) of the camera that is on the robot and grab
images of them into Matlab. Place some cones close and some far away, and grab some images
with more than one cone in the FOV at a time. Create at least 5 different training images. The
more training data you acquire, the better your color detection system will be.

Once you have grabbed an image into Matlab and have it displayed in a figure window, you
need to save it to the computer. Make sure that the figure window of the image you want to
save is active by clicking on it, then execute the following command:

saveas(gcf,'Image1','jpg')

This will save the image in the current figure as a file called Image1.jpg in the current directory.
Repeat this process for all training images, making sure to change the name of the image each
time so you don't overwrite any images you have already saved.
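The grab-and-save steps above can be combined into a short loop (a sketch: the camera URL and the number of images are placeholders to change for your setup):

```matlab
camURL  = 'http://192.168.2.7/image.jpg';  % replace 7 with your camera number
nImages = 5;                               % at least 5 training images

for k = 1:nImages
    input('Reposition the cones, then press Enter to grab an image...', 's');
    I = imread(camURL);                    % grab a frame from the camera
    figure(1); imshow(I);
    saveas(gcf, sprintf('Image%d', k), 'jpg');  % Image1.jpg, Image2.jpg, ...
end
```

The sprintf call increments the file name automatically, so nothing gets overwritten.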

Move all these training images into a directory called OrangeConeTrainingImages. You can also
use some of the images in the ConeImages.zip file on the class website. You will get the best
results from images taken in the same environment that you are running the robot in.


b) Get HSV data for training data set

Modify your example.m file to get the HSV data values for the orange cones (you can also refer
to the Lecture8.m file on the website). Click at least 10 seed points in each image.

Be sure to give a descriptive name to your HSV variable and save it to a .mat file for future
retrieval. Make sure to change the directory path to point to the location of the
OrangeConeTrainingImages directory on the computer. Your code should look something like this:

HSV_cone = getHSVColorFromDirectory('C:\ConeTrainingImages');
save HSV_orange_cone HSV_cone;

To load the HSV_cone values at a later time from the file, just type the line below. It will
recreate the variable HSV_cone in the workspace:

load HSV_orange_cone;

c) Calibrate color detection system

Using the HSV_cone data obtained from the previous section, calibrate your color detection code
to determine the appropriate number of HSV tolerance values, and their magnitudes, to
accurately detect the cones in the training data set. Make sure you use the appropriate HSV
values in your function call. For example, replace HSV with HSV_cone as shown below:
CD = colorDetectHSVimage('C:\ConeTrainingImages\ConeImage1.jpg', median(HSV_cone), [0.03]);
CD = colorDetectHSVimage('C:\ConeTrainingImages\ConeImage1.jpg', median(HSV_cone), [0.05 0.05 0.2]);
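One way to calibrate is to loop over a few candidate tolerance vectors and visually compare the outputs (a sketch; the candidate values below are starting guesses, not calibrated results):

```matlab
tolCandidates = {[0.03], [0.05 0.05 0.2], [0.08 0.1 0.3]};  % guesses to refine
for t = 1:numel(tolCandidates)
    CD = colorDetectHSVimage('C:\ConeTrainingImages\ConeImage1.jpg', ...
                             median(HSV_cone), tolCandidates{t});
end
```

Inspect each displayed result and keep the tolerance vector that detects the cones with the least background noise.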

In the report:

Include the median value for your HSV_cone array
The final HSV tolerance values determined to satisfactorily detect the orange cones
Figures of the original images and color detected images for at least 5 images in the training
image data set using the optimal HSV tolerance values that you determined (the output
figure from the color detection code). From the Figure window, choose Edit → Copy to copy
the figure to the clipboard; this can then be pasted into your report. You can also save the
figure to the computer as a .jpg or other file type by going to File → Save As.


Part 3. Blob Analysis

a) Extract centroid of cone or cones in image

Using Lecture8.m as a guide, perform the necessary image processing steps on the color
detected images to determine the centroid position of the cone or cones in all of the training
images.

These steps should include the following:

Filtering out connected pixel regions smaller than a user-defined threshold
Filling in holes in the image
Labeling of the remaining connected components
Component area extraction
Component centroid extraction
Plotting of the centroid in the color detected image

Note: Your code should be robust enough to handle images with as many as three cones in
the image or as few as 0 cones in the image.

If there are no cones in the image, the areas array will be empty ([]), with size [0 0].
For 1, 2, or 3 cones, the output should be the corresponding centroid positions for
these cones.

Therefore, you will need to search the areas vector to find the three largest area values.
Determine an area threshold to decide if the connected pixel region is large enough to
correspond to an actual cone. If not, then ignore it. If nothing is over the area threshold
then there will be no cones in the image. For the connected regions with areas over the
area threshold, identify their corresponding centroid values in the centroids matrix. Plot
these values on the color detected images. HINT: Once you find the location (idx) of
the max area in the array, you can then set it equal to zero and repeat the procedure to
find the next largest value and corresponding centroid.
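The search described in the hint can be sketched as follows, assuming areas and centroids come from your component-extraction steps; the area threshold is a placeholder you must tune from your training images:

```matlab
areaThresh    = 200;     % placeholder: tune from cone sizes in training images
areasTmp      = areas;   % work on a copy so the original array is preserved
coneCentroids = [];      % one [x y] row per detected cone

for k = 1:3              % at most three cones expected in an image
    if isempty(areasTmp)
        break
    end
    [maxArea, idx] = max(areasTmp);
    if maxArea < areaThresh
        break            % nothing cone-sized left: 0 cones (or no more cones)
    end
    coneCentroids = [coneCentroids; centroids(idx, :)];
    areasTmp(idx) = 0;   % zero it out, then repeat for the next largest
end

% Plot whatever was found on the color-detected image
hold on
if ~isempty(coneCentroids)
    plot(coneCentroids(:, 1), coneCentroids(:, 2), 'g+', 'MarkerSize', 12)
end
hold off
```

Because the loop exits early when nothing exceeds the threshold, the same code handles 0, 1, 2, or 3 cones.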

In the report:
Include copy of your code

Figures of color detected images with computed centroids plotted appropriately in at least 5
training images. This image set must include images with 0, 1, 2, and 3 cones. You will have
to create a new image without any cones to debug and test your code.


Part 4: Vision-Based Navigation

a) Distance Sensing

Using the color detection and blob analysis code from the previous sections, program your
robot to drive towards an orange cone and stop when it is 8" from the cone, after starting it
from a distance of 3' (1 m) away.

Start by grabbing images of the cone from the starting and ending position and determining
the area of the cone in the color detected image. You will want to program your robot to
start driving and constantly grab images. Every time an image is grabbed, run the color
detection and blob analysis code to determine the area of the cone in the image. The area
is a measure of the robot's distance to the cone. Program your robot to stop once the area
is equal to or greater than the area corresponding to a distance of 8" from the cone.
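A sketch of this drive loop is below. The stop area, camera URL, and tolerance are placeholders, largestConeArea is a hypothetical stand-in for your Part 3 blob-analysis code, and SetFwdVelAngVelCreate is assumed to be the Create Toolbox drive command (substitute whatever drive function your toolbox provides):

```matlab
stopArea = 5000;                            % placeholder: cone area at 8"
camURL   = 'http://192.168.2.7/image.jpg';  % your camera IP
tol      = [0.05 0.05 0.2];                 % your calibrated tolerances

SetFwdVelAngVelCreate(serPort, 0.2, 0);     % assumed toolbox command: drive forward
while true
    I = imread(camURL);                     % grab a frame
    imwrite(I, 'temp.jpg');                 % colorDetectHSVimage reads a file
    CD = colorDetectHSVimage('temp.jpg', median(HSV_cone), tol);
    coneArea = largestConeArea(CD);         % hypothetical Part 3 helper
    if coneArea >= stopArea
        SetFwdVelAngVelCreate(serPort, 0, 0);   % close enough: stop
        break
    end
end
```

Measure stopArea by placing the robot 8" from the cone and recording the cone area reported by your blob analysis.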

How accurate are you able to control the stopping position of the robot?

Bonus:

Implement a proportional speed controller for the robot by adjusting the speed of the robot
based on how close it is to the cone. As it gets closer to the goal distance it should slow
down; when it is far away, it can go faster.
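A minimal sketch of such a controller, using the cone area measured from each grabbed image as the distance signal (the gain and speed limits are placeholder values to tune, and SetFwdVelAngVelCreate is the assumed Create Toolbox drive command):

```matlab
% Inside the image-grabbing loop, after computing coneArea and stopArea:
Kp     = 0.2 / stopArea;               % placeholder gain: tune experimentally
err    = stopArea - coneArea;          % shrinks as the robot approaches
fwdVel = min(max(Kp * err, 0), 0.3);   % clamp between 0 and 0.3 m/s
SetFwdVelAngVelCreate(serPort, fwdVel, 0);
```

The clamp keeps the robot from reversing past the goal or exceeding a safe top speed.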

In the report:

Description of your code; copy of your code


How close were you able to stop the robot to the 8" distance from the cone?
Copy of your code from Bonus section, if applicable, and TA signature verification


b) Landmark/Target Tracking

Program the robot to track the target cone for the three cases shown below:

For Case I and Case II, the cone starts off to the left and right of the robot, respectively, with the
cone appearing in the camera FOV. You need to detect the cone and calculate its centroid in
the image and compare it to the coordinates of the center of the image.

To get the size of a matrix in Matlab, you can use the following command:

[nr, nc] = size(I);

Then nr = # of rows in the matrix and nc = # of columns. (If I is an RGB image, request three
outputs, [nr, nc, nz] = size(I);, otherwise nc will absorb the color dimension and be three
times the number of columns.) The center coordinates for the image are:

ImageCenterX = nc/2;


ImageCenterY = nr/2;

Compare the ImageCenterX value against the X coordinate of the cone's centroid, CentroidX:

Xdiff = CentroidX - ImageCenterX;

Depending on the sign of this difference, you can determine if the robot needs to turn CW or
CCW to bring the cone closer to the center of the image. Create a stopping tolerance value,
and stop the robot from turning when the absolute value of Xdiff is less than this tolerance.
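The turning decision can be sketched like this (the tolerance and turn rate are placeholders, SetFwdVelAngVelCreate is the assumed Create Toolbox drive command, and the CW/CCW signs should be verified on your robot):

```matlab
xTol = 20;                  % placeholder stopping tolerance, in pixels
if abs(Xdiff) < xTol
    SetFwdVelAngVelCreate(serPort, 0, 0);      % cone centered: stop turning
elseif Xdiff > 0
    SetFwdVelAngVelCreate(serPort, 0, -0.2);   % cone right of center: turn CW
else
    SetFwdVelAngVelCreate(serPort, 0, 0.2);    % cone left of center: turn CCW
end
```

Run this check on every grabbed image so the robot continuously recenters the cone.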

For Case III, start the robot without the cone in the FOV of the robot. Choose to rotate the
robot either CW or CCW until the cone appears in the FOV and the X coordinate of its centroid
is within the specified tolerance of the image center.

In the report:

Description of your code


Copy of your code
Initial and final images for each of the three cases

c) Obstacle Avoidance

Program your robot for vision-based obstacle avoidance. Set up a course as shown in the
figure on the following page. There are three obstacles (cones) that need to be avoided. One
solution approach is to drive the robot towards the obstacles or targets while rotating the robot
to keep the target in the middle of the image. Then, once the robot is close to the target
(cone), stop the robot and have it rotate so that the original object is no longer in the FOV
and the next target is the largest object in the FOV (using this methodology, you only need the
area/centroid for the largest connected pixel region in the image). Then have it drive to and
track this new target. Once close to target 2, stop the robot, rotate it, and track and drive to
target 3. Once at target 3, rotate the robot to avoid it and then drive to the finish line.

You can decide if/when an obstacle is close to the robot based on the area of the connected
pixel region and instruct the robot to turn accordingly. Note that if you get too close to the
obstacle and are not directly aligned with it, a portion of it may be outside the FOV of the
camera. Thus, the area value that you use to measure distance may be off. You may want to
grab some sample images to see what the obstacles look like when approaching from various
distances and orientations so you can program for the different scenarios.
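At a high level, the solution approach described above might be organized like this (a sketch: trackCone and driveUntilClose are hypothetical stand-ins for your Part 4b tracking and Part 4a distance-sensing code, and turnAngle/travelDist are assumed Create Toolbox motion commands to verify against your toolbox documentation):

```matlab
for target = 1:3
    trackCone();                  % rotate until largest cone is centered (4b)
    driveUntilClose(stopArea);    % drive until its area passes threshold (4a)
    SetFwdVelAngVelCreate(serPort, 0, 0);    % stop near the obstacle
    turnAngle(serPort, 0.2, 90);  % rotate past the cone toward the next target
end
travelDist(serPort, 0.2, 1.0);    % clear the last cone and cross the finish
```

The fixed 90-degree turn is only a starting point; tune the turn angle so the next cone reliably becomes the largest object in the FOV.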


In the report:

Description of your code


Copy of your code
Signature from TA verifying execution of vision-based obstacle avoidance routine.

Part 5. Feedback

In the report:
What did you think of the lab?
Explain any difficulties you encountered when using the robot/doing the lab.
How long did it take you to complete?

