
i-Android

Line Following Robot


Date of Submission: 28/04/2008
Authors: C. Chandrasekara (030056C), P.D.J.B. Karunarathne (040189U), T.C.R. Piyasena (040275F)

Supervisor: Dr. Lanka Udawatta
Co-supervisor: Dr. Chathura De Silva

Coordinator: Mr. Shantha Fernando

This report is submitted in partial fulfillment of the requirements for the award of the degree of Bachelor of Science of Engineering at the University of Moratuwa, Sri Lanka.

Line Following Robot

Project Report

Abstract
There are several categories of robot navigation; sensor based and vision based systems are the two major ones. The sensor based approach uses various kinds of sensors, such as IR sensors and ultrasonic sensors. IR sensors are generally used to measure the difference in reflectivity of surfaces, which depends on properties such as color and roughness. Ultrasonic sensors are used to measure the distance to an object. Robots are built to navigate using these outputs according to the application. The vision based approach uses a vision system to extract the features needed for navigation. The vision system is usually a video or snapshot camera, which can be chosen from the many kinds available in the market. The vision based approach can be customized for many kinds of applications, while the sensor based approach has a low processing overhead.

A line follower is a machine that can follow a path. The path can be visible, like a black line on a white surface (or vice versa), or invisible, like a magnetic field. Sensing a line and maneuvering the robot to stay on course, while constantly correcting wrong moves using a feedback mechanism, forms a simple yet effective closed loop system. As a programmer you get an opportunity to teach the robot how to follow the line, thus giving it a human-like property of responding to stimuli.

i-Android was developed as a vision based system that navigates the robot along a white line marked on a black surface. It also borrows some features from sensor based systems. This document describes the project i-Android: the literature survey, requirements, techniques and technologies used, design and implementation details, problems faced, and future improvements.

Keywords: Navigation, Sensor, Vision, Ultrasonic sensors, IR sensors, magnetic field


Acknowledgement
The completion of the line follower was not an easy task for us. The project was a bit different from other software based projects, and a great deal of hardware and electronics knowledge was required. Many people lent us a helping hand in accomplishing this goal, and we would like to appreciate their guidance and encouragement, since without their support the project would not have been a success.

First of all we would like to thank Mrs. Vishaka Nanayakkara, Head of the Department of Computer Science and Engineering, who gave us tremendous help by providing the necessary guidance for our final year project. It is a pleasure to mention Dr. Lanka Udawatta, Senior Lecturer of the Department of Electrical Engineering and supervisor of the project, who provided invaluable help and guidance throughout the project. As co-supervisor, Dr. Chathura De Silva gave us enormous support, especially by providing required electronic measuring equipment and valuable advice when we were stuck with technical problems. Initially we took some wrong approaches in the design of the control circuits because of our lack of experience, but with his guidance we were able to move onto the right path.

We would then like to thank our project coordinator, Mr. Shantha Fernando, and all the staff members of the Department of Computer Science and Engineering, University of Moratuwa, for their friendly support and encouragement. We are happy to note that each deadline helped drive the development forward and complete the project on time.

We would also like to thank Mr. Nishantha and Mr. Kosala from the Department of Electronic and Telecommunication Engineering for their great support. Mr. Nimesh from Millennium IT Technologies Ltd and Mr. Thusitha Samarasekara from Dialog Telekom Ltd gave us a helping hand with microcontroller based problems; their practical experience helped us make some important design decisions, and they even allowed us to use their workshops in some urgent situations. We would also like to appreciate the support given by Mr. Udaya Sampath Karunathillake, Ms. Banie Abeywickrama and Mr. Asanka Wikramasinghe in the process of electronic designing.

We must also mention some of our colleagues in the Department of Mechanical Engineering, who gave us great support in making some of the mechanical components required for the prototype.

Our team worked together as a family, sharing all the happiness, hardships and even personal matters during the last several months. Each member contributed their maximum to make this a success. We would like to thank all the other colleagues not mentioned here for the great support they provided.


Table of Contents
1. Introduction
   1.1 Sample Scenarios
      1.1.1 Automated Production Processes
      1.1.2 Baggage Carrier Systems
2. Background and Literature Review
   2.1 Background
      2.1.1 Available controlling systems
      2.1.2 Vision Systems
      2.1.3 Existing platform
   2.2 Literature Survey
      2.2.1 Line detection techniques
         2.2.1.1 Edge detection
         2.2.1.2 Hough transformation
         2.2.1.3 Sensor based technique
      2.2.2 C++
      2.2.3 Open source computer vision library
      2.2.4 C++ threading using windows threading API
3. Design and Implementation
   3.1 Software System
      3.1.1 Overall Design
      3.1.2 Server Application
      3.1.3 Client Application
      3.1.4 Automatic Traversing Module
   3.2 The Hardware Prototype
      3.2.1 Steering mechanism
         3.2.1.1 Linear Movement - Forward Direction
         3.2.1.2 Movement along a curved path
         3.2.1.3 Sudden Rotation
         3.2.1.4 Changing the Rotation Speed
         3.2.1.5 Changing the Rotation Direction
         3.2.1.6 Locking the Motors
      3.2.2 Controlling the Parallel port
      3.2.3 Hardware Component with 16F877 PIC [7]
         3.2.3.1 The Oscillator for clocking the Micro Controller
         3.2.3.2 I/O Port D for the additional Sensors
         3.2.3.3 Built-in Pulse Width Modulators
         3.2.3.4 RS 232 Communication module
      3.2.4 Infrared Sensor Support System
      3.2.5 Obstacle Detection
         3.2.5.1 Development of Ultrasonic Distance measuring Component
         3.2.5.2 Master Device
         3.2.5.3 Slave Device
5. Discussion
6. Conclusion
7. Future Enhancements
8. Abbreviations
9. References


Table of Figures
The Basic Structure of i-Android
Overall Design
Server Application
Client Application
User Interface - i-Android Client Application
Automatic Traversing Module
Model Image
Steering Mechanism
Linear Movement - Forward Direction
Movement along a curved path
Sudden Rotation
Changing the rotation speed
Circuit Design
The implemented Circuit
Test Console in Visual Basic
Pin Diagram - 16F877 PIC
The Oscillator for clocking the Micro Controller
Built-in Pulse Width Modulators
Block Diagram
Absolute Maximum Ratings
ULN 2803 Pin diagram
RS 232 Communication module
Test Console in C#
Ultrasonic Sensors
SRF 04 Connections


1. Introduction
Robotics has become very common in most developed countries. High performance, high accuracy, lower labor cost and the ability to work in hazardous places have given robots an advantage over many other technologies. Third world countries like Sri Lanka, however, are still not very familiar with the use of robots. There are two reasons behind this: firstly, the high initial cost of importing robots and the associated software, and secondly, misconceptions about the capabilities of robots in day to day applications.

i-Android can be considered a mobile robot, or a mobile platform, that can follow a line, ideally a white line marked on the ground. The line may contain bends, turns, dead ends, etc., and an improved version of i-Android would deal with all such path characteristics with high precision. The basic objective of i-Android is to provide cost effective solutions for industrial, commercial and security applications, but it is not restricted to them. A slightly modified version of i-Android could even be used creatively for home applications. Here we have selected some sample scenarios to highlight useful occasions.

1.1 Sample Scenarios

1.1.1 Automated Production Processes

Industrial automation can be considered a new arena of modern technology. Most production organizations now tend to automate their production processes, replacing humans with machines. The major reasons for this trend are lower operating cost, precision and accuracy, safety and health hazard considerations, higher reliability, etc. i-Android provides a good solution for some parts of such automation, such as handling raw materials and stocks. Generally those raw materials and processing units or conveyers are located in fixed positions, so a path between them can easily be marked permanently. Hence the technology of i-Android can easily be used with suitable machinery. As an example, the need for a forklift that carries raw material from a stockpile to a conveyer belt can easily be eliminated by setting up a white line between the stockpile and the conveyer belt. This doesn't mean that i-Android can do it alone, but in combination with robotic arms and object detection techniques it is not an intricate task. In that way a properly planned organizational structure can reduce labour cost effectively. In this sample scenario it eliminates the need for a separate operator for each and every forklift, reducing the operating cost of the production process drastically. In most countries, including Sri Lanka, there are regulations related to health hazards and the safety of employees; in countries with industrial economies they are really essential. In applications with environments unsafe for humans this technology may be really useful: a properly designed machine can work properly in environments with radioactive or chemical hazards.

1.1.2 Baggage Carrier Systems

Consider a sample workflow of an international airport. Clearly there is a need to carry luggage, air mail parcels, etc. to customs and security checks. They are then issued to their owners, who need to carry them outside the airport.

For this purpose, baggage carriers similar to conventional trolleys are used, and a human has to push the trolley to its destination.

i-Android can make this whole process easy and safe. It can carry the bags from the aircraft to the security check points without the aid of a human, avoiding the potential risk of bombs and similar explosives.

The passengers' bags can then be carried to a vehicle without passenger interaction, providing a more comfortable journey.



These are only a few scenarios in which the i-Android system may come in very useful. If the system is further enhanced to provide improved functionality, it would clear the path for this product to contribute to many practical applications.



2. Background and Literature Review


2.1 Background
When we started the project we did not have a clear idea about robot systems or the prototype we were going to use, which had been developed by a group from a previous batch. So we analyzed the existing prototype and became familiar with it. We also gained knowledge about existing robot platforms.

2.1.1 Available controlling systems

An almost endless variety of computers can be used as a robot's brain. The most common types used are [1]:

Microcontroller

Microcontrollers are the preferred method for endowing a robot with smarts. The reasons for this include their low cost, simple power requirements (usually 2.5 to 5 V), and the ability of most to be programmed using software and a simple hardware interface on a PC. Once programmed, the microcontroller is disconnected from the PC and operates on its own. Microcontrollers are programmed either in assembly language or in a high-level language such as BASIC or C. There are literally hundreds of different microcontrollers, with a range of different interfacing capabilities, that you can choose from to control the robot.

Personal Digital Assistant (PDA).

A PDA provides a lot of processing power in a fairly small space, with a number of features that make it very attractive as a robot controller. Personal digital assistants can be used as small robot controllers that combine many of the advantages of microcontrollers, larger single-board computers, and PC motherboards and laptops. The built-in power supply and graphic LCD display (with Graffiti stylus input) are further advantages, eliminating the need to supply power to the PDA and providing a method of entering parameter data, or even modifying the application code, without a separate computer. The most significant issue encountered when using a PDA as a robot controller is deciding how to interface it to the robot's electronics. PDAs are becoming increasingly popular as robot controllers, and there are a variety of products and resources that make the effort easier.

Single-board computer

A few years ago, complete computer systems built on a PCB were the preferred method of controlling robots. These systems are still used but are much less popular, due to the availability of low-cost PC motherboards and more powerful, easier-to-use microcontrollers. There are a number of robots controlled by single-board computers (SBCs). Like microcontrollers, an SBC can be programmed either in assembly language or in a high-level language such as BASIC or C, and contains not only the processor and memory but also the I/O interfaces necessary to control a robot. SBCs avoid the programming issues of microcontrollers thanks to built-in RS-232 or Ethernet interfaces, which allow simple application transfers.

Personal computer motherboards and laptops.

Very small form factor PC motherboards and laptops are common controllers for larger robots. Having your personal computer control your robot is a good use of available resources, because you already have the computer to do the job. Just because the average PC is deskbound doesn't mean it can't be mounted on the robot and used in a portable environment. These controllers can be programmed using standard development tools and commercial digital I/O add-ons for the interfaces needed for the different robot functions.

2.1.2 Vision systems

Sensor based vision:

This is the most widely used method in robot construction because it is portable, low cost, fast to process, and available in a variety of forms. In robotics, sensors are used in many implementations, such as object detection and path detection (line detection using colour variation). They can also be easily integrated with microcontroller based controlling systems, which are also widely used.

Video vision:

Single-cell and multicell vision systems are useful for detecting the absence or presence of light, but they cannot make out the shapes of objects. This greatly limits the environments into which such a robot can be placed. By detecting the shape of an object, a robot may be able to make intelligent assumptions about its surroundings and, perhaps, navigate them. A video system for robot vision need not be overly sophisticated. The resolution of the image can be as low as about 100 by 100 pixels (10,000 pixels total), though a resolution of no less than 300 by 200 pixels (60,000 pixels total) is preferred [1]. The higher the resolution, the better the image and therefore the greater the robot's ability to discern shapes. Video systems that provide a digital output are generally easier to work with than those that provide only an analog video output. You can connect digital video systems directly to a PC, such as through a serial, parallel, or USB port. Analog video systems require a video capture card, a fast analog to digital converter, or some similar device to be attached to the robot's computer. While the hardware for video vision is now affordable to most robot builders, the job of translating a visual image into something a robot can use requires high-speed processing and complicated computer programming. Giving robots the ability to recognize shapes has proved to be a difficult task. Consider the static image of a doorway. Our brains easily comprehend the image, adapting to the angle at which we view the doorway; the amount, direction, and contrast of the light falling on it; the size and kind of frame used; whether the door is open or closed; and hundreds or even thousands of other variations. Robot vision requires that each of these variables be analyzed, a job that requires computing power and programming complexity beyond the means of most robot experimenters.


2.1.3 Existing platform

The robot platform we were given to modify was a final year project done by a team from a previous batch (batch 02). Their intention was to develop a general purpose Mobile Robot Platform (MRP) which could be customized to do various tasks; it is essentially an educational robot. The robot consists of a rich set of sensors and actuators, so it provides a development platform for robotics or embedded systems researchers. The mobile robot platform provided a hardware platform and a software layer (application programmer interface) to control the robot. The MRP enables the programmer to accurately direct the robot in a given direction at a given speed, navigate through obstacles in finding a designated target, turn through desired angles, detect the presence of obstacles with IR sensors, detect collisions not sensed by the IR sensors using bumper switches, and visually display the robot's path and surroundings through a live video feed from a camera mounted on the robot.

So at the beginning we examined how we could customize the existing platform to suit our requirement of line following. But we learned that most of the integrated parts (modules) were not working due to improper maintenance, so we had to remove them under the guidance of Dr. Chathura De Silva and rebuild the modules needed for our task, which made our work more complex.



2.2 Literature Survey


2.2.1 Line detection techniques

To detect the white line, several methods were tested.

2.2.1.1 Edge detection:

Here Canny edge detection [3] was tested over the sample images. The steps in the Canny edge detection algorithm are:

- Smooth the image with a Gaussian filter.
- Compute the gradient magnitude and orientation using finite-difference approximations for the partial derivatives:
  - The gradient in the two directions (Gx, Gy) is calculated with the Sobel operator, and the magnitude is approximated as |G| = |Gx| + |Gy|.
  - The direction is calculated as θ = tan⁻¹(Gy/Gx) and related to the four directions that can be traced in an image.
- Apply non-maxima suppression to the gradient magnitude (go along the edge in the edge direction and set pixels that are not considered to be an edge to 0).
- Use the double thresholding algorithm to detect and link edges.

How the different parameters of the Canny algorithm can be varied under different conditions:

- The size of the Gaussian filter
  - Smaller filters cause less blurring and allow detection of small, sharp lines.
  - A larger filter causes more blurring, and the localization error in the detected edges also increases slightly.
- Thresholds
  - The upper tracking threshold can be set quite high and the lower threshold quite low for good results. Setting the lower threshold too high will cause noisy edges to break up. Setting the upper threshold too low increases the number of spurious and undesirable edge fragments appearing in the output. Setting the upper threshold too high and the lower threshold too low will miss important information.

Operators such as Roberts, Sobel, and Prewitt can be applied to get the gradient of the image:
- The Sobel operator can be used to highlight horizontal and vertical lines.
- The Roberts operator can be used to identify 45 degree lines.

If the image has low contrast, some additional processing should be done before applying this method for better results. Since the images are captured using a webcam that moves along with the robot, they have low contrast as well as some blurring due to the motion. Considering the processing overhead and the accuracy on low contrast images, this method was rejected.

2.2.1.2 Hough transformation:

The Hough transform [4] is a feature extraction technique used in image analysis, computer vision, and digital image processing. The purpose of the technique is to find imperfect instances of objects within a certain class of shapes by a voting procedure. The classical Hough transform was concerned with the identification of lines in an image, but it has since been extended to identifying the positions of arbitrary shapes, most commonly circles or ellipses.

Algorithm: A line in image space with co-ordinates x and y can be written as

y = y0 + m*x ... (1)



The Hough method transforms this line into the parameter space generated by m and y0. For each pixel (x, y) in the original image there is a corresponding "line" in parameter space:

y0 = y - x*m ... (2)

The Hough transformation creates a discrete parameter space, called an accumulator, initialized with zeros, and transforms every pixel in the original image according to (2). All accumulator cells found are incremented. In the end the highest-valued accumulator cell is sought; its parameters represent the most probable line in the original image. The Hough transformation also has a modified version to detect curves. When using the line identification method, slightly deviated lines are identified as separate lines, and it cannot be used to identify curved sections. So this method was also found rather difficult to work with.

2.2.1.3 Sensor based technique

A sensor based robot uses IR sensors to sense the line. As an example, take an array of 8 IR LEDs [2] and sensors facing the ground, which is a commonly used method. The output of each sensor is an analog signal that depends on the amount of light reflected back; this analog signal is given to a comparator to produce 0s and 1s, which are then fed to processing.

Sensor array (left to right, numbered outward from the center):

L4  L3  L2  L1  |  R1  R2  R3  R4
(Left)       (Center)       (Right)



Starting from the center, the sensors on the left are named L1, L2, L3, L4 and those on the right are named R1, R2, R3, R4. Assume that when a sensor is on the line it reads 0, and when it is off the line it reads 1. The next move is decided according to the position of the robot; the desired state is that L1 and R1 both read 0 and the rest read 1.

Desired state (L1 = R1 = 0, rest = 1):

L4  L3  L2  L1  R1  R2  R3  R4
 1   1   1   0   0   1   1   1

Algorithm:

1. L = position of the leftmost left-side sensor which reads 0; R = position of the rightmost right-side sensor which reads 0. If no sensor on the left (or right) reads 0, then L (or R) equals 0. For example:

L4  L3  L2  L1  R1  R2  R3  R4
 1   1   0   0   1   1   1   1

Here L = 2, R = 0.

L4  L3  L2  L1  R1  R2  R3  R4
 1   1   0   0   0   0   0   0

Here L = 2, R = 4.

2. If all sensors read 1, go to step 3; else:
   If L > R, move left.
   If L < R, move right.
   If L = R, move forward.
   Go to step 4.

3. Move clockwise if the line was last seen on the right; move counter-clockwise if the line was last seen on the left. Repeat step 3 until the line is found.

4. Go to step 1.

After analyzing all these techniques, we decided to model the method used in the sensor based system in our vision system, because it is less computationally expensive, less complex, and more flexible.


2.2.2 C++

When we started the project we did not have a clear idea of which language we were going to use. We had several options, such as C#, C++ and Java, but after considering the relative performance of the languages we decided to go with C++. Hence we had to study the language, as none of us had used it before, especially pointers, references, and manual memory allocation and freeing. This took us some time.

2.2.3 Open source computer vision library (OpenCV)

We decided to use the OpenCV library for the image processing task, so we did some research on it to become familiar with its functionality. The open source computer vision library is developed in C/C++ and is optimized and intended for real-time applications [5]. It is also OS/hardware/window-manager independent, and offers generic image/video loading, saving, and acquisition. The key features are:

- Image data manipulation (allocation, release, copying, setting, conversion)
- Image and video I/O (file and camera based input, image/video file output)
- Matrix and vector manipulation and linear algebra routines (products, solvers, eigenvalues)
- Various dynamic data structures (lists, queues, sets, trees, graphs)
- Basic image processing (filtering, edge detection, corner detection, sampling and interpolation, color conversion, morphological operations, histograms, image pyramids)
- Structural analysis (connected components, contour processing, distance transform, various moments, template matching, Hough transform, polygonal approximation, line fitting, ellipse fitting, Delaunay triangulation)
- Camera calibration (finding and tracking calibration patterns, calibration, fundamental matrix estimation, homography estimation, stereo correspondence)
- Motion analysis (optical flow, motion segmentation, tracking)
- Object recognition (eigen-methods, Hidden Markov Models)
- Basic GUI (display image/video, keyboard and mouse handling, scroll-bars)
- Image labelling (line, conic, polygon, text drawing)

2.2.4 C++ threading using windows threading API

C++ does not directly support threading functionality, so we had to implement it ourselves. We studied the Windows thread API [8] and implemented the threading functionality in our application: the code calls the Windows thread API to create a thread, which then runs the actual functionality we want to get done.



3. Design and Implementation


This chapter contains all the information related to the design and implementation process of i-Android. We made some design changes iteratively, especially in the hardware design, as the project evolved; so the design details, as well as the implementation, are discussed within the same chapter. With experience we could identify more suitable technologies. The tested designs, design changes, and the reasons and justifications for the changes are described in detail in this chapter.

The Basic Structure of i-Android

i-Android is a project that is slightly different from many other Computer Engineering related projects. There are two major components:

1. Hardware Prototype
2. Software System

The robot performs its movements through the collaboration of both subcomponents. Both components are equally responsible for the proper operation of the system.



3.1 Software System


3.1.1 Overall Design

Figure: Overall Design (Camera, Server application, Remote application, Automatic traversing module)

3.1.2 Server Application

Figure: Server Application (Image Capturing Module, Serial Port Communication Module, Core Module (Server), Message Send/Receive Communication Module, Video Streaming Module)



The i-Android server application is the core application run when the robot is operated in manual control mode. It basically allows the user to drive the robot in the desired direction. The user can use a remote computer to connect to the robot through the network, and can then give commands using the keyboard or the control pad provided in the client application. Details of the client application are given later.

Web Cam
The main objective of this system is to enable the robot to traverse by itself by following a white line. Images are captured by an ordinary webcam. The system must be able to detect the camera plugged into the built-in computer on the robot platform. In the server application the webcam is detected and live video is sent to the remote computer, from which the user can operate the robot manually.

Image Capturing Module
This is an important module in the i-Android server application. It accesses the webcam and captures images so that it can provide a real-time video of the robot's front view. This image stream is fed to the core application, which handles the input for the next steps of the procedure, explained later.

Serial Port Communication Module
This module carries the major responsibility of communicating over the serial interface; it is the link between the software and the hardware. It is used for sending data to and receiving data from the hardware system. Its main capabilities are as follows:

- Change right motor duty ratio
- Change left motor duty ratio
- Get the input from the ultrasonic sensors
- Get the IR sensor reading
- Stop both motors
- Reset both motors to forward rotation
- Lock the right/left motors
- Invert the rotation direction of the right/left motors

More detailed descriptions of these functions are given later, under the hardware design and implementation. This module provides the interface through which the server application can easily access the hardware system to give instructions and receive information.

Message Send/Receive Communication Module
This is a separate communication module used to exchange messages with the i-Android client application. The server application maintains two main communication streams: one for video streaming and one for message communication. The two streams are kept separate for convenience.

Video Streaming Module
This is the communication scheme used for sending the video stream to the i-Android client application. It was designed as a module separate from the message communication module. The video streaming function is very important for the manual control mode of the robot: the remote user is provided with live video of the robot's front view.

Core Module (i-Android Server Application)
The core module of the i-Android server application carries the major responsibilities listed below:
- Handling video input from the image capturing module
- Driving the port communication module in order to control the hardware
- Interacting with the message communication module
- Handling the video streaming procedure
- Changing the controlling mode on request (manual/automatic)
- Managing serial port communication (avoiding conflicting access)

The core module therefore manages all the major tasks carried out by the i-Android server application, and can be considered one of the most important modules of this system.

27

Line Following Robot 3.1.3 Client Application

Project Report

(Figure: client application modules: Message Send/Receive Communication Module, Video Streaming Module, Core Module (Client), and Keyboard Input Handler.)

The i-Android client application is the basic interface through which the remote user (administrator) controls the robot manually. No automation is provided in this mode; the user is entirely responsible for the robot's traversal. The user is shown the live video captured by the webcam on the robot platform. The client application runs on a remote computer, communicating over a wireless network, and the user can control the robot for as long as the network connection is active.

Core Module (Client Application)
The core module is responsible for the major tasks listed below:

- Maintain message communication with the i-Android server application
- Receive the video stream from the server application and display it to the user
- Get keyboard input from the user (traversal directions for the robot)

Several modules are provided for these tasks to be achieved. Details of those will follow.

Keyboard Input Handler
The keyboard input handler listens for keyboard events and generates the appropriate signals to be sent to the server application. This applies when the robot is controlled manually by a remote user at a remote computer: the keyboard input originates there, and the corresponding signals must be sent to the server so that it can generate the matching hardware commands. Communication with the robot hardware itself is handled by the server application.

Message Communication Module
This is the client-side counterpart of the message communication module on the server side. Its major task is to send the appropriate signal to the server application whenever the user issues a control command from the client side. This has to happen in real time, and the delay should be minimized for better performance. A communication protocol is used to avoid confusion when sending control signals; important commands, such as switching to automatic mode, are sent through this communication stream.

Video Streaming Module

The video streaming module in the client application maintains real-time communication with the server and delivers the live video to the core module, which presents it to the user. Because this happens over the network, some delay may occur depending on the circumstances; given sufficient bandwidth, better results can be achieved. This was taken into careful consideration in the design phase, and steps were taken to minimize the probable delays.

User Interface: i-Android Client Application


This is the basic view of the i-Android client application. It provides the user with the front view of the path, using the real-time video streaming module running in the server application. The client application provides several functions:
- Manual traversal of the robot using the arrow keys (manual control mode)
- Switching from manual mode to automatic mode
- Switching from automatic mode to manual mode

3.1.4 Automatic Traversing Module

(Figure: automatic traversing module: video preprocessor, image processor, line traversing module, and port controller.)

Video Preprocessor

This module interacts with the camera: it captures the video output and separates it into frames, which are then transferred to the image processing module.

Image Processor
This module identifies the line and passes those details to the line traversing module. The basic logic behind it is as follows.

Following the model used by sensor-based systems, we scan a few points in the image, as shown above. The distance between two adjacent points should be 75% of the width of the line, so that at any time one or two of the points lie on the line. From which points are on the line, the decision on speed and rotation can be made.

Line Traversing Module

This module decides the speeds of the two wheels according to the inputs given by the image processor and the sensors. The traversing algorithm is as follows.

Let us label the three scanned points LEFT, MIDDLE, and RIGHT, where a reading of 0 means the point lies on the line:

DO
    IF (Left = 0) THEN
        IF (Middle = 0) THEN
            IF (Right = 0) THEN
                LeftMotor = 0            // Obstacle: stop (interruption)
                RightMotor = 0
            ELSE
                LeftMotor = 75           // Slow turn left
                RightMotor = 100
            ENDIF
        ELSE                             // Only left point on line
            LeftMotor = 50               // Hard turn left
            RightMotor = 100
        ENDIF
    ELSE
        IF (Right = 0) THEN
            IF (Middle = 0) THEN
                LeftMotor = 100          // Slow turn right
                RightMotor = 75
            ELSE                         // Only right point on line
                LeftMotor = 100          // Hard turn right
                RightMotor = 50
            ENDIF
        ELSE
            IF (Middle = 0) THEN
                LeftMotor = 100          // Straight down the middle
                RightMotor = 100
            ELSE
                LeftMotor = 0            // Lost line: stop
                RightMotor = 0
            ENDIF
        ENDIF
    ENDIF
    PAUSE
LOOP                                     // Repeat

This is the basic algorithm we use. When a decision cannot be made (for example, when all three points are detected as being on the line) and in the stop condition, we use the backup IR sensor-based system to verify whether the line is actually there. We also use the ultrasonic sensors to detect obstacles; when an obstacle is found, the automatic mode stops the motors and hands control back to the manual mode.

Port Controller
This module calls the serial port interface and sends the control signals generated by the line traversing module. It also fetches the sensor inputs requested by the traversing module. It is capable of all the functionality provided by the port communication module in the server application.


3.2 The Hardware Prototype


We received a robot platform from the Department. It had been used for a final-year project by a group of our graduates a few years ago, but since it had not been stored carefully, the internal mechanism was completely destroyed: there was heavy physical damage as well as damage due to dust, moisture, and so on. We therefore needed to reconstruct the whole hardware platform, as well as the software system, to achieve our goal.

3.2.1 Steering Mechanism
The platform has a steering mechanism consisting of two driven wheels and one free wheel.

(Figure: wheel configuration: two driven wheels and one free wheel, with the direction of movement indicated.)

The wheels are driven by two geared DC motors which, according to the information we received, were originally used in a toy vehicle. For that reason no specification was marked on the motors and no datasheet was available, so we had to base the design on practically measured values. The movement operations of this configuration are performed as follows.


(Figure: wheel layout: left wheel, right wheel, and free wheel.)

3.2.1.1 Linear Movement Forward Direction

Both the left wheel and the right wheel are rotated with the same angular velocity, so the robot moves along a path parallel to the axis AB.

(Figure: both wheels driven equally; the robot travels parallel to the axis AB, from A to B.)

3.2.1.2 Movement along a curved path

Suppose a path curves in the anticlockwise direction, and the robot must navigate along the curved line from A to B.

(Figure: an anticlockwise curve from A to B.)


To achieve this, the wheels must be rotated differentially. In this scenario the left wheel should rotate at a lower angular velocity than the right wheel, so that the resultant path is a curve similar to AB. To traverse a curve in the clockwise direction, the angular velocity of the left wheel should instead be greater than that of the right wheel. These velocities must be calculated appropriately.

Suppose:
    the radius of the wheel = r
    the angle rotated (in radians) = θ
    the length of the arc = L

Then L = r θ. [1] Dividing both sides by T (the time taken for the rotation), we get

    L / T = r (θ / T),  i.e.  v = r ω

Here v = L/T denotes the linear velocity of the wheel and ω = θ/T denotes the angular velocity of the wheel. Since the radii of the wheels are equal and constant, the linear velocity is directly proportional to the angular velocity. So by changing the rotation speeds we can easily obtain a curved motion.


(Figure: anticlockwise curve: the left wheel rotates at a lower angular velocity (lower linear velocity) and the right wheel at a higher angular velocity (higher linear velocity).)

3.2.1.3 Sudden Rotation
Suppose the robot needs to perform a sudden rotation. Consider the following path.

(Figure: a path with sharp turns at B and C.)


When dealing with turns like B and C, the platform must perform sudden rotations. This can be done by setting the angular velocity of one wheel to zero, so it can be considered a special case of the scenario above. Mechanically, however, it requires a different mechanism: even if the power to a single motor is cut off, the motor may still rotate freely, i.e. the angular velocity of the wheel remains greater than zero. A locking mechanism is therefore needed to obtain an accurate result.

(Figure: sudden rotation: the left wheel is locked and the right wheel rotates at a higher angular velocity, so the robot pivots about an axis near the locked wheel.)

In addition to the mechanisms above, another technique was used to obtain the desired motion: by rotating the motors in opposite directions, the system achieves an even quicker rotation.


(Figure: spin turn: the left and right wheels rotate in opposite directions; if their speeds are equal, the axis of rotation lies midway between them.)

Considering all the possible scenarios, the ideal design should be able to perform the following operations:
1. Changing the rotation speed
2. Changing the rotation direction
3. Locking the motors

3.2.1.4 Changing the Rotation Speed.

The motors in the given prototype are permanent-magnet DC motors, so the best speed-control mechanism available for them is Pulse Width Modulation (PWM). A circuit based on common digital and analog electronic techniques was designed and implemented for this purpose. Eight speed levels were defined; the basic functionality of the circuit is demonstrated below.


The functionality of the design can be explained as follows. The computer program that performs the speed control of the motors generates a 3-bit output on the parallel port according to the intended speed level.

Output | Speed Level
000    | Stopped
001    | Speed Level 1
010    | Speed Level 2
011    | Speed Level 3
100    | Speed Level 4
101    | Speed Level 5
110    | Speed Level 6
111    | Maximum Speed

A parallel port provides 8 data bits, but in this design only 3 were used, for the following reasons.


- The other bits can be kept reserved for maintaining other functionality, such as changing the rotation direction.
- Only a small current of a few milliamperes can be drawn from the parallel port, but a higher current is needed to drive the next stages. Some device, preferably an IC, is needed to overcome this, and it is more effective to use the same IC both to boost the current and to save bits on the parallel port.

A 3:8 decoder is employed in the next stage to decode the 3-bit input into 8 separate outputs. The current along each output link is adjusted using an array of variable resistors. The pulse generator then changes the duty ratio of the PWM according to the voltage applied to the reference pin of the PWM module, and the power control module lets the system drive the high-current motors without hardware damage.

According to the above flow diagram the following circuit was designed.

3.2.1.5 Changing the Rotation Direction

There are several technologies available for this problem. Here, a design based on 12 V relays was created, in view of the high current that must be supplied to the motors. Another 2 output bits from the parallel port were used to change the rotation directions.

Output Pattern | Motor 1  | Motor 2
00             | Forward  | Forward
01             | Forward  | Backward
10             | Backward | Forward
11             | Backward | Backward

3.2.1.6 Locking the Motors

There are generally several ways to lock a motor (in this case a permanent-magnet DC motor):
1. By short-circuiting the power input pins of the motor.
2. By adding an additional mechanism, preferably a solenoid, to lock the wheel mechanically.
A design using the first method was developed, but its implementation was postponed until the first two operations were fully working.

The Implemented Circuit

3.2.2 Controlling the Parallel Port

To test the hardware system, an application that can control the parallel port was required, so one was developed. Since it is just a test console, it was written in Visual Basic to save time.

After successful integration, the prototype was tested by giving inputs manually from the keyboard, but we observed the following problems with the design:
1) The motors of the prototype were not identical; they differed noticeably in revolutions per second, so the prototype's path was not a straight line. This made clear the need for separate PWM modules and a feedback mechanism for each motor.
2) The stability of the circuit was insufficient: changes in rotation speed could be observed without any change in external factors.
3) The MOSFET dissipated a lot of energy as heat, so there was a risk of device damage.


Because of all these factors, we decided to make a major change in the hardware design: the new design is based on microcontrollers.

3.2.3 Hardware Component with 16F877 PIC [7]

The system includes a hardware component with semi-autonomous decision-making capability, provided by a PIC microcontroller. Considering modifiability, cost, and performance, a 16F877 microcontroller was used in this design phase.


The basic design can be subdivided into several parts according to the functionality of each component:
1) The oscillator for clocking the microcontroller
2) I/O Port D for additional sensors
3) Two built-in pulse width modulators
4) RS-232 communication module

3.2.3.1 The Oscillator for Clocking the Microcontroller
In the proposed design, the oscillator stage consists of a 4 MHz to 20 MHz crystal oscillator.

(Figure: crystal oscillator circuit: a 4 MHz crystal with two 22 pF capacitors.)


An alternative design using a resonator can be proposed; it eliminates the need for the capacitors and provides more stability. The availability of the electronic components in the local market was the major factor in component selection.

(Figure: resonator-based clock circuit, 4 MHz.)

3.2.3.2 I/O Port D for the Additional Sensors
We decided to add an infrared sensor array to support the core vision system in identifying the white line. Port D of the microcontroller was used as an input port for this purpose.

3.2.3.3 Built-in Pulse Width Modulators
To control the speed of the two DC motors independently, the built-in pulse width modulators of the microcontroller were used. Each provides a duty cycle adjustable by a user-defined input value.


(Figure: each PWM output feeds its own power control circuit.)

The output of the microcontroller is a low-current output incapable of driving the DC motors, so an additional power control circuit was designed for this purpose. The L298 [6], a dual full-bridge driver IC, is employed to handle this. The IC also provides polarity inversion and braking capabilities, which can be considered additional advantages.


According to the practical readings we took, the average current drawn by a single DC motor lies between 3 and 4 amperes. According to the datasheet of the L298 dual full-bridge driver, this exceeds the maximum allowable limit per channel.


The power circuit was therefore designed using the two channels in parallel, which in principle doubles the absolute maximum current rating.

The bidirectional motor control mechanism was used with the parallel channels. 1N4001 diodes were added to protect the equipment from damage due to back electromotive force. The basic configuration is presented in the following figure.


We used a combination of the above configurations to gain bidirectional rotation capability as well as a higher current rating. At the beginning we tried to connect the microcontroller directly to the L298 motor driver IC, but when we tried to power up the motor, it stopped suddenly and the outputs went to logic zero. After reading some resources we understood the reason for this behaviour: when current is applied to the circuit, the L298 module draws a relatively high current from the microcontroller, causing the microcontroller to reset. We therefore needed an isolation mechanism between the microcontroller and the L298 ICs. After researching possible mechanisms, we found the following methods to resolve the problem:
1. Using optocouplers
2. Using a gate IC simply to increase the current
3. Using transistors as amplifying devices
We first tried optocouplers as the isolation device, but in practice the control circuit failed to achieve the extremely low speeds; the reason was distortion of the waveform caused by the switching delay. After researching possible solutions, we found a highly effective one: an octal peripheral driver consisting of eight Darlington pairs.

ULN 2803 Pin Diagram

Internal Architecture of ULN 2803

Because of its open-collector output configuration, there is no separate power requirement for the IC, but the original waveform gets inverted. We therefore inverted our speed-specifying protocol to deal with this.

3.2.3.4 RS-232 Communication Module

The basic block diagram for the complete functional system would be as follows.

(Figure: block diagram: video input feeds the software vision system, which talks to the microcontroller over the RS-232 link; the microcontroller drives the motor controller circuit and receives the additional sensor inputs.)

The communication between the software vision system and the microcontroller (marked in red in the diagram) is handled by the RS-232 communication module. In designing this stage we considered some important factors, such as the communication mode. The protocol used is RS-232, via the COM port of the attached motherboard. Since the communication should be full duplex for maximum throughput, there are two parallel paths, one for transmission and one for reception. In addition, a conversion mechanism is required to convert between RS-232 and TTL levels.


According to the specifications provided by Maxim Semiconductors, a design using pins 11, 12, 13, and 14 of the MAX232 is suggested.

Pins 13 and 12 provide RS-232 to TTL conversion, and pins 11 and 14 provide the reverse. The COM port to MAX232 link therefore uses the RS-232 protocol (13 V peak to peak), which the MAX232 converts to TTL levels (5 V). The hardware USART module of the 16F877 is employed on the microcontroller side. We used a Vero board for initial construction and testing, but the troubleshooting capability of that circuit proved to be extremely poor, so we produced an OrCAD design and developed a printed circuit board.


To test the functionality of the port and for calibration purposes, a test console was developed. It contained the basic control functions to be operated on the prototype and was written in C#.

To facilitate further development, an API with some useful methods was created, in both C# and C++ versions. The following methods were implemented to interact with the prototype:
void acknoledge(void);
void SendIRReading(void);
void SendLeftEncoderCount(void);
void SendRightEncoderCount(void);
void SetLeftPWMValue(void);
void SetRightPWMValue(void);

void SetRotationDefaults(void);
void InvertLeftMotor(void);
void InvertRightMotor(void);
void LockLeftMotor(void);
void LockRightMotor(void);
void StopMotors(void);
void getUltrasonicOutPut(void);

3.2.4 Infrared Support Sensor System


There are cases in which the computer vision based navigation system fails. For example, where the light intensity is high, the camera may produce images with a glare effect, and the software will not recognize the intensity change properly. To deal with this we decided to add an additional IR sensor array. Normally the IR sensor array is given a low weight in the decision-making process, but when the vision system fails, the core algorithm uses the IR array readings for navigation, increasing the probability of accurate traversal. Each sensor element consists of an infrared emitter, an infrared receiver, and a comparator. The transmitter unit emits a beam of infrared light, and the receiver is installed parallel to the transmitter. When the beam falls on a light-colored surface it reflects back strongly; on a black surface the reflection is extremely low. The receiver and the comparator detect this difference and generate a digital output.

3.2.5 Obstacle Detection
A robot generally has a compulsory need to avoid obstacles. Since our system is aimed at high-power, heavy applications, this is a critical issue because of the safety requirements of the scenario. Various techniques are used to detect obstacles in robotics:

1. Bumper Switches: Bumper switches are generally built from micro switches. When the switch touches an obstacle or a surface, it


is pressed, closing the circuit. A well-designed array of bumper switches can roughly identify the direction and shape of the obstacle, and the microcontroller can take a decision accordingly. Since a collision occurs before detection, this method is only useful for lightweight prototypes with low momentum; otherwise the switches or the collision surface may be damaged.

2. Infrared Rangers: An infrared beam is transmitted from the robot and its reflection is captured; the intensity of the reflection indicates whether an obstacle is in the robot's path. However, the availability of infrared rangers is low, and interference from daylight and other light sources is extremely high.

3. Ultrasonic Rangers: These work by transmitting an ultrasonic pulse (well above the human hearing range) and measuring the time it takes to "hear" the pulse echo.

Considering the availability, cost, applicability, and accuracy of each method, we decided to use SRF04 ultrasonic rangers.


Technical Specification of the SRF04 Ultrasonic Ranger

Operating Voltage      | 5 V
Typical Current Rating | 30 mA
Max Current Rating     | 50 mA
Frequency              | 40 kHz
Range                  | 3 cm to 3 m
Input Trigger          | 10 us min. TTL-level pulse
Weight                 | 0.4 oz
Size                   | 0.75" w x 0.625" h x 0.5" d

According to the Datasheet the Pin diagram of the SRF 04 Module is as follows.

The Timing diagram associated with SRF 04 is as follows.


The practical procedure of detecting an obstacle is as follows. The transceiver module is powered through the 5 V supply pin and the 0 V ground pin. A trigger pulse of 10 us width is fed to the trigger input to activate the ranger. After the trigger pulse, the ultrasonic transmitter sends a sonic burst consisting of eight 40 kHz pulses and immediately raises the echo output to the logic-high state. When the receiver detects the returning echo, the echo output is lowered again, so the travel time of the sonic burst equals the duration of the echo pulse. The distance to the obstacle can then be calculated as follows:

Total distance = velocity of sound in air x time taken by the burst to return to the sensor

We can assume that the velocity of the prototype is negligible with respect to the velocity of sound in free air, so the distance to the obstacle is found by dividing the total distance by two.


The basic requirement of the hardware implementation was therefore to measure the width of the echo pulse. We used a separate microcontroller for this purpose, for the following reasons:
- The microcontroller that controls the encoders and the pulse width modulation units already occupies all three hardware timers available in the 16F877 PIC, and a timer is needed to measure the pulse width.
- The ultrasonic sensor does not directly affect the line-following behaviour, so it should not interfere with the core functionality.
- The core system was already deployed and under testing, so it was preferable to develop a separate component and attach it to the core module rather than modify the core module.

3.2.5.1 Development of the Ultrasonic Distance Measuring Component
The component deals with two major issues:
1. It avoids collisions with walls and other obstacles.
2. It prevents the prototype from falling down steps in its way.
We implemented the circuit on a proto board and tested it successfully. Since we used a multiple-microcontroller design, we needed an inter-microcontroller communication mechanism; generally I2C, hardware USART, or software USART can be used. We started with I2C and ran some prototype-level experiments using two 16F877 microcontroller units, but the I2C experiments were not successful to the desired level. We therefore decided to use the software USART, because the hardware USART was already in use in Microcontroller 1. We developed MikroC code for software USART communication and tested the prototype successfully.

3.2.5.2 Master device
#include "built_in.h"

unsigned long data;

void main() {
    data = 0xFFFF;
    TRISB = 0xA0;                         // RB7 (soft UART RX) and RB5 (handshake in) as inputs
    PORTB = 0;
    Soft_Uart_Init(PORTB, 7, 6, 2400, 0); // RX = RB7, TX = RB6, 2400 baud
    PORTB.F4 = 1;                         // signal the slave: transfer starting
    Soft_Uart_Write(hi(data));            // send the high byte

    while (1) {
        if (PORTB.F5) {                   // slave acknowledged the high byte
            Soft_Uart_Write(lo(data));    // send the low byte
            PORTB.F4 = 0;
            break;
        }
    }
}

3.2.5.3 Slave Device


#include "built_in.h"

unsigned long data = 0;
unsigned short recOK = 0;   // read-error flag for Soft_Uart_Read
                            // (the original code used an uninitialized pointer here)

void main() {
    TRISD = 0;                            // PORTD: output (shows the high byte)
    TRISB = 0x90;                         // RB7 (soft UART RX) and RB4 (handshake in) as inputs
    PORTB = 0;
    TRISC = 0;                            // PORTC: output (shows the low byte)
    PORTC = 0;
    PORTD = 0;

    Soft_Uart_Init(PORTB, 7, 6, 2400, 0); // RX = RB7, TX = RB6, 2400 baud

    while (1) {
        if (PORTB.F4) {                   // master signalled: transfer starting
            do {
                hi(data) = Soft_Uart_Read(&recOK);   // receive the high byte
            } while (recOK);              // retry while the read reports an error

            PORTD = hi(data);

            Soft_Uart_Write(data);        // echo back
            PORTB.F5 = 1;                 // acknowledge the high byte

            do {
                lo(data) = Soft_Uart_Read(&recOK);   // receive the low byte
            } while (recOK);

            PORTC = lo(data);
        }
    }
}


5. Discussion
The ultimate goal of i-Android is to develop an efficient navigation mechanism using computer vision. However, we used a hybrid mechanism to gain higher accuracy and flexibility, rather than relying purely on either the software approach or the hardware approach.

Problems encountered and solutions

Over-sensitivity of the Infrared Sensors
The infrared receivers we used were too sensitive; they even responded to daylight. We therefore designed the circuit with low base currents for the transistors, which reduced the effect of the detectors' over-sensitivity.

The Dissimilar Nature of the Two Motors
This was one of the most critical problems we faced. The motors ran at different speeds under identical operating conditions, mainly because of the high friction of one motor unit. This caused a large, and unpredictable, variation between the speeds of the two motors, so a correction method was needed. We countered this problem using a lookup table, under the guidance of our supervisor.

Unavailability of Quality Components in the Local Market
We could not find some components compatible with our requirements in the local market. In particular, we could not obtain infrared receiver modules by part number, since local vendors do not use part numbers. This caused some design problems: we had to build test prototypes and measure the parameters ourselves instead of relying on datasheets.

High power rating of the motors
The motors available to us had a much higher current rating than expected. The ratings were not marked on the motors, so we measured them with a digital multimeter. Because of the high currents, we had to use high-current driver devices to control the motors. We first used the L298 in single-channel mode and burnt some of the integrated circuits; moving to the L298 in parallel-channel mode with large heat sinks solved the problem in an acceptable manner.

Drain current of the motor controller integrated circuits
As mentioned earlier, we experienced a problem when interfacing the motor control module with the microcontroller: the microcontroller behaved in an unexpected manner. We had to isolate the two with a proper buffering mechanism; the ULN2803 driver integrated circuit fixed this problem successfully.

Inter-integrated-circuit communication
We had to use multiple microcontrollers to achieve the desired functionality, which required either I2C- or USART-based inter-chip communication. Both were new to us, and few resources were available. We first tried I2C on a prototype built on a proto board but failed; the available resources only demonstrated communication between a PIC and EEPROMs. We therefore moved to a software USART approach.

Advantages of i-Android
Higher accuracy
Since i-Android backs up its computer-vision system with sensor-based mechanisms, higher reliability can be expected.

Usability in high-power applications
We used DC motors rather than steppers to drive the robot. DC motors can generally provide higher power and torque than stepper motors.


Use as a learning platform
i-Android can be used as a learning platform by students who wish to do research in robot motion, and to test applications based on line following and object following.

Administrative control (client application)
The controlling software includes a client application that gives the user administrative control, such as switching the robot between automatic and manual mode. This is very important if the robot malfunctions due to a hardware failure.


6. Conclusion
Robotics and automation systems and technologies are advancing rapidly within computer engineering. Much of this research is carried out in developed countries with access to modern high-end technologies, and such systems have proved highly advantageous in educational, military, and industrial applications. In developing countries like Sri Lanka, however, progress in robotics is relatively slow, owing to factors such as low exposure to modern technologies and unaffordable costs; robotics-based systems are largely limited to research work carried out in universities.

Undergraduate students possess sufficient skills to carry out research in these areas, and those skills could even be used to develop large-scale commercial robot systems, but sufficient support and technical expertise are not yet provided. The main target of i-Android, the Line Following Robot, is to provide a basis for improving robotics-based systems in the application scenarios mentioned above. Although this was carried out as a research project on line-following robots, its results will be valuable both in future research and in industry-related applications. Engineering undergraduates in computer technology in particular should pay more attention to this field, and universities should give them sufficient support and motivation.

Line-following robots can be used in many practical applications such as baggage carriers and shopping carts. The i-Android platform can also serve as a learning platform for research into the traversal of robot systems, which is an important use of the system; the Calibration Console, for instance, lets users study the control of motors in the traversing mechanism.

In conclusion, it should be emphasized that the i-Android Line Following Robot platform can be very useful in research work as well as in industrial applications.


7. Future Enhancements
During the development of the line follower we identified several useful features. Because of time and human-resource limitations we could not implement them, but we list them here as future enhancements.

Adding side sensors
There is a possibility of the robot's sides colliding with walls and other obstacles, since the computer vision system only watches the path in the forward direction. Because the safety requirements for the proposed devices are high, it is useful to avoid collisions of all types, including side collisions. The most suitable method is ultrasonic sensors, which produce sufficiently accurate results at short range against any sort of reflecting surface; however, the system should use a suitable multiplexing mechanism to avoid false alarms.

Adding tilt sensors
Our current mechanism has no protection against slopes. For heavy devices with a high centre of gravity, tilting may cause problems and safety issues, so adding tilt sensors, with appropriate modifications to the core algorithm, is suggested.

Adding a path-recording mechanism
The basic idea of our prototype is to follow a given white line. If there is a clear path from an origin to a destination, there is clearly a path in reverse as well, so if the robot memorizes the path it followed it can derive the reverse path without much effort. Adding this functionality requires a proper mechanism to record the path; an efficient dynamic data structure may solve this problem.


Object-following concept
According to our research, this is an advanced concept: a properly programmed system may identify and extract useful features from the images and use those characteristics to follow an object. Examples include a baggage carrier that follows a passenger in an airport, or an automated car that follows the car ahead in a traffic jam.


8. Abbreviations
USART - Universal Synchronous/Asynchronous Receiver/Transmitter
PIC - Peripheral Interface Controller
PWM - Pulse Width Modulation
IR - Infrared
API - Application Programming Interface
TTL - Transistor-Transistor Logic
MOSFET - Metal Oxide Semiconductor Field Effect Transistor


9. References
[1] Gordon McComb, Myke Predko, "Robot Builder's Bonanza", Third Edition.

[2] Priyank Patil, "Line Following Robot", Department of Information Technology, K. J. Somaiya College of Engineering, Mumbai, India.

[3] Bill Green, "Canny Edge Detection Tutorial", http://www.pages.drexel.edu/%7Eweg22/can_tut.html [Accessed: 10th December 2007]

[4] "Hough Transform", http://homepages.inf.ed.ac.uk/rbf/HIPR2/hough.htm [Accessed: 21st December 2007]

[5] "Open Source Computer Vision Library Reference Manual", Intel Corporation.

[6] L298 Datasheet, http://www.st.com/stonline/products/literature/ds/1773.pdf [Accessed: 2nd December 2007]

[7] PIC16F877 Datasheet, http://www.cs.indiana.edu/csg/hardware/B442_inv_pend/30292b.pdf [Accessed: 15th October 2007]

[8] Randy Charles Morin, "C++ And Threads".
