Robotics

The Merriam-Webster Dictionary (1998) defines robotics as technology dealing with the design, construction, and operation of robots. Robotics encompasses such diverse areas of technology as mechanical, electrical, and electronic systems; computer hardware; and computer software. The Robot Institute of America defines a robot as a programmable, multifunctional manipulator designed to move material, parts, tools, or specialized devices, through variable programmed motions, for the performance of a variety of tasks.

Fields of Technology Involved in the Architecture of Robots

Theory of robots; sensor and transducer technology; motor technology (stepper or DC servo motors); motor drive and control; control theory; power semiconductor drives; microelectronics; digital systems; microprocessors; computer systems and computer interfacing.

General Areas of Robotics

Industrial, hobbyist, show or promotional, domestic or personal, military, educational, and medical.

Classification of Robots

Manual-handling device, fixed-sequence robot, variable-sequence robot, playback robot, numerical control robot, intelligent robot.

History of Robotics

1922: Rossum's Universal Robots. 1954: first programmable robot. 1978: first PUMA robot. 1983: teaching of robotics started.

Advantages of Robots

Robots increase the productivity, safety, efficiency, quality, and consistency of products. They can work in hazardous environments and have capabilities beyond those of humans.

Components of an Industrial Robot

Physical parts or anatomy, built-in instructions or instincts, and learned behavior or task programs.

Physical Parts of an Industrial Robot

Mechanical part or manipulator (body, arm, wrist); end effector (tool or gripper); actuators; controller (sensors, processor); power supply; vehicle (optional).

Robot Anatomy

The manipulator is constructed as a series of joints and links. A joint provides relative motion between its input link and its output link. Each joint provides the robot with one degree of freedom.

Robot Joints

Linear, rotational, twisting, and revolving.

Degrees of Freedom

A point's location in space is specified by three coordinates (P). An object's location in space is specified by the location of a selected point on it (P) and the orientation of the object (R). Six degrees of freedom (P, R) are needed to fully place an object in space and orient it.

Robot Hand Location

The arm joints are used to position the end effector. The wrist joints are used to orient the end effector.

Robot Languages

Robot languages range from machine-level to high-level languages. High-level languages are either interpreter based or compiler based.

Levels of Robot Languages

Microcomputer machine language level, point-to-point level, primitive motion level, structured programming level, task-oriented level.

Industrial Robot Characteristics

Lifting power (payload), reach (workspace), repeatability, reliability, manual/automatic control, memory, library of programs, safety interlocks, speed of operation, cost.

Robot & Automation

Industrial robots are neither as fast nor as efficient as special-purpose automated machine tools. However, they are easily retrained or reprogrammed to perform an array of different tasks, whereas an automated special-purpose machine tool can work on only a very limited class of tasks and is designed to do one task very efficiently.

Choosing among Humans, Robots, and Automation

Some rules can help suggest significant factors to keep in mind. The first rule is known as the Four Ds of Robotics: is the task dirty, dull, dangerous, or difficult? The second rule recalls the fourth law of robotics: a robot may not leave a human jobless. A third rule involves asking whether you can find people who are willing to do the job. A fourth rule is that the use of robots or automation must make short-term and long-term economic sense. A task that has to be done only once or a few times, and is not dangerous, is probably best done by a human. A task that has to be done a few hundred to a few hundred thousand times is probably best done by a flexible automated machine such as an industrial robot. A task that has to be done a million times or more is probably best handled by building a special-purpose hard automated machine to do it.

Robot Industrial Applications

Material handling applications, such as material transfer and machine loading/unloading. Processing applications, such as welding and painting. Assembly. Inspection.

Kinematics

Kinematics is the modeling of the relationship between the positions, velocities, and accelerations of the links of a manipulator. Kinematics concerns the study of the motion of bodies without reference to the forces that cause the motion.

Object Manipulation

Manipulation is the skilful handling and treating of objects: picking them up, moving them, fixing them one to another, and working on them with tools. Before we can program a robot to perform such operations, we require a method of specifying where the object is relative to the robot gripper, and a way of controlling the motion of the gripper.

Kinematic Model

Before a robot can move its hand to an object, the object must be located relative to it. There is currently no simple method for directly measuring the location of a robot hand, so most robots calculate the position of their hand using a kinematic model of their arm.
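To make the idea of a kinematic model concrete, here is a minimal hedged sketch for a hypothetical two-link planar arm: given the joint angles and link lengths, it computes where the hand ends up (the forward-kinematics calculation discussed in the next section). The link lengths and angles are illustrative assumptions, not values from the text.

```python
import math

def planar_2link_forward_kinematics(theta1, theta2, l1=0.4, l2=0.3):
    """Hand (x, y) position and orientation of a hypothetical two-link planar arm.

    theta1, theta2: joint angles in radians; l1, l2: link lengths in metres
    (illustrative values only).
    """
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    phi = theta1 + theta2          # hand orientation in the plane
    return x, y, phi

# Example: both joints at 30 degrees
print(planar_2link_forward_kinematics(math.radians(30), math.radians(30)))
```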

Forward and Inverse Kinematics

Forward kinematics enables us to determine where the robot's hand will be if all joint variables are known. Inverse kinematics enables us to calculate what each joint variable must be if we desire the hand to be located at a particular point and have a particular orientation.

Transformations

Transformations of frames are introduced to make modeling the relocation of objects easier. An object is described with respect to a frame located in the object, and this frame is relocated with a transformation. The transformation is the result of a sequence of rotations and translations, which are recorded in a transformation equation.

The actions of the individual joints must be controlled in order for the manipulator to perform a desired motion. The robot's capacity to move its body, arm, and wrist is provided by the drive system used to power the robot. The joints are moved by actuators powered by a particular form of drive system. Common drive systems used in robotics are electric drive, hydraulic drive, and pneumatic drive.

Types of Actuators

Electric motors (servomotors, stepper motors, or direct-drive electric motors), hydraulic actuators, and pneumatic actuators.

Drive Systems

The drive system determines the speed of arm movement, the strength of the robot, its dynamic performance, and, to some extent, the kinds of application.

Robot Actuator Qualities

Actuators should have enough power to accelerate and decelerate the links and carry the loads, and should be light, economical, accurate, responsive, reliable, and easy to maintain.

Characteristics of Actuating Systems

Weight and power-to-weight ratio; operating pressure; stiffness vs. compliance; use of reduction gears.

Stiffness vs. Compliance

Stiffness is the resistance of a material against deformation. Hydraulic systems are very stiff and noncompliant; pneumatic systems are compliant. The stiffer the system, the larger the load needed to deform it; the more compliant the system, the more easily it deforms under load. Stiffness is directly related to the modulus of elasticity of the material. Stiff systems respond more rapidly to changing loads and pressures and are more accurate. A working balance is needed between these two competing characteristics.
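To put simple numbers on the stiffness/compliance trade-off, here is a minimal sketch; the stiffness values and the load are made-up illustrations, not figures from the text.

```python
def deflection_mm(load_n, stiffness_n_per_mm):
    """Deflection under load: the stiffer the system, the smaller the deflection."""
    return load_n / stiffness_n_per_mm

# Illustrative numbers: a stiff (hydraulic-like) joint vs. a compliant (pneumatic-like) one.
print(deflection_mm(500.0, 2000.0))   # stiff:     0.25 mm under a 500 N load
print(deflection_mm(500.0, 50.0))     # compliant: 10.0 mm under the same load
```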

Use of Reduction Gears

Gears are used to increase torque and reduce speed. Hydraulic actuators can be attached directly to the links; this simplifies the design, reduces the weight, cost, rotating inertia of the joints, backlash, and noise, and increases the reliability of the system. Electric motors are normally used in conjunction with reduction gears to increase their torque and decrease their speed. This increases the cost, the number of parts, backlash, and the inertia of the rotating body, and increases the resolution of the system.

Applications

Electric motors are the most commonly used actuators. Hydraulic systems were very popular for large robots. Pneumatic cylinders are used in on/off type joints, as well as for insertion purposes. Hydraulic and pneumatic drive systems use devices such as linear pistons and rotary vane actuators to accomplish the motion of the joint. Pneumatic drive is typically reserved for smaller robots used in simple material transfer applications. Both electric drive and hydraulic drive are used on more sophisticated industrial robots.

Hydraulic Drives

A hydraulic drive is an electric pump connected to a reservoir tank and a hydraulic actuator. Advantages: precise motion control over a wide range of speeds and loads, robustness, and greater strength. Disadvantages: expensive, high maintenance, not energy efficient, noisy, not suited to clean-air environments.

Pneumatic Drives

Pneumatic drives use air-driven actuators. Advantages: economical, easy installation, less costly than hydraulic drives, good speed and accuracy. Disadvantages: precision is lower than with electric drives (air is compressible), the air needs conditioning, noise, vibration.

Electric Drives

Electric drives are readily adaptable to computer control, the predominant technology used today for robot controllers. Electric drive robots are relatively accurate compared with hydraulically powered robots.
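To make the reduction-gear trade-off described above concrete, here is a small hedged sketch; the gear ratio, efficiency, and motor figures are illustrative assumptions rather than values from the text.

```python
def through_reduction_gear(motor_torque_nm, motor_speed_rpm, ratio, efficiency=0.9):
    """Approximate output torque and speed of a reduction gear stage.

    Torque is multiplied by the ratio (less losses); speed is divided by it.
    All numbers are illustrative, not from the text.
    """
    out_torque = motor_torque_nm * ratio * efficiency
    out_speed = motor_speed_rpm / ratio
    return out_torque, out_speed

# A small motor producing 0.2 Nm at 3000 rpm behind a 50:1 gearbox
print(through_reduction_gear(0.2, 3000, 50))   # -> (9.0 Nm, 60 rpm)
```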

Types: AC servomotors, DC servomotors, stepper motors. Advantages: quiet, less floor space, electric power readily available, suitable for clean-air environments, precision.

Stepper Motors

A stepper motor, unless a step is missed, steps through a known angle each time it is moved. The angular position is therefore always known and no feedback is necessary. Stepper motors come in many different forms and principles of operation.

Stepper Motor Operation

A stepper has multiple windings in its stator and a permanent magnet as its rotor. When each of the coils of the stator is energized, the rotor rotates to align itself with the stator magnetic field. The stepper rotates only when the magnetic field is rotated through its different windings. Each rotation is equal to the step angle (typically between 1.8 and 7.5 degrees). With the opposite on-off sequence, the rotor rotates in the opposite direction.

Stepper Speed-Torque Characteristic

Steppers develop maximum (holding) torque at zero angular velocity. As the speed of the motor increases, the torque it develops falls significantly. Steppers cannot rotate fast: if the incoming signals are too fast, the rotor will miss steps.

L297 / L298 Stepper Motor Driver

This step motor controller uses the L297 and L298N driver combination; it can be used stand-alone or controlled by a microcontroller. It is designed to accept step pulses at up to 25,000 per second. All eight inputs are pulled up to +5 V by RP1 (4.7K). The output driver is capable of driving up to 2 A into each phase of a two-phase bipolar step motor.

L293 Dual Stepper Motor Driver

The circuit consists of three ICs: a PIC16F84 and either two L293D H-bridge drivers for bipolar steppers or two ULN2803s for unipolar steppers, plus a 4 MHz resonator, a 10K pull-up resistor, and some connectors. A pack of 6 x 1.2 V batteries, supplying 7.2 V, is linearly regulated to 5 V to supply the logic voltage, and the raw unregulated power is applied to the 5 V steppers.

L6203 Full-Bridge Driver

The L6203 is a full-bridge driver that can handle peak currents up to 5 A and supply voltages up to 48 V. The chip can run the motor at 4 A continuous with proper heat sinking.

4424 Driver

Direct motor driving with this chip is only possible for motors that draw less than 50 mA under load. TTL/CMOS-compatible 4424 MOSFET driver chips protect the logic chips, isolate electrical noise, and prevent the short circuits inherently possible in a discrete H-bridge. Schottky diodes protect against overvoltage or undervoltage from the motor. Capacitors reduce electrical noise and provide spike power to the driver chips. Pull-up resistors prevent unwanted motor movement while the microcontroller powers up or powers down.
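The stepping behaviour described above (a known angle per step, direction reversed by reversing the coil sequence) can be sketched in software. The fragment below drives a hypothetical two-phase bipolar stepper through a full-step sequence; the write_coils placeholder, pin pattern, and 1.8-degree step angle are illustrative assumptions and are not tied to the L297/L298 or L293 boards described above.

```python
import time

# Full-step excitation sequence for a hypothetical two-phase bipolar stepper
# (coil A+, A-, B+, B-). Reversing the order reverses the direction of rotation.
FULL_STEP_SEQUENCE = [
    (1, 0, 1, 0),
    (0, 1, 1, 0),
    (0, 1, 0, 1),
    (1, 0, 0, 1),
]

STEP_ANGLE_DEG = 1.8  # assumed step angle; real motors range roughly 1.8-7.5 degrees

def write_coils(pattern):
    """Placeholder for hardware access (GPIO, driver board, etc.)."""
    print("coils:", pattern)

def rotate(degrees, step_delay_s=0.005, direction=+1):
    """Rotate by an angle by issuing the correct number of steps.

    Because each step moves a known angle, position is known without feedback
    (open loop), provided no steps are missed.
    """
    steps = int(round(abs(degrees) / STEP_ANGLE_DEG))
    sequence = FULL_STEP_SEQUENCE if direction > 0 else FULL_STEP_SEQUENCE[::-1]
    for i in range(steps):
        write_coils(sequence[i % len(sequence)])
        time.sleep(step_delay_s)   # too short a delay and the rotor misses steps

rotate(90)                  # a quarter turn: 50 steps at 1.8 degrees/step
rotate(45, direction=-1)    # reverse direction by reversing the sequence
```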

A sensor is an electronic device that converts a physical phenomenon (temperature, pressure, humidity, etc.) into an electrical signal. Sensors in robotics are used both for internal feedback control and for external interaction with the outside environment.

Desirable Features of Sensors

Accuracy, precision, operating range, speed of response, calibration, reliability, cost, ease of operation.

Potentiometers

The general idea is that the device consists of a movable tap between two fixed ends. As the tap is moved, the resistance changes. The resistance between the two ends is fixed, but the resistance between the movable part and either end varies as the part is moved. In robotics, pots are commonly used to sense and tune position for sliding and rotating mechanisms.

Switch Sensors

Switches are the simplest sensors of all. They work without processing, at the electronics level. Switches measure physical contact. Their general underlying principle is that of an open vs. closed circuit: if a switch is open, no current can flow; if it is closed, current can flow and be detected.

Principle of Switch Sensors

Contact sensors detect when the sensor has contacted another object. Limit sensors detect when a mechanism has moved to the end of its range. Shaft encoder sensors detect how many times a shaft turns by having a switch click (open/close) every time the shaft turns.

Shaft Encoding

Shaft encoders measure the angular rotation of an axle, providing position and/or velocity information. To detect a complete or partial rotation, we have to mark the turning element. This is usually done by attaching a round disk to the shaft and cutting notches into it. A light emitter and detector are placed on either side of the disk, so that as a notch passes between them the light passes and is detected; where there is no notch in the disk, no light passes. Usually, many notches are cut into the disk, and the light pulses reaching the detector are counted.

Encoder

An alternative to cutting notches in the disk is to paint the disk with black and white wedges and measure the reflectance. In this case, the emitter and the detector are on the same side of the disk.
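Returning to potentiometers as position sensors, the hedged sketch below converts a raw analog-to-digital reading from a pot into a joint angle; the read_adc placeholder, the 10-bit resolution, and the 300-degree travel are illustrative assumptions, not details given in the text.

```python
ADC_MAX = 1023          # assumed 10-bit converter
POT_TRAVEL_DEG = 300.0  # assumed electrical travel of the potentiometer

def read_adc():
    """Placeholder for the real analog-to-digital conversion."""
    return 512

def joint_angle_deg():
    """Map the pot's resistance (via the ADC value) linearly onto an angle."""
    return read_adc() / ADC_MAX * POT_TRAVEL_DEG

print(round(joint_angle_deg(), 1))   # roughly mid-travel, about 150 degrees
```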

In either case, the output of the sensor is a wave function of the light intensity. This can then be processed to produce the speed, by counting the peaks of the waves. Shaft encoding thus measures both position and rotational velocity: velocity is obtained by taking the difference between position readings over each time interval. Velocity tells us how fast a robot is moving, or whether it is moving at all.

Shaft Encoders

There are multiple ways to use this measurement: encode the speed of a driven (active) wheel, or use a passive wheel dragged by the robot to measure forward progress. We can combine the position and velocity information to do more sophisticated things, such as (1) moving in a straight line or (2) rotating by an exact amount. Note, however, that doing such things is quite difficult, because wheels tend to slip and slide (effector noise and error) and there is usually some slop and backlash in the gearing mechanism. Shaft encoders can provide feedback to correct the errors, but some error is unavoidable.

Ultrasonic Sensors

Ultrasonic sensors are widely used for several reasons: they are very cheap compared with other types of detectors, they have relatively good sensitivity, and they are available in different shapes. Ultrasonic sensors measure the distance or presence of target objects by sending a pulsed ultrasound wave at the object and then measuring the time for the sound echo to return. Knowing the speed of sound, the sensor can determine the distance to the object.
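Returning to shaft encoders, the position-and-velocity computation described above can be sketched in a few lines: count pulses, convert counts to an angle, and difference successive readings over the sampling interval. The pulses-per-revolution figure and the read_encoder_count placeholder are illustrative assumptions.

```python
PULSES_PER_REV = 360          # assumed notches per revolution
DEG_PER_PULSE = 360.0 / PULSES_PER_REV

def read_encoder_count():
    """Placeholder for the hardware pulse counter."""
    return 0

def encoder_velocity_deg_per_s(prev_count, new_count, dt_s):
    """Estimate angular velocity from the change in pulse count over dt_s seconds."""
    return (new_count - prev_count) * DEG_PER_PULSE / dt_s

# Example: 90 extra pulses observed over 0.5 s -> 180 degrees per second
print(encoder_velocity_deg_per_s(1000, 1090, 0.5))
```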

Ultrasonic Distance Sensing

Ultrasound sensing is based on the time-of-flight principle. The emitter produces a pulse of sound, which travels away from the source and, if it encounters a barrier, reflects from it and returns to the microphone. The amount of time it takes for the sound to come back is measured and used to compute the distance the sound traveled. The sound wave travels at an essentially constant speed, which varies slightly with ambient temperature. At room temperature, sound travels at about 1.12 feet per millisecond.
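A hedged sketch of the time-of-flight calculation: the measured round-trip echo time is halved (out and back) and multiplied by the speed of sound. The echo time used is a made-up example.

```python
SPEED_OF_SOUND_FT_PER_MS = 1.12   # approximate value at room temperature

def sonar_distance_ft(echo_time_ms):
    """Distance to the target from the round-trip echo time.

    The sound travels out and back, so only half the time counts
    toward the one-way distance.
    """
    return SPEED_OF_SOUND_FT_PER_MS * echo_time_ms / 2.0

# Example: an echo arriving 18 ms after the ping -> roughly 10 ft away
print(sonar_distance_ft(18.0))
```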

Ultrasonic Sensor Applications

Long sonar readings can be very inaccurate, as they may result from false rather than accurate reflections; for example, a robot approaching a wall at a steep angle may not see the wall at all, and collide with it. Nevertheless, sonar sensors have been used successfully for very sophisticated robotics applications, including terrain and indoor mapping, and remain a very popular sensor choice in mobile robotics. Ultrasound is also used in a variety of other applications; the best known is ranging in submarines, where the sonars have much more focused, longer-range beams. Simpler and more mundane applications include automated tape measures, height measures, and burglar alarms.

Light Sensors

Light sensors measure the amount of light impacting a photocell, which is basically a resistive sensor. The resistance of a photocell is low when it is brightly illuminated and high when it is dark. Light sensors can measure light intensity (how light/dark it is), differential intensity (the difference between photocells), and break-beam events (a change or drop in intensity).

Optical Sensors

Optical sensors consist of an emitter and a detector. Depending on the arrangement of the emitter and detector relative to each other, we get two types of sensor: reflective sensors (the emitter and the detector are next to each other, separated by a barrier; objects are detected when light is reflected off them and back into the detector) and break-beam sensors (the emitter and the detector face each other; objects are detected if they interrupt the beam of light between the emitter and the detector). In reflective optical sensors the emitter is usually a light-emitting diode (LED) and the detector is usually a photodiode or phototransistor. A light bulb in combination with a photocell can make a break-beam sensor.

Light Reflective Sensors

Light reflectivity depends on the color (and other properties) of a surface. It may be harder (less reliable) to detect darker objects this way than lighter ones. In the case of object distance, lighter objects that are farther away can seem closer than darker objects that are nearer. What can be done with light reflectivity? Object presence detection, object distance detection, surface feature detection (finding/following markers or tape), wall/boundary tracking, rotational shaft encoding (using encoder wheels with ridges or black and white wedges), and bar code decoding.
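As a hedged sketch of the simplest of these uses, object presence detection, the fragment below thresholds a reflective-sensor reading; the read_reflectance placeholder and the threshold value are illustrative assumptions (and, as the calibration discussion below explains, ambient light should really be subtracted out first).

```python
PRESENCE_THRESHOLD = 600   # assumed raw reading above which an object is "seen"

def read_reflectance():
    """Placeholder for reading the detector (higher = more reflected light)."""
    return 0

def object_present():
    """Crude presence detection: enough reflected light means something is close.

    Dark or distant objects reflect less light, so this can miss them;
    subtracting an ambient-light reading (see calibration below) helps.
    """
    return read_reflectance() > PRESENCE_THRESHOLD

print(object_present())
```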

Light Sensor Calibration

A major source of noise in light sensors is ambient light. The best approach is to subtract the ambient light level from the sensor reading, in order to detect the actual change in reflected light rather than the ambient light. This is done by taking two readings of the detector, one with the emitter on and one with it off, and subtracting the two values from each other. The result is the ambient light level, which can then be subtracted from future readings. This process is called sensor calibration.

Break-Beam Sensors

Any pair of compatible emitter-detector devices can be used to produce such a sensor, for example: an incandescent flashlight bulb and a photocell, red LEDs and visible-light-sensitive phototransistors, or infrared (IR) emitters and detectors.

Infrared Sensors

Infrared sensors are a type of light sensor that functions in the infrared part of the frequency spectrum. IR sensors are active sensors: they consist of an emitter and a receiver. IR sensors are used in the same ways as visible-light sensors: as break-beams and as reflectance sensors. IR is preferable to visible light in robotics applications because it suffers somewhat less from ambient interference, because it can be easily modulated, and simply because it is not visible.

Voice Recognition

Voice recognition involves determining what is said and taking an action based on the perceived information. Voice recognition systems generally work on the frequency content of the spoken words. Any signal may be decomposed into a series of sines and cosines of different frequencies at different amplitudes. It is assumed that every word (or letter), when decomposed into its constituent frequencies, will have a unique signature composed of its major frequencies, which allows the system to recognize the word. The user must train the system by speaking the words a priori to allow the system to create a lookup table of the major frequencies of the spoken words. When a word is spoken and its frequencies determined, the result is compared with the lookup table; if a close match is found, the word is recognized. A universal system that recognizes all accents and variations in speaking may not be either possible or useful. For better accuracy, it is necessary to train the system with more repetitions; the more accurately the frequencies must match, the narrower the allowable variations. This means that if the system tries to match many frequencies for better accuracy, then in the presence of any noise or variation in the spoken words it will not be able to recognize the word. On the other hand, if a limited number of frequencies is matched in order to allow for variations, the system may confuse the word with other similar words. Many robots have been equipped with voice recognition systems in order to communicate with their users. In most cases, the robot is trained by the user, and it can recognize words that trigger a certain action in response.

When the voice-recognition system recognizes a word, it sends a signal to the controller, which in turn runs the robot as desired.

Voice Synthesizers

Voice synthesis is accomplished in two different ways. One is to recreate each word by combining phonemes and vowels; this can be accomplished with a commercially available phoneme chip and a corresponding program. Although this type of system can reproduce any word, it sounds unnatural and machine-like. The alternative is to record the words that the system may need to synthesize and to access them from memory or tape as needed. Although this system sounds very natural, it is limited: as long as all the words that the machine needs to say are known a priori, this system can be used.

Vision

Vision is the ability to see and recognize objects by collecting the light reflected off these objects into an image and processing that image. Robot vision makes use of computers or other electronic hardware to analyze visual images and recognize objects of importance in the current application of the robot.

Image

An electronic image is an array of pixels that has been digitized into the memory of a computer. A binary number is stored in each pixel to represent the intensity, and possibly the wavelength, of the light falling on that part of the image.

Manufacturing Tasks

Selecting parts that are randomly oriented on a conveyor; parts identification; limited inspection; visual servoing and navigation.

Classification of Vision Systems

Vision systems may build a two-dimensional or a three-dimensional model of the scene. According to the number of gray levels, images may be binary (black and white), gray-scale, or color (RGB).

Components of Vision Systems

The camera and digitizing hardware, a digital computer, and the hardware and software necessary to interface them.

Image Processing

Image processing relates to the preparation of an image for later analysis and use. It is the collection of routines and techniques that improve, simplify, enhance, or otherwise alter an image.

Image Analysis

Image analysis is the collection of processes in which a captured image that has been prepared by image processing is analyzed in order to extract information about the image and to identify objects or facts about the object or its environment.

Histograms of Images

A histogram is a representation of the total number of pixels of an image at each gray level. Histogram information can help in determining a cutoff point when an image is to be transformed into binary values.

Thresholding

Thresholding is the process of dividing an image into different levels by picking a certain grayness level as a threshold, comparing each pixel value with the threshold, and then assigning the pixel to one of the levels depending on whether the pixel's grayness level is below or above the threshold.

Connectivity Paths

Connectivity establishes whether neighbouring pixels have the same properties, such as belonging to the same region, coming from the same object, or having a similar texture. There are three fundamental connectivity paths for two-dimensional image processing and analysis: +4 or x4 connectivity, H6 or V6 connectivity, and 8 connectivity.

Filtering Techniques

Frequency-related techniques operate on the Fourier transform of the signal; spatial-domain techniques operate on the image at the pixel level.

Noise Reduction

Noise's net effect is a corrupted image that needs to be preprocessed to reduce or eliminate the noise. Systematic noise comes from dirty lenses, faulty electronic components, and low resolution. Random noise is caused by environmental effects or bad lighting.

Noise Reduction Operations

Convolution masks, image averaging, frequency-domain filtering, and median filters.

Convolution Masks

Noise is reduced by using masks. Masks that behave like a low-pass filter attenuate the higher frequencies of an image while leaving the lower frequencies largely unchanged.

Image Averaging

A number of images of the exact same scene are averaged together. This technique is time consuming and is not suitable for operations that are dynamic and change rapidly. It is more effective with an increased number of images and is useful for random noise.
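As a concrete illustration of the histogram and thresholding operations described above, here is a hedged sketch on a tiny gray-scale image stored as nested lists; the pixel values and the simple mean-based cutoff are illustrative choices, not a method prescribed in the text.

```python
# A tiny 4x4 gray-scale image (0 = black, 255 = white); values are made up.
image = [
    [ 12,  15, 200, 210],
    [ 10,  18, 205, 220],
    [ 14, 190, 198, 215],
    [  9,  11,  16, 230],
]

def histogram(img, levels=256):
    """Count how many pixels fall at each gray level."""
    hist = [0] * levels
    for row in img:
        for pixel in row:
            hist[pixel] += 1
    return hist

def threshold(img, cutoff):
    """Binary image: 1 where the pixel is at or above the cutoff, else 0."""
    return [[1 if pixel >= cutoff else 0 for pixel in row] for row in img]

hist = histogram(image)
# Crude cutoff: the mean gray level (a valley in the histogram would be better).
mean_level = sum(level * count for level, count in enumerate(hist)) / (4 * 4)
for row in threshold(image, mean_level):
    print(row)
```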

Frequency Domain

When the Fourier transform of an image is calculated, the frequency spectrum may show a clear frequency for the noise, which in many cases can be selectively eliminated by proper filtering.

Median Filters

In a median filter, the value of a pixel is replaced by the median of the values of the pixels in a mask around the given pixel, sorted in ascending order. A median is the value such that half of the values in the set are below it and half are above it. The median filter is better at eliminating spike-like noise without blurring the object or decreasing the overall sharpness of the image, and the median is independent of the value of any single pixel in the set.

Edge Detection

Edge detection is a class of routines and techniques that operate on an image and result in a line drawing of the image, which requires much less memory to store, is much simpler to process, and saves computation and storage costs. The lines represent changes in values such as cross-sections of planes and intersections of planes. All of the techniques used operate on differences between the gray levels of pixels or groups of pixels, through masks or thresholds.

Region Growing

These are techniques of segmentation. Through these techniques an attempt is made to separate the different parts of an image into components with similar characteristics that can be used in further analysis. Segmentation by regions results in complete and closed boundaries.

Region Growing Techniques

Two approaches are used for region segmentation: region growing by similar attributes, such as gray-level ranges or other similarities, and region splitting into smaller areas by using finer differences.

Image Analysis

Image analysis is a collection of operations and techniques that are used to extract information from images. Among these are feature extraction, object recognition, analysis of position, size, and orientation, and extraction of depth information.

Feature Extraction

In vision applications, distinguishing one object from another is accomplished by means of features that uniquely characterize the object. A feature (area, diameter, perimeter) is a single parameter that permits easy comparison and identification. An important objective in selecting these features is that they should not depend on position or orientation.

Feature Extraction Techniques

The techniques available to extract feature values for two-dimensional cases can be roughly categorized as those that deal with boundary features and those that deal with area features.
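A hedged sketch of the median filter described above, applied with a 3x3 mask to a small gray-scale image; the pixel values are made up, and border pixels are simply copied unchanged to keep the example short.

```python
# Made-up 5x5 gray-scale image with a single bright "spike" of noise.
image = [
    [10, 10, 10, 10, 10],
    [10, 12, 11, 13, 10],
    [10, 11, 255, 12, 10],   # 255 is spike-like noise
    [10, 13, 12, 11, 10],
    [10, 10, 10, 10, 10],
]

def median_filter_3x3(img):
    """Replace each interior pixel by the median of its 3x3 neighbourhood.

    Border pixels are copied unchanged to keep the sketch short.
    """
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            window = sorted(
                img[rr][cc]
                for rr in range(r - 1, r + 2)
                for cc in range(c - 1, c + 2)
            )
            out[r][c] = window[len(window) // 2]   # the median of 9 values
    return out

for row in median_filter_3x3(image):
    print(row)   # the 255 spike is removed without blurring the rest
```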

Object Recognition

The next step in image data processing is to identify the object the image represents. This identification is accomplished using the extracted feature information described above. The recognition algorithm must be powerful enough to uniquely identify the object.

Object Recognition by Features

The features used may include the gray-level histogram and morphological features such as area, perimeter, number of holes, eccentricity, cord length, and moments. The information extracted is compared with prior information about the object, which may be held in a lookup table.

Basic Morphological Features Used for Object Identification

The average, maximum, or minimum gray levels; the perimeter, area, and diameter of an object, the number of holes it has, and other morphological characteristics; the minimum aspect ratio (the ratio of the width to the length of a rectangle enclosing the object); thinness (perimeter^2/area or diameter/area); and moments.

Basic Concepts of Robot Control

Robot Control System Task

The task of a robot control system is to execute the planned sequence of motions and forces in the presence of unforeseen errors. Errors can arise from inaccuracies in the model of the robot, tolerances in the workpiece, static friction in joints, mechanical compliance in linkages, electrical noise on transducer signals, and limitations in the precision of computation.

Controlled Variables

In both Cartesian and joint space, we require precise control of position, velocity, force, and torque.

Robot Control Techniques: Open-Loop Control (Nonservo Control)

Open-loop control uses no feedback. It is a basic form of control suitable for systems with simple loads where tight speed control is not required; there are no position or rate-of-change sensors, and on each axis there is a fixed mechanical stop to set the endpoint of the robot. Such systems are called stop-to-stop or pick-and-place systems. The desired change in a parameter (the joint angles) is calculated, the actuator energy needed to achieve that change is determined, and that amount of energy is applied to the actuator. If the model is correct and there are no disturbances, the desired change is achieved.
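As a hedged illustration of the feature-based recognition described above, the sketch below computes a thinness feature (perimeter^2/area) for a measured blob and compares it against a small lookup table of known objects; the table entries, tolerance, and measured values are all made-up examples.

```python
# Hypothetical lookup table of thinness values for known parts.
KNOWN_OBJECTS = {
    "washer": 28.0,
    "bolt":   55.0,
    "plate":  17.0,
}

def thinness(perimeter, area):
    """Thinness feature: perimeter squared divided by area (dimensionless)."""
    return perimeter ** 2 / area

def recognize(perimeter, area, tolerance=3.0):
    """Return the known object whose thinness is closest, within a tolerance."""
    measured = thinness(perimeter, area)
    best = min(KNOWN_OBJECTS, key=lambda name: abs(KNOWN_OBJECTS[name] - measured))
    if abs(KNOWN_OBJECTS[best] - measured) <= tolerance:
        return best
    return "unknown"

# A blob with perimeter 60 and area 130 -> thinness about 27.7, matches "washer".
print(recognize(60.0, 130.0))
```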

Feedback Control Loop

The rotor position and/or speed is determined from one or more sensors. The position of the robot arm is monitored by a position sensor, and power to the actuator is altered so that the movement of the arm conforms to the desired path in terms of direction and/or velocity. Errors in positioning are corrected.

Feedforward Control

Feedforward control uses a model to predict how much action to take, or how much energy to use. It is used to predict actuator settings for processes where feedback signals are delayed, and in processes where the dynamic effects of disturbances must be reduced.

Adaptive Control

Adaptive control uses feedback to update the model of the process based on the results of previous actions. The measurements of the results of previous actions are used to adapt the process model, correcting for changes in the process and errors in the model. This type of adaptation corrects for errors in the model due to long-term variations in the environment, but it cannot correct for dynamic changes caused by local disturbances.

The robot's transfer function (motor plus linkages) continually changes due to nonlinearities in the robot. These nonlinearities include changing inertial loads, coupling between joints, changes in gravitational torque, gear backlash, shaft eccentricity, mass imbalance, inherent vibrations, and friction. The transfer function of a robot joint/linkage system changes with configuration due to varying inertial and gravitational loads. Unless the control system corrects for these nonlinear dynamics, the response and stability of the joint controllers change with configuration. The dynamic effects that have the most impact on control-loop stability are due to changing mass or configuration: the torque required to balance the gravitational load changes as the configuration of the manipulator changes; the inertias of the robot linkages, as seen by an actuator, change rapidly as the configuration changes; and the inertia changes whenever an object is picked up or put down. Increasing the inertia reduces the open-loop gain (Ko) and shifts the left pole towards the origin, so a critically damped system becomes underdamped: the two poles that were located together on the real axis of the closed-loop transfer function move apart and become complex. Increasing the friction moves the open-loop pole away from the origin, and the closed-loop response becomes overdamped.
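As a hedged sketch of combining the feedforward and feedback ideas introduced above: a hypothetical model predicts the actuator setting for the desired position, and a small feedback term corrects the residual error. The linear model, gain, and example values are illustrative assumptions, not figures from the text.

```python
def model_based_feedforward(target_deg):
    """Hypothetical inverse model: volts needed to hold the joint at an angle."""
    return 0.05 * target_deg        # assumed linear model, illustrative only

def control_command(target_deg, measured_deg, kp=0.02):
    """Feedforward term from the model plus a proportional feedback correction."""
    feedforward = model_based_feedforward(target_deg)
    feedback = kp * (target_deg - measured_deg)
    return feedforward + feedback

# If the model is perfect and the joint is on target, feedback contributes nothing.
print(control_command(45.0, 45.0))   # 2.25 V, purely feedforward
print(control_command(45.0, 40.0))   # 2.35 V, feedback corrects the 5-degree error
```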

Controlling Joint Position

When controlling joint position with a DC motor, a proportional-plus-velocity control law achieves a fast, stable response with minimum error. Error can be minimized by increasing the gain, using a pulse-width-modulated power amplifier, and adding some integral action to the control law.

Control Architectures

A number of different architectures for the navigation and control of autonomous mobile robots (AMRs) have been proposed to cope with the problems of uncertainty and complexity. There are two main groups: model-based architectures and sensor-based architectures. They differ mainly in the sources of information used for planning and controlling the movement of the robot: model-based approaches rely on a stored global map, while sensor-based ones use continuous local data obtained from sensors.

Intelligent Control Systems

In the past two decades, robotics research has concentrated on the development of intelligent navigation and control systems that make effective use of information about the operational space to plan and perform the robot's operation.

Intelligent Control System Properties

1) Interact with the environment and make decisions when things go wrong during the work cycle; 2) communicate with human beings; 3) make computations; 4) operate in response to advanced sensors.

Autonomous Robot Control

The basic task of an autonomous robot is to navigate from an initial position to a desired target position. To achieve the goals of autonomy, an intelligent control system must be designed to manage the robot's operation. The autonomy of robots can range from remote-controlled machines, through program-controlled ones, to completely autonomous mobile robots. An aim of intelligent control research is to develop autonomous systems that can dynamically interact with the real world.
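Returning to joint-level control, here is a hedged sketch of the proportional-plus-velocity control law described above: the motor command combines a term proportional to the position error with a term that opposes the joint velocity (damping). The gains and the toy unit-inertia joint model are illustrative assumptions, not values from the text.

```python
def pv_control_step(q_desired, q, q_dot, kp=20.0, kv=5.0):
    """Proportional-plus-velocity control law for one joint.

    kp multiplies the position error; kv opposes the measured velocity,
    damping the response so it settles quickly without excessive overshoot.
    """
    return kp * (q_desired - q) - kv * q_dot

# Toy simulation of a unit-inertia joint driven by this law (assumed model).
q, q_dot, dt = 0.0, 0.0, 0.01
for _ in range(500):
    torque = pv_control_step(1.0, q, q_dot)
    q_dot += torque * dt        # acceleration = torque / inertia (inertia = 1)
    q += q_dot * dt
print(round(q, 3))              # close to the 1.0 rad target
```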

Robot Force Control

In many applications, a robot must explicitly control the force it applies to the object it is manipulating. The actuators must be controlled to achieve the desired forces.

Joint Torque Control

A force (a vector of three forces and three torques) is controlled in Cartesian space by controlling torques in joint space. From robot statics, the transformation between joint-space torques and Cartesian-space force is the transpose of the manipulator Jacobian. Torque in joint space is controlled by controlling the torque applied by each actuator. Torque can be measured using a sensor (accurate) or calculated from the armature current (simple). Force control using feedback of joint torques is limited by the accuracy of the static model of the manipulator. To obtain accurate control of the force vector at the end effector, a wrist force sensor is placed between the tool plate and the end effector to measure end-effector force. The force transform from the sensor to the end effector is usually simple.

Basically, fuzzy logic is a superset of Boolean logic that deals with degrees of truth (partial truth), closely related to possibility theory: instead of values being strictly 1 (yes) or 0 (no), they can take intermediate degrees. A fuzzy logic controller (FLC) is an intelligent control system that smoothly interpolates between rules. In autonomous systems, tasks are generally performed based on the evaluation of sensor data according to a set of rules or heuristics furnished by a human expert who has learned them from experience or training. Recent research and applications employ non-analytical methods of soft computing such as fuzzy logic and neural networks. Fuzzy logic has proven to be a convenient tool for handling real-world uncertainty and knowledge representation. More detailed discussion can be found in the Fuzzy Logic article on Wikipedia.
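To illustrate how a fuzzy logic controller smoothly interpolates between rules, here is a minimal hedged sketch: two membership functions ("near" and "far") over a distance reading and two rules ("near -> slow", "far -> fast") blended by a membership-weighted average. The breakpoints, speeds, and test distances are made-up examples.

```python
def falling(x, start, end):
    """Membership that is 1 up to start, then falls linearly to 0 at end."""
    if x <= start:
        return 1.0
    if x >= end:
        return 0.0
    return (end - x) / (end - start)

def rising(x, start, end):
    """Membership that is 0 up to start, then rises linearly to 1 at end."""
    return 1.0 - falling(x, start, end)

def fuzzy_speed(distance_cm):
    """Two rules: IF obstacle is near THEN go slow (10); IF far THEN go fast (80).

    The output is the membership-weighted average of the two rule outputs,
    so the commanded speed varies smoothly with the measured distance.
    """
    near = falling(distance_cm, 20, 60)    # fully "near" below 20 cm
    far = rising(distance_cm, 30, 100)     # fully "far" beyond 100 cm
    total = near + far
    if total == 0.0:
        return 0.0                         # defensive: no rule fires
    return (near * 10.0 + far * 80.0) / total

for d in (10, 45, 90):
    print(d, "cm ->", round(fuzzy_speed(d), 1), "speed units")
```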
