
Project Seminar Report

On

KINEMATICAL AUTONOMOUS LEGGED BOT


(K.A.L.B)

Submitted By:
Syed Abraruddin, Roll: 160418733020
Saneelah Nazish, Roll: 160418733009
Areeba Mohiuddin, Roll: 160418733004

Under the guidance of:


Dr. Uma N. Dulhare
Professor, Department of Computer Science & Engg.

Department of Computer Science & Engineering.


Muffakham Jah College of Engineering and Technology
2021

ABSTRACT

People need robots for dangerous, repetitive and high-precision work. Robots perform tasks in
hostile environments that are impossible for humans, while also carrying out repetitious tasks
with speed and accuracy. Robots are employed in roles ranging from cleaning up dangerous
waste and chemical spills to disarming bombs and protecting soldiers in the field.
K.A.L.B (Kinematical Autonomous Legged Bot) is a cost-efficient robot that simulates specific
human activities by adapting to the dynamic environment with little-to-no supervision. It uses
sensors to perceive the world around it, and then employs decision-making structures to take
the optimal next step based on its data and mission. Built upon this vision sensor system are
autonomous object tracking, SLAM, and obstacle avoidance and navigation. This means that
K.A.L.B can analyse its surroundings in real-time, create navigational maps, plot its
destination, and avoid obstacles. Coupled with human posture and face recognition tracking,
K.A.L.B is capable of following its owner and darting around obstructions.

SYSTEM REQUIREMENTS:
The hardware requirements are:
Set of 12 motors
Camera
RP Lidar
Raspberry Pi as processor
Pressure sensor

The software requirements are:
Ubuntu as OS
ROS as middleware for the robot
Python
YOLOv5
OpenCV
ROS navigation, 3D mapping
Gazebo & RViz for simulations
ROS controller

INDEX

S.NO  TOPIC

1  Certificate

2  Abstract

3  Index

4  List of Figures

5  Introduction
   5.1 Robotics and Automation
   5.2 Problem Statement
   5.3 Objective
   5.4 Organization of Report

6  Literature Survey
   6.1 SPOT (Boston Dynamics)
   6.2 Mini Cheetah (MIT)
   6.3 CyberDog (Xiaomi)

7  Proposed Work
   7.1 YOLOv5
   7.2 OpenCV
   7.3 ROS (Robot Operating System)
   7.4 Features
   7.5 Applications

8  Work Flow
   8.1 Flow Chart
   8.2 Project Timeline

9  References

LIST OF FIGURES
5.1.1 Humanoid Robot
5.1.2 Mars Rover
5.1.3 Human-Robot Connection
5.1.4 Automatic Vacuum Cleaner
6.1.1 Spot
6.1.2 Specs of Spot
6.2 MIT Bot
6.3 CyberDog
7 Proposed Work
7.1.1 YOLOv5 Logo
7.1.2 Graph
7.1.3 Table of Weights
7.1.4 YOLOv5 Output
7.2.1 OpenCV Logo
7.2.2 Detection Algorithm
7.2.3 Face Detection
7.3.1 ROS Logo
7.3.2 ROS Navigation Stack
7.3.3 ROS Mapping
7.3.4 ROS Obstacle Avoidance
8.1 Flow Chart
8.2 Timeline

INTRODUCTION

5.1 Robotics & Automation:


Robotics is an interdisciplinary branch of computer science and engineering. Robotics involves
design, construction, operation, and use of robots. The goal of robotics is to design machines
that can help and assist humans. Robotics integrates fields of mechanical engineering, electrical
engineering, information engineering, mechatronics, electronics, bioengineering, computer
engineering, control engineering, software engineering, mathematics, etc.
Robotics develops machines that can substitute for humans and replicate human actions.
Robots can be used in many situations for many purposes, but today many are used in
dangerous environments (including inspection of radioactive materials, bomb detection and
deactivation), manufacturing processes, or where humans cannot survive (e.g. in space,
underwater, in high heat, and clean-up and containment of hazardous materials and radiation).

Fig 5.1.1: Humanoid Robot

Robots can take on any form, but some are made to resemble humans in appearance. This is
claimed to help in the acceptance of robots in certain replicative behaviors which are usually
performed by people. Such robots attempt to replicate walking, lifting, speech, cognition, or
any other human activity. Many of today's robots are inspired by nature, contributing to the
field of bio-inspired robotics.
Certain robots require user input to operate while other robots function autonomously. The
concept of creating robots that can operate autonomously dates back to classical times, but
research into the functionality and potential uses of robots did not grow substantially until the
20th century. Throughout history, it has been frequently assumed by various scholars,
inventors, engineers, and technicians that robots will one day be able to mimic human behavior
and manage tasks in a human-like fashion. Today, robotics is a rapidly growing field; as
technological advances continue, researching, designing, and building new robots serves
various practical purposes, whether domestically, commercially, or militarily. Many robots are
built to do jobs that are hazardous to people, such as defusing bombs, finding survivors in
unstable ruins, and exploring mines and shipwrecks. Robotics is also used in STEM (science,
technology, engineering, and mathematics) as a teaching aid.

Fig 5.1.2: Mars Rover
Autonomous robots operate independently of human operators. These robots are usually
designed to carry out tasks in open environments that do not require human supervision. They
are quite unique because they use sensors to perceive the world around them, and then employ
decision-making algorithms to take the optimal next step based on their data and mission. An
example of an autonomous robot would be the Roomba vacuum cleaner, which uses sensors to
roam freely throughout a home.
Robotics relies on computer science, particularly for computer vision. Computer vision is the
science and technology of machines that see. As a scientific discipline, computer vision is
concerned with the theory behind artificial systems that extract information from images. The
image data can take many forms, such as video sequences and views from cameras.

Fig 5.1.3: Human-Robot Connection

In most practical computer vision applications, the computers are pre-programmed to solve a
particular task, but methods based on learning are now becoming increasingly common.
Computer vision systems rely on image sensors which detect electromagnetic radiation,
typically in the form of either visible light or infra-red light. The sensors are designed using
solid-state physics. The process by which light propagates and reflects off surfaces is explained
using optics. Sophisticated image sensors even require quantum mechanics to provide a
complete understanding of the image formation process. Robots can also be equipped with
multiple vision sensors to better compute the sense of depth in the environment. Like human
eyes, robots' "eyes" must also be able to focus on a particular area of interest, and adjust to
variations in light intensities.
Automation describes a wide range of technologies that reduce human intervention in
processes. Human intervention is reduced by predetermining decision criteria, subprocess
relationships, and related actions — and embodying those predeterminations in machines.
Automation covers applications ranging from a household thermostat controlling a boiler, to a
large industrial control system with tens of thousands of input measurements and output control
signals. Automation has also found space in the banking sector. In control complexity, it can
range from simple on-off control to multi-variable high-level algorithms.
In the simplest type of automatic control loop, a controller compares a measured value of a
process with a desired set value and processes the resulting error signal to change some input
to the process, in such a way that the process stays at its set point despite disturbances. This
closed-loop control is an application of negative feedback to a system. The mathematical basis
of control theory was begun in the 18th century and advanced rapidly in the 20th.

Fig 5.1.4: Automatic Vacuum Cleaner
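
To make the closed-loop idea concrete, the short Python sketch below simulates a
proportional controller correcting a process value toward its set point despite a constant
disturbance; all the numbers are illustrative assumptions, not parameters of any real system.

# Minimal sketch of negative-feedback (closed-loop) control.
# All values are illustrative assumptions, not real plant parameters.

def simulate(setpoint=25.0, kp=0.8, disturbance=-0.5, steps=20):
    value = 20.0                        # initial measured process value
    for _ in range(steps):
        error = setpoint - value        # compare measurement with the set value
        control = kp * error            # controller output from the error signal
        value += control + disturbance  # process responds; disturbance persists
    return value

print(simulate())  # settles near the set point, with the small steady-state
                   # offset typical of purely proportional control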

5.2 Problem Statement
People need robots for dangerous, repetitive and high-precision work. Robots
perform tasks in hostile environments that are impossible for humans, while also carrying out
repetitious tasks with speed and accuracy. Robots are employed in roles ranging from cleaning
up dangerous waste and chemical spills to disarming bombs and protecting soldiers in the field.
The key to the development of the robot K.A.L.B was the need for agile mobile robots that
could adapt to dynamic tasks and provide service to mankind.

5.3 Objective
To build a cost-efficient robot that simulates specific human activities by adapting to the
dynamic environment with little-to-no supervision.

5.4 Organization of Report


The report is organized into 5 parts.
Part 1 consists of the introduction which gives us the basic background like the importance of
the field, existing issues and current trends. This is followed by the problem statement and the
objective of the project.
Part 2 consists of the literature survey, which examines the future scope, limitations and
performance of comparable work, and gives a brief description of the technology and tools
that are used for the project.
Part 3 surveys the available technologies for solving the problem at hand. It covers the
existing problem and the proposed solution along with the technical specifications.
Part 4 is the plan of the work or the project timeline and the preparation of diagrams.
Part 5 gives us the references that provide citations of sources of information.

LITERATURE SURVEY

6.1 SPOT (Boston Dynamics)


Boston Dynamics is an American engineering and robotics design company founded in 1992
as a spin-off from the Massachusetts Institute of Technology. Headquartered in Waltham,
Massachusetts, Boston Dynamics has been owned by the Hyundai Motor Group since
December 2020, with the acquisition completed in June 2021.
Boston Dynamics is best known for the development of a series of dynamic, highly mobile
robots, including BigDog, Spot, Atlas, and Handle. Since 2019, Spot has been made
commercially available, making it the first commercially available robot from Boston
Dynamics, while the company stated its intent to commercialize other robots as well, including
Handle.
On June 23, 2016, Boston Dynamics revealed the four-legged canine-inspired Spot which only
weighs 25 kg (55 pounds) and is lighter than their other products.
Spot is an agile mobile robot that navigates terrain with unprecedented mobility, allowing you
to automate routine inspection tasks and data capture safely, accurately, and frequently.

Fig 6.1.1

Specification: Spot Explorer | Spot Enterprise

Tablet work: Yes | Yes
Autowalk: 1000 m in length | Unlimited mission lengths
WiFi: 2.4 GHz b/g/n | Dual band 802.11ac support (availability dependent on region)
Payload power: Always on | Toggle via software in tablet or API
Self-charging: No | Includes a dock
Data offload: No | Offloads mission data through dock Ethernet connectivity
Metrics opt-out: No | Opt-out of sending robot performance metrics back to Boston Dynamics
Enhanced safety options: No | Safety stop function PLd category 3 per ISO 13849-1 available on payload ports
Price: $74,500 (usually shipped within six to eight weeks) | Not available

Fig 6.1.2: Specs of Spot

6.2 Mini Cheetah (MIT)
MIT’s new mini cheetah robot is springy and light on its feet, with a range of motion that rivals
a champion gymnast. The four-legged powerpack can bend and swing its legs wide, enabling
it to walk either right-side up or upside down. The robot can also trot over uneven terrain about
twice as fast as an average person’s walking speed.
Weighing in at just 20 pounds — lighter than some Thanksgiving turkeys — the limber
quadruped is no pushover: When kicked to the ground, the robot can quickly right itself with a
swift, kung-fu-like swing of its elbows.
Perhaps most impressive is its ability to perform a 360-degree backflip from a standing
position. Researchers claim the mini cheetah is designed to be “virtually indestructible,”
recovering with little damage, even if a backflip ends in a spill.

Fig 6.2

6.3 Cyber Dog (Xiaomi)
CyberDog is driven by Xiaomi's self-developed servomotors for superior speed, agility and a
wide range of movements. With a maximum torque output of 32 N·m and rotation speeds of
up to 220 rpm, CyberDog can perform a variety of high-speed movements at up to 3.2 m/s
and complex actions such as backflips.
CyberDog's brain is the NVIDIA® Jetson Xavier™ NX platform, an AI supercomputer for
embedded and edge systems that includes 384 CUDA® cores, 48 Tensor cores, 6 Carmel
ARM CPU cores, and two deep learning acceleration engines. With this, CyberDog handles
the large amount of data captured from its sensor system without any problems.
To fully model a real creature, CyberDog is equipped with 11 precision sensors that provide
immediate feedback to guide its movements, including touch sensors, cameras, ultrasonic
sensors, GPS modules and more, enabling it to sense, analyze and interact with its
environment.

Fig 6.3

PROPOSED WORK

Fig 7: Proposed work (robot with camera and lidar)
The proposed work will look more or less like the figure above. In it, the computer science
part is to make the robot sense the environment, understand it, give commands to the
actuators, and provide the user interface. The tasks of the robot can thus be divided into three
parts: (a) sensing the environment, (b) actuating the actuators, and (c) the user interface. The
first part is achieved with the deep learning algorithm YOLOv5, the second with ROS (Robot
Operating System) and the third with OpenCV.

7.1 YOLOv5
YOLO, an acronym for 'You Only Look Once', is an object detection algorithm that divides
images into a grid system. Each cell in the grid is responsible for detecting objects within itself.
YOLO is one of the most famous object detection algorithms due to its speed and accuracy.
There are some premade weights with more or less 72 classes, and users can also train
YOLOv5 on custom objects and detect them. In this project, the environment is sensed with
the help of these weights. Once the environment is sensed, the robot is able to make
decisions.

Fig 7.1.1: YOLOv5 Logo
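
As a minimal sketch of this sensing step (not the project's exact code), a pretrained
checkpoint can be loaded through PyTorch Hub and run on a single camera frame;
'frame.jpg' is a placeholder path, and internet access is assumed for the first download.

# Sketch: object detection with a pretrained YOLOv5 checkpoint.
import torch

model = torch.hub.load('ultralytics/yolov5', 'yolov5s')  # small pretrained weights
results = model('frame.jpg')           # placeholder path to a camera frame

results.print()                        # console summary of detected classes
detections = results.pandas().xyxy[0]  # columns: xmin, ymin, xmax, ymax, confidence, class, name
print(detections[['name', 'confidence']])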

Fig 7.1.2

Pretrained Checkpoints:

Fig 7.1.3

YOLOv5 has some library dependencies; from its requirements.txt, they are:
matplotlib>=3.2.2
numpy>=1.18.5
opencv-python>=4.1.2
Pillow>=7.1.2
PyYAML>=5.3.1
requests>=2.23.0
scipy>=1.4.1
torch>=1.7.0
torchvision>=0.8.1
tqdm>=4.41.0
tensorboard>=2.4.1
pandas>=1.1.4
seaborn>=0.11.0
# coremltools>=4.1 # CoreML export
# onnx>=1.9.0 # ONNX export
# onnx-simplifier>=0.3.6 # ONNX simplifier
# scikit-learn==0.19.2 # CoreML quantization
# tensorflow>=2.4.1 # TFLite export
# tensorflowjs>=3.9.0 # TF.js export
# openvino-dev # OpenVINO export
# albumentations>=1.0.3
# Cython # for pycocotools
# pycocotools>=2.0 # COCO mAP
# roboflow
thop # FLOPs computation

Fig 7.1.4

7.2 OpenCV
OpenCV (Open Source Computer Vision Library) is an open source computer vision and
machine learning software library. OpenCV was built to provide a common infrastructure for
computer vision applications and to accelerate the use of machine perception in commercial
products. Being a BSD-licensed product, OpenCV makes it easy for businesses to utilize and
modify the code.

Fig 7.2.1: OpenCV Logo

The library has more than 2500 optimized algorithms, which includes a comprehensive set of
both classic and state-of-the-art computer vision and machine learning algorithms. These
algorithms can be used to detect and recognize faces, identify objects, classify human actions
in videos, track camera movements, track moving objects, extract 3D models of objects,
produce 3D point clouds from stereo cameras, stitch images together to produce a
high-resolution image of an entire scene, find similar images from an image database, remove
red eyes from images taken using flash, follow eye movements, recognize scenery and
establish markers to overlay it with augmented reality, etc. OpenCV has a user community of
more than 47 thousand people and an estimated number of downloads exceeding 18 million.
The library is used extensively in companies, research groups and by governmental bodies.
Along with well-established companies like Google, Yahoo, Microsoft, Intel, IBM, Sony,
Honda and Toyota that employ the library, there are many startups such as Applied Minds,
VideoSurf, and Zeitera that make extensive use of OpenCV. OpenCV's deployed uses span
the range from stitching streetview images together, detecting intrusions in surveillance video
in Israel, monitoring mine equipment in China, helping robots navigate and pick up objects at
Willow Garage, detecting swimming pool drowning accidents in Europe, running interactive
art in Spain and New York, and checking runways for debris in Turkey, to inspecting labels
on products in factories around the world and rapid face detection in Japan.

Fig 7.2.2: Detection Algorithm
It has C++, Python, Java and MATLAB interfaces and supports Windows, Linux, Android
and Mac OS. OpenCV leans mostly towards real-time vision applications and takes advantage
of MMX and SSE instructions when available. Full-featured CUDA and OpenCL interfaces
are being actively developed right now. There are over 500 algorithms and about 10 times as
many functions that compose or support those algorithms. OpenCV is written natively in C++
and has a templated interface that works seamlessly with STL containers.

Fig 7.2.3: Face Detection
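
As a minimal sketch of the face detection used for owner-following, the Haar cascade
bundled with opencv-python can be applied to a camera frame; 'frame.jpg' and the detector
parameters below are placeholder assumptions.

# Sketch: face detection with OpenCV's bundled Haar cascade.
import cv2

cascade_path = cv2.data.haarcascades + 'haarcascade_frontalface_default.xml'
face_cascade = cv2.CascadeClassifier(cascade_path)

frame = cv2.imread('frame.jpg')                   # placeholder camera frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)    # cascades operate on grayscale
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:                        # box each detected face
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite('faces.jpg', frame)
print('%d face(s) detected' % len(faces))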

7.3 ROS
The Robot Operating System (ROS) is a set of software libraries and tools that help you build
robot applications. From drivers to state-of-the-art algorithms, and with powerful developer
tools, ROS has what you need for your next robotics project. And it's all open source. ROS is
released as distributions, also called "distros", with more than one ROS distribution supported
at a time. Some releases come with long term support (LTS), meaning they are more stable
and have undergone extensive testing. Other distributions are newer with shorter lifetimes,
but with support for more recent platforms and more recent versions of their constituent ROS
packages. See the distributions list for more details. Generally a new ROS distro is released
every year on World Turtle Day, with LTS distros being released in even years.

Fig 7.3.1: ROS Logo

(i) ROS Navigation stack


The ROS Navigation package is used to move a robot from a start position to a goal position
without colliding with the environment. The ROS Navigation package comes with
implementations of several navigation-related algorithms which can easily help implement
autonomous navigation in mobile robots.

Fig 7.3.2

The user needs to feed in the goal position of the robot along with the robot odometry data
from sensors such as wheel encoders, and other sensor data streams such as laser scanner data
or a 3D point cloud from sensors like Kinect. The output of the Navigation package is the
velocity commands which drive the robot to the given goal position.

The Navigation stack contains implementations of the standard algorithms, such as SLAM,
AMCL, and so on, which can directly be used in our application.
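
As a hedged sketch of how a goal is fed to the stack, a goal pose can be sent to the standard
move_base action server from Python under ROS 1; the coordinates below are placeholders
in the map frame.

#!/usr/bin/env python
# Sketch: send one navigation goal to move_base (ROS 1).
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node('send_goal')
client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
client.wait_for_server()                       # wait for the Navigation stack

goal = MoveBaseGoal()
goal.target_pose.header.frame_id = 'map'
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = 2.0         # placeholder goal coordinates
goal.target_pose.pose.position.y = 1.0
goal.target_pose.pose.orientation.w = 1.0      # no rotation at the goal

client.send_goal(goal)                         # the stack now emits velocity
client.wait_for_result()                       # commands toward the goal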

(ii) SLAM (Simultaneous Localization and Mapping)


The ROS gmapping package is a wrapper around an open-source implementation of SLAM
(Simultaneous Localization and Mapping) from OpenSLAM. The package contains a node
called slam_gmapping, which implements SLAM and helps create a 2D occupancy grid map
from the laser scan data and the mobile robot pose.

Fig 7.3.3

The basic hardware requirement for doing SLAM is a laser scanner which is horizontally
mounted on the top of the robot, and the robot odometry data. In this robot, we have already
satisfied these requirements. We can generate the 2D map of the environment using the
gmapping package.
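
Once slam_gmapping is running, other nodes can consume the map it publishes; a minimal
sketch, assuming the standard /map topic and ROS 1, is:

#!/usr/bin/env python
# Sketch: listen to the 2D occupancy grid built by slam_gmapping.
import rospy
from nav_msgs.msg import OccupancyGrid

def on_map(msg):
    info = msg.info                  # grid metadata: size and resolution
    rospy.loginfo('Map %dx%d cells at %.3f m/cell',
                  info.width, info.height, info.resolution)

rospy.init_node('map_listener')
rospy.Subscriber('/map', OccupancyGrid, on_map)
rospy.spin()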

(iii) OBSTACLE AVOIDANCE IN ROS


Image processing and ROS are integrated in order to detect the objects that come in the path
of the robot. Using this feature, the bot will decide its own path to reach the destination; the
robot will plot physical objects in order to overcome the obstacles. The objects which come
in the path of the robot can be mapped virtually in RViz and displayed on a monitor.

Fig 7.3.4: ROS Obstacle Avoidance
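
A simplified sketch of this behaviour, using the conventional /scan and /cmd_vel topic names
(defaults, not project-specific values), stops the robot whenever the laser reports an obstacle
closer than a threshold:

#!/usr/bin/env python
# Sketch: reactive stop-on-obstacle using the laser scanner (ROS 1).
import rospy
from sensor_msgs.msg import LaserScan
from geometry_msgs.msg import Twist

STOP_DISTANCE = 0.5  # metres; placeholder threshold

def on_scan(scan):
    cmd = Twist()    # zero velocities by default, i.e. stop
    valid = [r for r in scan.ranges if scan.range_min < r < scan.range_max]
    if valid and min(valid) > STOP_DISTANCE:
        cmd.linear.x = 0.2           # creep forward while the path is clear
    pub.publish(cmd)

rospy.init_node('obstacle_stop')
pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
rospy.Subscriber('/scan', LaserScan, on_scan)
rospy.spin()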

7.4 Features
MAPPING
The Robot will map its environment using ROS and gmapping, integrated with image
processing.
HIGH MOBILITY
12 motors are installed to make the robot highly flexible & stable.
OBSTACLE AVOIDANCE
The Robot will plot physical objects & avoid them.
LONG LIFE
High-current batteries will increase the wake time of the Robot.
ALTER ORIENTATION
The Robot will be able to invert its limb joints in a very tight space where it cannot spin
around.

Special features
INVERSE LIMBS
The bot will be able to invert its limbs in areas where it is not possible to spin around.
ADEQUATE SIZE
The bot will be of ample size so that it has the ability to manoeuvre around tight places.
COST EFFECTIVE
Our bot will be made at low cost without compromising its quality.

7.5 Applications
TUNNEL SURVEY
To survey hazardous tunnels without any supervision.
HABITUAL TASKS
To complete set tasks in a repetitive manner with high precision.
GUIDE
To lead people to their desired destination in a facility.
INDUSTRY USAGE
To monitor the status of work in the industry.
MINING TASKS
To scan mining areas.

WORK FLOW

8.1 Flow chart

Fig 8.1: Flow chart (the camera and RP-Lidar feed the Raspberry Pi, which runs the
algorithms and simulation and drives the motors)

8.2 Project Timeline

Fig 8.2

REFERENCES
Boston Dynamics: https://www.bostondynamics.com/products/spot
MIT: https://news.mit.edu/mit-mini-cheetah
XIAOMI: https://blog.mi.com/xiaomi-launches-cyberdog
YoloV5: https://github.com/ultralytics/yolov5
ROS: http://wiki.ros.org/Documentation
Open-CV: https://opencv.org/

