Capstonefinall PDF
Page 11 of 39
1.2 Need Analysis
Research conducted in the past has focused on either target detection or shooting, but this prototype covers both in one system. Target detection is carried out by an autonomous robocar controlled via a Raspberry Pi, which patrols a pre-programmed path. Once the target is identified and locked, the shooting assembly mounted on top of the robocar is aimed at the target, and the target is shot down.
This prototype provides a dual mode of functioning and is capable of shooting down unwanted enemy targets present on land or in the sky. The angle of the shooting assembly can be adjusted to aim at the target by moving the assembly along the x- and y-axes using two servo motors. A laser is also used to point at the target; this improves the accuracy of the prototype by verifying whether the shooting assembly is aimed at the right target.
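The pan-and-tilt aiming described above can be sketched as a mapping from the target's pixel offset in the camera frame to the two servo angles. The frame size and field-of-view values below are illustrative assumptions, not measured parameters of the prototype:

```python
# Sketch: convert a detected target's pixel position into pan/tilt servo
# angles for a two-servo aiming assembly. Frame size and FOV are assumed
# example values, not the prototype's actual camera parameters.

FRAME_W, FRAME_H = 640, 480      # assumed camera resolution
HFOV_DEG, VFOV_DEG = 62.2, 48.8  # assumed horizontal/vertical field of view

def aim_angles(target_x, target_y, pan_deg=90.0, tilt_deg=90.0):
    """Return new (pan, tilt) angles that center the target in the frame.

    Servo angles are clamped to the usual 0-180 degree range.
    """
    # Offset of the target from the frame center, as a fraction of the frame.
    dx = (target_x - FRAME_W / 2) / FRAME_W
    dy = (target_y - FRAME_H / 2) / FRAME_H
    # Convert the fractional offset into degrees using the field of view.
    new_pan = pan_deg + dx * HFOV_DEG
    new_tilt = tilt_deg - dy * VFOV_DEG  # image y grows downward
    clamp = lambda a: max(0.0, min(180.0, a))
    return clamp(new_pan), clamp(new_tilt)

# A target at the exact frame center requires no correction:
print(aim_angles(320, 240))  # (90.0, 90.0)
```

In a real assembly, the returned angles would be written to the two servos' PWM channels; the mapping itself is pure geometry.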
The robocar is mobile and patrols a pre-programmed path with the shooting assembly mounted on top. It acts as an autonomous soldier that never sleeps. The prototype can be used where the territory is dangerous, preventing loss of human life, and it has the added advantage over humans of patrolling its area around the clock without stopping, sleeping, or getting tired.
The object detection algorithms used in our model return results quickly, enabling real-time object detection and target shooting within a few seconds. The model is made more efficient by using image processing techniques based on OpenCV (Open Source Computer Vision) and YOLO (You Only Look Once) instead of LIDAR (Light Detection and Ranging).
The highest accuracy reported so far is 95.26% for shooting and 89.36% for target detection using LIDAR. A model that can carry out both tasks efficiently is yet to be developed. Hence, this prototype addresses the problems mentioned above and meets a current need in this sector.
1.3 Research Gaps
1. In reviewing our sample results, we found no novel approach that could carry out both target detection and shooting efficiently in both domains, land and sky. The shooting assemblies that covered both domains with high performance had one drawback: they were immobile and covered only a limited range.
2. Some prototypes used LIDAR for object detection and tracking, but the same tasks performed with OpenCV and YOLO give more accurate results more efficiently.
3. No high-accuracy model has yet been developed that gives optimal results in dim light.
4. Robots developed using previously proposed techniques can follow a path, take pictures of the surroundings, or detect an obstacle, but no technique has been developed that can locate, track, and shoot a target from an acceptable distance.
1.5 Assumptions and Constraints
The image processing algorithm itself is the main weakness in the gun system. For real-time systems, processing large data sets is very difficult and requires a high clock frequency as well as large amounts of memory. Ideally, the image processing algorithms should run on a multicore processor and the system should be parallelized.
The robot works only in a static environment, and the environment may slightly impact its performance. In weather conditions such as fog and rain, accuracy in hitting the target decreases. At night, image processing becomes difficult due to the unavailability of a night-vision camera.
One assumption about the product is that it patrols a pre-programmed path only. A new path can be programmed if it is exposed to a new environment.
The weight and size of the power supply are constraints. It must be light enough for the robot's framework to support, so that the robot can move and detect without wasting power, and it must be placed on the robot in such a way that it does not interfere with other components and wiring.
The weights of all components assembled on the robot should be compatible with each other; the weight of any single component should not affect the performance of the product.
A safety mode should be programmed. The safety mode will allow the user to manually navigate the robot without using the autonomous detecting, aiming, and firing features present on the product. This ensures that others will not be harmed while the product is being showcased.
1.6 Approved Objective
To design an autonomous target-locking and shooting robocar capable of detecting a given target, approaching it, and shooting it down.
To provide a reliable, cost-effective, and accurate technique for detecting an unusual environmental hazard using OpenCV (Open Source Computer Vision) and YOLO (You Only Look Once), and to neutralize the target with the Raspberry Pi shooting assembly.
To design a product capable of defending against both land and airborne attacks.
To develop a shooting assembly that can shoot targets accurately within a range of 3 to 5 meters.
1.8 Project outcomes and deliverables
The full-fledged implementation of the proposed technique can be deployed on the battlefield to help the military in remote areas without risking human lives while handling battlefield challenges. The technique is based on machine learning with image processing using OpenCV, Python, and other relevant technologies. The final product will be able to detect and shoot all airborne and land-based targets in its range while also patrolling a pre-defined path.
REQUIREMENT ANALYSIS
Existing prototypes differ in the sensors they use for real-time target detection. Some use immobile robots that can only operate within a certain defined range, while a few use mobile robots that can detect ground and airborne threats but cannot fire at targets. During target detection, real-time exposure time is a very important factor and is unpredictable in most of the prototypes built. A little inaccuracy is acceptable, but the reaction speed must be high. Imaging instability, device noise, and surface temperature fields have a significant impact on image quality, which further affects the entire operation. A few systems can detect targets either on land or in the air, while others can aim only from an appropriate distance and at a particular angle; there is no system that can patrol while performing target detection, target locking, and target shooting in both air and ground domains and operate in two modes, one autonomous and the other manual. Essentially, the robots built so far all work under a specific condition and perform a specific task; none can operate under all adverse conditions.
LiDAR-based Object Detection - Most state-of-the-art object detection techniques rely on LiDAR to collect accurate detail, processing raw LiDAR data in different representations. More dense detail can be extracted by fusing numerous LiDAR representations with the RGB image. One approach uses an arranged voxel grid depiction to quantize the raw point cloud data and then applies either a 2D or 3D CNN to detect objects, taking numerous frames as input and performing object detection, tracking, and motion anticipation concurrently.
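As a rough illustration of the voxel grid quantization mentioned above, the snippet below bins raw 3D points into fixed-size voxels. The voxel size and the sample point cloud are arbitrary example values, and a real pipeline would feed the resulting grid to a 2D or 3D CNN:

```python
from collections import defaultdict

def voxelize(points, voxel_size=0.5):
    """Quantize raw (x, y, z) points into a sparse voxel grid.

    Returns a dict mapping integer voxel coordinates to the list of
    points falling inside that voxel, mirroring the arranged voxel
    grid depiction that a 2D/3D CNN would consume.
    """
    grid = defaultdict(list)
    for x, y, z in points:
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        grid[key].append((x, y, z))
    return dict(grid)

# Example cloud: the first two points share a voxel, the third does not.
cloud = [(0.1, 0.2, 0.0), (0.3, 0.1, 0.2), (1.2, 0.0, 0.0)]
grid = voxelize(cloud)
print(len(grid))  # 2 occupied voxels
```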
The following are the main differences between USVs and manned vehicles.
1. The total motorization of the two vehicle types is nearly indistinguishable. However, USVs must have on-board communication peripherals that continuously transmit different signals (i.e., system status data and sensor signals) to the GCS.
2. The USV should have a ground control system for remote operation.
3. Most recent USVs are fitted with functions for intelligent, autonomous navigation and mission execution. Intelligence, Surveillance and Reconnaissance (ISR), Mine Countermeasure (MCM), Anti-Submarine Warfare (ASW), special operations support, rescue operations, and electronic warfare are the main roles that USVs perform in the military.
2.1.3 Literature Survey
Each entry lists the reviewer's name and roll number, the paper title and year, the tools and technology, the findings, and the citation.
1. Jiya Midha (401703010). "Multi-UAV Binocular Intersection with One-Shot Communication: Modeling and Algorithms", 2019. Tools and technology: vision-aided cooperative target tracking; the binocular intersection problem. Findings: multi-UAV binocular intersection is achieved through the integration of limited target measurements with an automatic search process. [1]
2. Jiya Midha (401703010). "Research on Automatic Target Detection and Recognition Based on Deep Learning", 2019. Tools and technology: background modelling, feature extraction, and deep learning methods. Findings: the author automatically detects and identifies targets based on deep learning with 95.8% accuracy. [2]
3. Jiya Midha (401703010). "Design and Implementation of Image Capture Sentry Gun Robot", 2018. Tools and technology: LIDAR used along with an Arduino. Findings: a semi-autonomous sentry robot was designed and developed using the Arduino controller. [3]
4. Jiya Midha (401703010). "Robotor an autonomous vehicle for target detection and Shooting", 2014. Tools and technology: an Arduino microcontroller and a coil gun in the shooting mechanism. Findings: an autonomous robot that goes to a remote area and shoots down its goal from an appropriate distance. [4]
5. Jiya Midha (401703010). "Vision Based Robotic System for Military Applications -- Design and Real Time Validation", 2014. Tools and technology: object tracking; the Sum of Absolute Difference (SAD) algorithm. Findings: the author presents a vision-based robot for military applications. [5]
6. Kritika Ahuja (401703011). "Infrared target detection in backlighting maritime environment based on visual attention model", 2019. Tools and technology: image characteristic analysis; a Gauss-difference preprocessing algorithm. Findings: infrared is used to identify objects in maritime environments in real time, based on the concept of visual attention. [6]
7. Kritika Ahuja (401703011). "Dim and small target detection based on feature mapping neural networks", 2019. Tools and technology: a feature mapping deep neural network (FMDNN) model suitable for dim and small target detection, and a new deep learning method for dim and small target detection (covering the network model, training, and detection). Findings: the FMDNN differentiates between dim, small targets and the background by separating background and target, setting the dim target sample label to 1 and the background sample label to 0. [7]
8. Kritika Ahuja (401703011). "Convolutional neural networks of the YOLO class in computer vision systems for mobile robotic complexes", 2019. Tools and technology: CNNs used to build computer vision systems (CVS) for mobile robots; CNN models studied for different use cases of batch normalization and bias. Findings: the best accuracy and speed of detecting objects in images can be provided by CNNs of the YOLO class, balancing detection precision against the amount of FPGA resources needed to implement the CNN. [8]
9. Kritika Ahuja (401703011). "Azimuth-Only Estimation for TDOA-based Direction Finding with Three-Dimensional Acoustic Array", 2018. Tools and technology: a PID controller and IMU sensors. Findings: the shooter assembly is controlled with sensory noise cancellation based on data received from different sensors. [9]
10. Kritika Ahuja (401703011). "Perception, Planning, Control, and Coordination for Autonomous Vehicles", 2017. Tools and technology: LIDAR. Findings: autonomous vehicle vision, planning, monitoring, and coordination. [10]
11. Saksham Gupta (401703023). "Sensor Model Based Preprocessing of 3-D Laser Range Image Data and Motion Oriented Feature Extraction for Mobile Robot Applications", 2020. Tools and technology: UAV target tracking trajectories are formalized and geometrically modelled to compute the maximum allowable focal length per scenario. Findings: to enhance robustness, the formulas were implemented in practical UAV intelligent shooting systems. [11]
12. Saksham Gupta (401703023). "A real-time human-robot interaction framework with robust background invariant hand gesture detection", 2019. Tools and technology: the OpenPose library integrated with a Microsoft Kinect V2 to obtain a 3D estimation of the human skeleton. Findings: a human-robot communication system using static hand gestures and the extraction of 3D skeletons. [12]
13. Saksham Gupta (401703023). "Stereo vision based autonomous robot calibration", 2017. Tools and technology: a two-stage estimation algorithm and a local POE-formulated error model that make robot calibration completely online and fast. Findings: a self-calibration technique for the robot based on stereo vision. [13]
14. Saksham Gupta (401703023). "Autonomous Driving of a Mobile Robot Using a Combined Multiple-Shooting and Collocation Method", 2016. Tools and technology: the NOCP is efficiently solved by a combined multiple-shooting and collocation method. Findings: an NMPC approach is proposed to steer an autonomous mobile robot. [14]
15. Saksham Gupta (401703023). "Sensor Model Based Preprocessing of 3-D Laser Range Image Data and Motion Oriented Feature Extraction for Mobile Robot Applications", 1988. Tools and technology: sensor-model-based error correction methods for raw image processing are discussed. Findings: the paper concludes with an outlook on the development of advanced logical sensors. [15]
16. Shiva Thavani (401703025). "An End-to-End Deep Neural Network for Autonomous Driving Designed for Embedded Automotive Platforms", 2018. Tools and technology: neural networks and image processing techniques. Findings: ultrasonic sensors, neural networks, and image processing are used to detect objects, lanes, and traffic lights. [16]
17. Shiva Thavani (401703025). "You Only Look Once: Unified, Real-Time Object Detection", 2018. Tools and technology: YOLO (You Only Look Once), object detection, machine learning. Findings: YOLO achieves much higher precision than other real-time systems with a faster method for image detection and recognition. [17]
18. Shiva Thavani (401703025). "Road sign recognition system on Raspberry Pi", 2018. Tools and technology: computer vision, object recognition, machine learning. Findings: the main focus was addressing existing restrictions in road sign identification, such as single-color or single-class road signs, low-light conditions, fading signs, and, most notably, non-standard signs. [18]
19. Shiva Thavani (401703025). "Obstacle Avoidance with Ultrasonic Sensors", 2017. Tools and technology: image processing, MATLAB, object detection, SONAR. Findings: an ultrasonic sensor is used to detect obstacles instead of a camera, because finding distance with a camera is more complicated and computation-heavy than with ultrasonic sensors. [19]
20. Shiva Thavani (401703025). "Image Processing Approaches for Autonomous Navigation of Terrestrial Vehicles in Low Illumination", 2016. Tools and technology: neural networks; ultrasonic sensors. Findings: image processing and neural networks are used, day and night, for vehicle movement and lane detection. [20]
2.1.4 The Problem Identified
Design a system capable of protecting territory from threats approaching from land or sky. It should be able to detect, lock, and shoot down the target while patrolling a static environment and checking for suspicious activities.
Solution - A prototype will be developed to demonstrate and justify the goals set at the beginning of this project.
For this purpose, we use machine learning and computer vision algorithms:
1. The metallic robocar guards the territory and patrols a fixed predefined path controlled by the Arduino module.
2. During patrolling, if the robocar comes across an obstacle, it deviates from the fixed path to avoid a collision and then resumes patrolling the predefined path. Collision detection and avoidance are controlled using an ultrasonic sensor and the Arduino.
3. While patrolling, if the robocar detects an unusual object through its camera surveillance, it processes the frame containing the detected object and compares it with the pre-trained machine learning model to check whether the object poses a threat or potential risk. If a threat is detected, the information is passed to the shooting assembly and an alarm is raised to alert the authorities at the base station.
4. When instructions are received from the target detection module, they are processed, the gun is loaded and aimed at the target, and finally the shot is fired to eliminate the threat.
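The four-step patrol flow above can be sketched as a small decision routine. The threat labels, the obstacle distance cutoff, and the confidence threshold below are illustrative assumptions rather than the project's actual code:

```python
# Sketch of the patrol pipeline: obstacle avoidance takes priority,
# then detections are checked against a set of threat classes before
# the shooting assembly is signaled. All labels and thresholds are
# assumed example values.

THREAT_CLASSES = {"person", "drone", "vehicle"}  # assumed threat labels
CONFIDENCE_THRESHOLD = 0.6                       # assumed detector cutoff

def patrol_step(obstacle_distance_cm, detections):
    """Decide the robocar's next action for one patrol iteration.

    detections: list of (label, confidence) pairs from the detector.
    """
    if obstacle_distance_cm < 30:  # too close: deviate from the path first
        return "avoid_obstacle"
    for label, confidence in detections:
        if label in THREAT_CLASSES and confidence >= CONFIDENCE_THRESHOLD:
            return "raise_alarm_and_engage"  # hand off to shooting assembly
    return "continue_patrol"

print(patrol_step(120, [("tree", 0.9)]))   # continue_patrol
print(patrol_step(120, [("drone", 0.8)]))  # raise_alarm_and_engage
print(patrol_step(10, [("drone", 0.8)]))   # avoid_obstacle
```

Keeping the decision logic in one pure function like this also makes the alarm and firing conditions easy to unit-test without any hardware attached.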
Motor driver module [L298N] - The L298N is a dual H-bridge motor driver that enables simultaneous control of the speed and direction of two DC motors.
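Direction control on an L298N channel follows the standard H-bridge input truth table, with speed set by the PWM duty cycle on the enable pin. The sketch below encodes that general behavior as data; the IN1/IN2 naming follows the common convention and is not code from this project:

```python
# Standard single-channel H-bridge direction logic for an L298N:
# IN1/IN2 select direction, and the EN pin's PWM duty cycle sets speed.
DIRECTION_TO_PINS = {
    "forward": (1, 0),
    "reverse": (0, 1),
    "brake":   (1, 1),  # both inputs high: fast motor brake
    "coast":   (0, 0),  # both inputs low: motor free-runs
}

def motor_command(direction, speed_percent):
    """Return (IN1, IN2, duty_cycle) for one L298N channel."""
    in1, in2 = DIRECTION_TO_PINS[direction]
    duty = max(0, min(100, speed_percent))  # clamp PWM duty to 0-100%
    return in1, in2, duty

print(motor_command("forward", 75))  # (1, 0, 75)
```

On the robocar, these tuples would be written to GPIO pins; the table itself is hardware-independent.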
Servo Motors [MG996r] - The servo motor is widely used in industries such as robotics and in high-tech devices. It is a self-controlled electrical device that moves part of a machine with high productivity and high precision. In the prototype it serves two purposes: pulling the shooting gun's trigger, and orienting the firing assembly for targeting.
Ultrasonic Sensor - An ultrasonic sensor is a device that uses sound waves to measure the distance to an object. It measures distance by sending out a sound wave at a specific frequency and listening for that wave to bounce back. It is used to detect objects and prevent collisions.
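The distance computation described above is a one-line formula: the pulse travels to the object and back, so distance is the speed of sound times half the round-trip time. A minimal sketch, assuming HC-SR04-style echo timing:

```python
SPEED_OF_SOUND_CM_S = 34300  # approx. speed of sound in air at 20 degrees C

def echo_to_distance_cm(echo_seconds):
    """Convert an ultrasonic echo round-trip time into distance in cm.

    The sound pulse travels to the object and back, so the one-way
    distance is half of (speed * round-trip time).
    """
    return SPEED_OF_SOUND_CM_S * echo_seconds / 2

# A 2 ms round trip corresponds to roughly 34.3 cm:
print(echo_to_distance_cm(0.002))
```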
Pi-camera Module - The Pi camera module is a compact, lightweight camera supporting the Raspberry Pi. It communicates with the Pi using the MIPI camera serial interface protocol. It is used in image classification, machine learning, and monitoring projects.
Arduino Mega - The Arduino Mega is an ATmega1280-based microcontroller module. It has 54 digital input/output pins (of which 14 can be used as PWM outputs), 16 analog inputs, 4 UARTs (hardware serial ports), a 16 MHz crystal oscillator, a USB interface, a power jack, an ICSP header, and a reset button. The Arduino Mega drives the shooting assembly functionality.
Wi-Fi Adapter - Wireless adapters are electronic devices that allow computers to connect to the Internet and to other computers without wires. Data is sent to routers through radio waves, which pass it on to modems or internal broadband networks. This essentially provides the observer with remote access.
Breadboard and Connecting Wires - A breadboard is a solderless device used for temporary prototyping of electronic circuits and testing circuit designs.
Li-ion battery - A lithium-ion battery is a type of rechargeable battery widely used in portable devices and electric vehicles. It supplies the prototype with power.
Shooting Assembly - The shooting gun is used for target aiming and shooting. It is assembled with the Arduino and servo motors for the programmed operation of the module, and it is mounted on the robo-tank with the rest of the operating modules.
The base technique used in the capstone project is object detection, an application of computer vision. Efficient and accurate object detection has been an important aspect of the advancement of computer vision systems. At the backend, the project uses YOLOv3 for image classification, the latest variant of the popular object detection algorithm YOLO (You Only Look Once). It is very fast and nearly as accurate as the Single Shot MultiBox Detector (SSD).
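YOLOv3 emits many candidate boxes per frame, so a typical pipeline filters them by confidence and applies non-maximum suppression (NMS) before acting on a detection. The pure-Python sketch below illustrates that post-processing step on plain (x, y, w, h, score) tuples; the thresholds are illustrative, and a real pipeline would usually run the pretrained network through OpenCV's dnn module instead:

```python
def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax1, ay1, ax2, ay2 = a[0], a[1], a[0] + a[2], a[1] + a[3]
    bx1, by1, bx2, by2 = b[0], b[1], b[0] + b[2], b[1] + b[3]
    iw = max(0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def postprocess(detections, conf_thresh=0.5, nms_thresh=0.4):
    """Keep confident boxes, then greedily suppress overlapping ones."""
    boxes = [d for d in detections if d[4] >= conf_thresh]
    boxes.sort(key=lambda d: d[4], reverse=True)  # best score first
    kept = []
    for box in boxes:
        if all(iou(box[:4], k[:4]) < nms_thresh for k in kept):
            kept.append(box)
    return kept

raw = [(10, 10, 50, 50, 0.9), (12, 12, 50, 50, 0.8),
       (200, 200, 40, 40, 0.7), (5, 5, 20, 20, 0.3)]
print(len(postprocess(raw)))  # 2: overlapping and low-confidence boxes removed
```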
2.2 Standards
TABLE 3: Standards
Standard Number Scope Publish Date Description
2.3 SOFTWARE REQUIREMENTS SPECIFICATION
2.3.1 Introduction
2.3.1.1 Purpose
1. To introduce an autonomous moving robot that can detect, approach, and shoot down a given object.
2. To replace manpower with robots working in hazardous settings such as military operations, surveillance, and border patrol. The main goal of robotic programming is to interact with the electronic hardware and the real world.
3. To provide an efficient, cost-effective, and precise technique, using a Raspberry Pi, to neutralize an unusual hazard in the region.
2.3.2 Overall Description
2.3.3 External Interface Requirements
2.3.4.3 Security requirements
This interface is safe in terms of user data and confidentiality. More sensors mean more
info. The human factor is replaced by sensors. Lidars, cameras and other task-specific
sensors (e.g. lane sensors, actuators, accelerometers, detectors, cameras, scanners,
devices, lidar) More mimic what the human eye sees. Lidar can generate gigabytes of
data in a short amount of time. All these data must be put in the right place at the right
time for processing and making decisions.
Raspberry Pi 3: ₹2,700
Pi Camera: ₹500
Ultrasonic Sensor: ₹75
TOTAL: ₹10,230.00
2.5 Risk Analysis
1. The robot might not detect targets at night due to the lack of a night-vision camera.
2. The robot patrols a pre-defined path only; however, the path can be changed according to the needs of the environment.
3. The weight of the shooting assembly should be within the buggy's threshold to avoid disturbing the patrolling process.
4. Results may be less accurate in fog and mist, as visibility from the camera decreases to a great extent.
METHODOLOGY ADOPTED
3.3 Work breakdown structure
3.4 Tools and Technologies Used
Hardware Components:
Robo-Car with motors
Raspberry Pi
Motor driver module [L298N]
Servo Motors [MG996r]
Ultrasonic Sensor
Pi-camera Module
Arduino Mega
Wi-Fi Adapter
Breadboard and Connecting Wires
Li-ion battery
Shooting Assembly
Software Components:
Raspbian OS
Python
OpenCV
YOLO
DESIGN SPECIFICATION
4.2 Design level diagram
4.3 User Interface Diagram
CONCLUSION AND FUTURE DIRECTIONS
5.2 Conclusion
We will be able to create an autonomous miniature car that operates on simple image processing techniques and algorithms. The image processing algorithms used here have many practical applications and remain among the most researched fields. A step has been taken to improve the current framework of road traffic management to achieve much cheaper and more efficient outcomes than the existing know-how. The algorithm can be further enhanced by training our dataset with machine learning algorithms, which can lead to far better results and greater efficiency through reduced processing time, faster delivery of outputs, and up-to-date technology. The algorithm described here was successfully implemented on a small autonomous car.
5.3 Environmental Benefits
Autonomous robots drive innovation and productivity in the supply chain by reducing direct and indirect operating costs and increasing revenue potential.
In particular, autonomous robots can help:
1. Improve productivity and efficiency.
2. Reduce rates of error, rework, and risk.
3. Improve worker safety in high-risk work environments.
4. Perform lower-value, mundane tasks so that people can concentrate on more ambitious projects that cannot be automated.
5. Boost sales by raising perfect-order fulfillment levels, delivery speed, and, eventually, customer satisfaction.
Secondary potential benefits of autonomous robots include:
1. Increased worker satisfaction from concentrating on strategic jobs rather than routine activities.
2. Improved personal safety by reducing the number of workers in dangerous areas.
3. An improved corporate brand from signaling cutting-edge activities and introducing innovative technology.
4. Exponential learning through machine data collection and analysis.