
INTRODUCTION

1.1 Project Overview


Humans have learned to survive better through their inventions. A large number of
robots have replaced humans in various fields, especially in dangerous workplaces. A
human being can detect a field of interest, identify a target and shoot it, but it would be
far more convenient if a robot could do the same thing, faster and with more accuracy.
An autonomous robot is a soldier that never sleeps. This project provides a modern
approach to surveillance of remote and border areas and focuses on using technology to
create a robot that can automatically find targets, identify them and shoot them down.
This type of robot is essential for detecting and engaging unwanted targets in areas
where it would not be possible or safe for a human being to operate. Hence, the primary
application for this device is in the defence sector. It could also be marketed and used in
the private sector for security purposes.
The autonomous robot is a camera-based weapons system that uses software to detect and
monitor a target and hardware to aim at and shoot the target. The robot operates in two
modes, autonomous and manual. In autonomous mode, the robot functions on its own without
human interference, whereas in manual mode its actions are controlled by security
personnel. The prototype covers both land and airborne targets by modifying the angle of
its shooting assembly and aligning it with the target to be struck down, and it is capable
of defending against attacks by UAVs (Unmanned Aerial Vehicles).
A research survey found that, so far, no existing approach carries out both target
detection and shooting efficiently in both domains, land and sky. The shooting assemblies
that covered both domains with high performance had a drawback: they were immobile and
covered only a limited range. Some prototypes used LIDAR [21] for object detection and
tracking, but OpenCV [22] and YOLO [23] give more accurate results more efficiently. No
high-accuracy model has yet been developed that gives optimal results in dim light.
Present-day robots cannot operate in unusual circumstances; they can only perform in
special circumstances adapted to their needs. The highest accuracy reported so far is
95.26% for shooting and 89.36% for target detection using LIDAR. A model that can carry
out both tasks efficiently is yet to be developed.
The prototype built here is an autonomous moving device that patrols a pre-programmed
route, automatically identifies its target in a scene, locks onto it, approaches it, and
strikes it down using a firing mechanism. The main goal is to provide an effective
technique to eliminate an uncommon hazard in the environment, both on land and in the
air. The system is a robot pre-programmed to patrol its path in a given area; it
comprises a Raspberry Pi [24] with an attached webcam and a shooting assembly mounted on
top of the robocar. The camera captures real-time frames of the surroundings. Each frame
is fed to the software as input and processed using image-processing techniques such as
YOLO and OpenCV. The target is then identified and locked, and the shooting assembly is
signalled to align with the target by moving along the x and y axes using two servo
motors. If the target is out of shooting range, the assembly also moves closer to it. A
laser attached to the gun helps verify that the shooting assembly points at the
identified target, which increases the accuracy of the system. Finally, the trigger is
pulled by another motor and the target is shot down.
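The following sketch outlines this capture-detect-aim-fire loop in Python with OpenCV.
The helper names detect_target, aim_turret, and fire_trigger are illustrative
placeholders for the detection, servo-alignment, and trigger routines described above,
not the project's actual functions.

```python
# Minimal sketch of the capture-detect-aim-fire loop. The helper names
# (detect_target, aim_turret, fire_trigger) are illustrative placeholders,
# not the project's actual functions.
import cv2

def detect_target(frame):
    """Placeholder: run the YOLO/OpenCV detector, return a box or None."""
    return None

def aim_turret(box, frame_shape):
    """Placeholder: drive the x/y servos until the laser sits on the target."""
    return False

def fire_trigger():
    """Placeholder: actuate the third servo to pull the trigger."""

def main():
    cap = cv2.VideoCapture(0)        # webcam attached to the Raspberry Pi
    try:
        while True:
            ok, frame = cap.read()   # one real-time frame of the surroundings
            if not ok:
                continue
            box = detect_target(frame)
            if box is not None and aim_turret(box, frame.shape):
                fire_trigger()       # fire only once the aim is verified
    finally:
        cap.release()

if __name__ == "__main__":
    main()
```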
The purpose of robots in defence is to increase or improve the existing capabilities of a
soldier while keeping them out of harm's way. This project focuses on using robots for
surveillance of border areas via live-streamed footage monitored by security personnel at
the backend, who can switch to manual mode whenever they need to take over, reducing the
overall risk to human life. By conserving human energy and reducing injury, soldiers can
keep going for longer, reducing downtime, a huge military advantage.
The system's hardware comprises a mechanical structure consisting mainly of openly
available equipment, making it a low-budget device. Future versions of this prototype can
add features such as a 3D camera with night vision, face recognition, and voice-password
recognition, at an increase in overall product cost.

1.2 Need Analysis
Past research has focused on either target detection or shooting; this prototype covers
both in one system. Target detection is carried out by the autonomous robocar, controlled
via a Raspberry Pi, which patrols a pre-programmed path. Once the target is identified and
locked, the shooting assembly mounted on top of the robocar is aimed at the target and the
target is shot down.
The prototype provides a dual mode of operation and is capable of shooting down unwanted
enemy targets present on land or in the sky. The angle of the shooting assembly can be
modified to aim at the target by moving the assembly along the x and y axes using two
servo motors. A laser points at the target, which improves the accuracy of the prototype
by helping verify whether the shooting assembly points at the right target.
The robocar is mobile and patrols a pre-programmed path, with the shooting assembly
mounted on top. It acts as an autonomous soldier that never sleeps. The prototype can be
used in dangerous territory and prevents loss of human life. It also has the advantage
over humans that it can patrol its area 24x7 without stopping, sleeping or getting tired.
The object-detection algorithms used in our model provide results quickly, which enables
real-time object detection and target shooting within a few seconds. The model is made
more efficient by using image-processing techniques such as OpenCV (Open Source Computer
Vision) and YOLO (You Only Look Once) instead of LIDAR (Light Detection and Ranging).
The highest accuracy reported so far is 95.26% for shooting and 89.36% for target
detection using LIDAR; a model that carries out both tasks efficiently is yet to be
developed. Hence, this prototype is a good solution to the problems mentioned above and
meets a current need in this sector.

1.3 Research Gaps
• In reviewing our sample results, we found no existing approach that carries out both
target detection and shooting efficiently in both domains, land and sky.
• The shooting assemblies that covered both domains with high performance had one
drawback: they were immobile and covered only a limited range.
• Some prototypes used LIDAR for object detection and tracking, but OpenCV and YOLO give
more accurate results more efficiently.
• No high-accuracy model has yet been developed that gives optimal results in dim light.

1.4 Problem definition and scope


Design a system capable of protecting the territory from various threats approaching from
land or sky. It should be able to detect, lock onto, and shoot down the target while
patrolling a static environment and checking for suspicious activities.
Project Scope:
1. We are developing a robotic device that autonomously detects its target and shoots it
from an appropriate distance in any hazardous area.

2. Robots developed using previously proposed techniques can follow a path and take
pictures of the surroundings or detect an obstacle, but no technique has been developed
that can locate, track and shoot a target from an acceptable distance.

1.5 Assumptions and constraints

TABLE 1: Assumptions and constraints

1. The robot can successfully track and reach at most two objects per frame, travelling at
roughly 3 to 4 miles per hour in autonomous mode. The gun needs no user control; it aims
and shoots on its own decision, powered by real-time processing.

2. The image-processing algorithm itself is the main weakness in the gun system.
Processing large data sets in real time is very difficult and requires a high clock
frequency as well as large amounts of memory. Ideally, the image-processing algorithms
should run on a multicore processor and the system should be parallelized.

3. The robot works only in a static environment, and the environment may slightly impact
its performance. In weather conditions like fog and rain, accuracy decreases; at night,
image processing becomes difficult due to the unavailability of a night-vision camera.

4. The product patrols a pre-programmed path only. A new path can be programmed when it is
exposed to a new environment.

5. The weight and size of the power supply are constraints. It must be light enough for
the robot's framework to support, so that the robot can move and detect without wasting
power, and it must be placed on the robot in such a way that it does not interfere with
other components and wiring.

6. The weights of all components assembled on the robot should be compatible with each
other; the weight of any one component should not affect the performance of the product.

7. A safety mode should be programmed, allowing the user to navigate the robot manually
without the autonomous detecting, aiming and firing features present on the product. This
ensures that no one is harmed while the product is being showcased.

1.6 Approved Objectives
• To design an autonomous target-locking and shooting robocar capable of detecting a
given target and approaching it to shoot it down.
• To provide a reliable, cost-effective and accurate technique to detect an unusual
environmental hazard using OpenCV (Open Source Computer Vision) and YOLO (You Only Look
Once), and to kill the target with the Raspberry Pi-driven shooting assembly.
• To design a product capable of defending against both land and airborne attacks.
• To develop a shooting assembly that can shoot targets accurately within a range of 3-5
meters.

1.7 Methodology Used


• Goal identification consists mainly of separating targets from their backgrounds. One
or more targets may be present, and they can be handled by tracking signals from one or
more fixed static sensors.
• The product detects the target using image-processing techniques with the help of
computer vision.
• For collision avoidance, an ultrasonic sensor [25] is positioned at the top of the
device. The camera is used for real-time monitoring, and the collected data is analyzed
using OpenCV. This makes the simulation of real-time scenarios algorithmically efficient.
• Object detection and tracking are done by YOLO (You Only Look Once), an architecture
for identifying objects in real time.
• The shooting assembly is built from a Nerf gun whose trigger is attached to a servo
motor programmed through an Arduino. The motor is programmed to apply mechanical force to
the trigger as soon as a target is detected (see the serial-command sketch after this
list).
• A Raspberry Pi 3B+, a small credit-card-sized computer, is used as the processor of
the prototype.
• The robot works in two modes, autonomous and manual.
1. In autonomous mode, the robot is deployed at a secure location and automatically aims
at and shoots the detected target.
2. In manual mode, the product is controlled by security staff.
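The report does not specify how the detection module signals the Arduino that drives the
trigger servo; a common arrangement is a one-byte command over USB serial. The sketch
below illustrates that idea in Python with pyserial; the port name, baud rate, and
'F'/'K' byte protocol are assumptions for illustration only.

```python
# Hypothetical sketch: the Raspberry Pi asks the Arduino to pull the trigger.
# The port name, baud rate, and one-byte 'F'/'K' protocol are illustrative
# assumptions; the report does not specify the wire format.
import time
import serial  # pyserial

def fire(port="/dev/ttyACM0", baud=9600):
    with serial.Serial(port, baud, timeout=1) as link:
        time.sleep(2)        # give the Arduino time to reset after port open
        link.write(b"F")     # assumed command: 'F' = pull the trigger servo
        return link.read(1) == b"K"  # assumed acknowledgement byte

if __name__ == "__main__":
    print("fired" if fire() else "no acknowledgement from Arduino")
```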

1.8 Project outcomes and deliverables
A full-fledged implementation of the proposed technique can be deployed on the
battlefield to help the military in remote areas without risking human lives while
handling battlefield challenges. The technique is based on machine learning with image
processing using OpenCV, Python and other relevant technologies. The final product will
be able to detect and shoot all airborne and land-based targets in its range while also
patrolling a pre-defined path.

1.9 Novelty of work


The project is based on the idea of making things autonomous and more robust in
real-world scenarios. It aims to create an autonomous robot capable of not only detecting
but also eliminating airborne and land-based targets. Techniques proposed in the past
lacked the robustness and accuracy that an autonomous robot must have to work in a
real-world scenario. The proposed technique is designed to be invariant to the weather
conditions prevailing on the battlefield, where the robot must be robust and accurate to
deliver in real time. The ability of our robot to tackle both land and airborne targets
while patrolling a pre-defined path makes our solution unique among the techniques
proposed in this field.

REQUIREMENT ANALYSIS

2.1 Literature Survey


2.1.1 Theory associated with problem area
By patrolling its territory periodically and detecting unexpected land and airborne
attacks, the robot keeps watch over its area. This is done using the Pi camera connected
to the Raspberry Pi. Object detection and collision avoidance are achieved using the
ultrasonic sensor mounted on top of the robot. The camera is used for real-time
monitoring, and the collected data is analyzed using OpenCV, which allows real-time
scenarios to be simulated with algorithmic efficiency. The Raspberry Pi runs the
target-detection code, face recognition, and the assembly-firing routine. Goal
identification is the detection of threats in the course of usual activities; it is
accomplished using the camera attached to the device. The robot records all the
activities and people around it and checks for the presence of threats against its
dataset. Once a threat has been identified, the target is locked in the camera frame and
there is no further camera movement. After the target is locked, the shooting assembly
turns toward it and aligns itself with the help of its attached servo motors. Three servo
motors are used in the assembly: two move the gun along the x and y directions, and the
third pulls the trigger. Firing takes place once the shooting assembly is properly
aligned with the target. After careful aiming, the third servo motor pulls the trigger
and the target is shot down from an appropriate distance.
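To make the alignment step concrete, the following sketch converts the offset between the
frame centre and a detected bounding-box centre into pan and tilt corrections for the two
servos. It is a minimal illustration, not the project's code; the field-of-view constants
are taken from the Pi Camera v2 datasheet, and the (x, y, w, h) box format is an
assumption.

```python
# Sketch of the x/y alignment step: convert the offset between the frame
# centre and the detected bounding-box centre into servo-angle corrections.
# The FOV constants come from the Pi Camera v2 spec; the gains are illustrative.

H_FOV_DEG = 62.2   # assumed horizontal field of view (Pi Camera v2 spec)
V_FOV_DEG = 48.8   # assumed vertical field of view

def angle_correction(box, frame_w, frame_h):
    """Return (pan, tilt) corrections in degrees for an (x, y, w, h) box."""
    cx = box[0] + box[2] / 2.0
    cy = box[1] + box[3] / 2.0
    # Normalised offset in [-0.5, 0.5], then scaled by the camera FOV.
    pan = (cx / frame_w - 0.5) * H_FOV_DEG
    tilt = (0.5 - cy / frame_h) * V_FOV_DEG   # image y grows downwards
    return pan, tilt

# Example: a target centred at (500, 150) in a 640x480 frame
pan, tilt = angle_correction((460, 110, 80, 80), 640, 480)
print(f"pan {pan:+.1f} deg, tilt {tilt:+.1f} deg")
```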

2.1.2 Existing systems and solutions


Earlier, some robots were developed that are pre-programmed to follow a particular route,
capture pictures of the surrounding area, or locate a particular object around them, but
no technology exists that can detect a threat on the ground or in the air, lock onto it,
and shoot it down from a suitable distance. Most robots built so far are used only as
monitoring systems for detecting objects on the ground or in the air. Many work with
techniques such as LIDAR and use static sensors such as infrared sensors for real-time
target detection. Some are immobile robots that can only operate within a certain defined
range, while a few are mobile robots that can detect ground and airborne threats but
cannot fire at targets. During target detection, real-time exposure time is a very
important factor that is unpredictable in most of the prototypes built. A little
inaccuracy is acceptable, but the reaction speed must be high. Imaging instability,
device noise and surface temperature fields have a significant impact on image quality,
which in turn affects the entire operation. A few systems can detect targets either on
land or in the air, while others can aim only from an appropriate distance and at a
particular angle; no system combines patrolling with target detection, target locking and
target shooting in both air and ground domains while operating in two modes, autonomous
and manual. Essentially, the robots built so far all work under specific conditions and
perform specific tasks, and none can operate under unusual conditions.
LiDAR-based Object Detection - Most state-of-the-art object detection techniques rely on
LiDAR to collect accurate details while processing raw LiDAR data in different
representations. Such systems fuse multiple LiDAR representations with the RGB image to
extract denser detail. A structured voxel-grid representation quantizes the raw
point-cloud data, and a 2D or 3D CNN then detects objects; some systems take multiple
frames as input and perform object detection, tracking and motion anticipation
concurrently.
The following are the main differences between USVs (unmanned surface vehicles) and
manned vehicles.
1. The two vehicles' overall motorization is nearly identical. However, USVs must carry
on-board communication peripherals that continuously send different signals (system
status data and sensor signals) to the GCS.
2. The USV should have a ground control system (GCS) for remote operation.
3. Most recent USVs are fitted with functions for intelligent, autonomous navigation and
mission execution. The main military roles of USVs are classified as Intelligence,
Surveillance and Reconnaissance (ISR), Mine Countermeasures (MCM), Anti-Submarine Warfare
(ASW), Special Operations Support, Rescue Operations, and Electronic Warfare.

2.1.3 Literature Survey

TABLE 2: Literature Survey

1. Jiya Midha (401703010). "Multi-UAV Binocular Intersection with One-Shot Communication:
Modeling and Algorithms", 2019.
Tools and technology: UAV binocular vision-aided cooperative target tracking; the
binocular intersection problem.
Findings: Developers achieve multi-UAV binocular intersection by integrating limited
target measurements with an automatic search process. [1]

2. Jiya Midha (401703010). "Research on Automatic Target Detection and Recognition Based
on Deep Learning", 2019.
Tools and technology: background modelling, feature extraction and deep learning methods.
Findings: The author aims to automatically detect and identify targets based on deep
learning with 95.8% accuracy. [2]

3. Jiya Midha (401703010). "Design and Implementation of Image Capture Sentry Gun Robot",
2018.
Tools and technology: LIDAR along with an Arduino.
Findings: A semi-autonomous sentry robot was designed and developed using the Arduino
controller. [3]

4. Jiya Midha (401703010). "Robotor: an Autonomous Vehicle for Target Detection and
Shooting", 2014.
Tools and technology: an Arduino as the microcontroller and a coil gun in the shooting
mechanism.
Findings: An autonomous robot that goes to a remote area and shoots down the goal from an
appropriate distance. [4]

5. Jiya Midha (401703010). "Vision Based Robotic System for Military Applications --
Design and Real Time Validation", 2014.
Tools and technology: object tracking; Sum of Absolute Difference (SAD) algorithm.
Findings: A vision-based robot for military applications. [5]

6. Kritika Ahuja (401703011). "Infrared target detection in backlighting maritime
environment based on visual attention model", 2019.
Tools and technology: image characteristic analysis; Gauss-difference preprocessing
algorithm.
Findings: Infrared is used to identify objects in maritime environments in real time,
based on the concept of visual attention. [6]

7. Kritika Ahuja (401703011). "Dim and small target detection based on feature mapping
neural networks", 2019.
Tools and technology: a feature mapping deep neural network (FMDNN) model suitable for
dim and small target detection, and a new deep-learning method for dim and small target
detection (covering the network model, training and detection).
Findings: The FMDNN differentiates dim and small targets from the background by setting
the dim-target sample label to 1 and the background sample label to 0. [7]

8. Kritika Ahuja (401703011). "Convolutional neural networks of the YOLO class in
computer vision systems for mobile robotic complexes", 2019.
Tools and technology: computer vision systems for mobile robots built with CNNs; CNN
models studied for different use cases of batch normalization and bias, with YOLO-class
CNNs providing the best accuracy and speed of object detection in images.
Findings: A balance is maintained between the precision of object detection in images and
the amount of FPGA resources needed to implement the CNN. [8]

9. Kritika Ahuja (401703011). "Azimuth-Only Estimation for TDOA-based Direction Finding
with Three-Dimensional Acoustic Array", 2018.
Tools and technology: PID controller, IMU sensors.
Findings: The shooter assembly is controlled with sensory noise cancellation based on
data received from different sensors. [9]

10. Kritika Ahuja (401703011). "Perception, Planning, Control, and Coordination for
Autonomous Vehicles", 2017.
Tools and technology: LIDAR.
Findings: Autonomous-vehicle vision, planning, monitoring and coordination. [10]

11. Saksham Gupta (401703023). "Sensor Model Based Preprocessing of 3-D Laser Range Image
Data and Motion Oriented Feature Extraction for Mobile Robot Applications", 2020.
Tools and technology: UAV target-tracking trajectories are formalized and geometrically
modelled so as to compute the maximum allowable focal length per scenario.
Findings: To enhance robustness, the formulas were implemented in practical UAV
intelligent shooting systems. [11]

12. Saksham Gupta (401703023). "A real-time human-robot interaction framework with robust
background invariant hand gesture detection", 2019.
Tools and technology: the OpenPose library integrated with a Microsoft Kinect V2 to
obtain a 3D estimate of the human skeleton.
Findings: A human-robot communication system using static hand gestures and extraction of
3D skeletons. [12]

13. Saksham Gupta (401703023). "Stereo vision based autonomous robot calibration", 2017.
Tools and technology: a two-stage estimation algorithm and a local POE-formulated error
model that make robot calibration completely online and fast.
Findings: A self-calibration technique for the robot based on stereo vision. [13]

14. Saksham Gupta (401703023). "Autonomous Driving of a Mobile Robot Using a Combined
Multiple-Shooting and Collocation Method", 2016.
Tools and technology: the NOCP is efficiently solved by a combined multiple-shooting and
collocation method.
Findings: An NMPC approach is proposed to steer an autonomous mobile robot. [14]

15. Saksham Gupta (401703023). "Sensor Model Based Preprocessing of 3-D Laser Range Image
Data and Motion Oriented Feature Extraction for Mobile Robot Applications", 1988.
Tools and technology: sensor-model-based error-correction methods as used for raw image
processing are discussed.
Findings: The paper concludes with an outlook for the development of advanced logical
sensors. [15]

16. Shiva Thavani (401703025). "An End-to-End Deep Neural Network for Autonomous Driving
Designed for Embedded Automotive Platforms", 2018.
Tools and technology: neural networks; image-processing techniques.
Findings: Ultrasonic sensors, neural networks and image processing are used to detect
objects, lanes and traffic lights. [16]

17. Shiva Thavani (401703025). "You Only Look Once: Unified, Real-Time Object Detection",
2018.
Tools and technology: YOLO (You Only Look Once), object detection, machine learning.
Findings: YOLO achieves much higher precision than other real-time systems with a faster
method for image detection and recognition. [17]

18. Shiva Thavani (401703025). "Road sign recognition system on Raspberry Pi", 2018.
Tools and technology: computer vision, object recognition, machine learning.
Findings: The main focus was to address existing restrictions in road-sign
identification, such as single-colour or single-class road signs, low-light conditions,
fading signs and, most notably, non-standard signs. [18]

19. Shiva Thavani (401703025). "Obstacle Avoidance with Ultrasonic Sensors", 2017.
Tools and technology: image processing, MATLAB, object detection, SONAR.
Findings: An ultrasonic sensor is used to detect obstacles instead of a camera, because
estimating distance from a camera is more complicated and computation-heavy than from
ultrasonic sensors. [19]

20. Shiva Thavani (401703025). "Image Processing Approaches for Autonomous Navigation of
Terrestrial Vehicles in Low Illumination", 2016.
Tools and technology: neural networks; ultrasonic sensors.
Findings: Image processing and neural networks are used, day and night, for vehicle
movement and lane detection. [20]

2.1.4 The Problem that has been identified
Design a system capable of protecting the territory from various threats approaching from
land or sky. It should be able to detect, lock onto, and shoot down the target while
patrolling a static environment and checking for suspicious activities.
Solution: A prototype will be developed to demonstrate and justify the goals set at the
beginning of this project.
For this purpose, we are using machine learning and computer vision algorithms:
1. The metallic robocar guards the territory and patrols a fixed pre-defined path under
the control of an Arduino module.
2. While patrolling, if the robocar comes across an obstacle, it deviates from the fixed
path to avoid a collision and then resumes patrolling the pre-defined path. This
collision detection and avoidance is controlled using an ultrasonic sensor and the
Arduino (see the sketch after this list).
3. While patrolling, if the robocar detects an unusual object through its camera
surveillance, it processes the frame of the detected object and compares it against the
pre-trained machine-learning model to check whether the object poses a threat or
potential risk. If a threat is detected, the information is passed to the shooting
assembly and an alarm is raised to alert the authorities at the base station.
4. When instructions are received from the target-detection module, they are processed,
the gun is loaded and aimed at the target, and finally the shot is fired to eliminate the
threat.
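A minimal sketch of the ultrasonic distance check behind step 2 is shown below, assuming
an HC-SR04-style sensor wired to the Raspberry Pi's GPIO; the pin numbers and the 30 cm
threshold are illustrative assumptions.

```python
# Sketch of the collision-avoidance distance check, assuming an HC-SR04-style
# sensor; the BCM pin numbers and 30 cm threshold are illustrative assumptions.
import time
import RPi.GPIO as GPIO

TRIG, ECHO = 23, 24

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)

def distance_cm():
    GPIO.output(TRIG, True)            # 10 microsecond trigger pulse
    time.sleep(0.00001)
    GPIO.output(TRIG, False)
    start = end = time.time()
    while GPIO.input(ECHO) == 0:       # wait for the echo pulse to start
        start = time.time()
    while GPIO.input(ECHO) == 1:       # ...and to end
        end = time.time()
    return (end - start) * 34300 / 2   # sound travels out and back at 343 m/s

try:
    if distance_cm() < 30:
        print("obstacle ahead: deviate from the pre-programmed path")
finally:
    GPIO.cleanup()
```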

2.1.5 Survey of tools and technologies used


• Robo-Car with motors: A metallic robocar is used as the prototype platform for
demonstrating the autonomous robot with target detection and target shooting. It is
mounted with the microprocessor, the shooting assembly and the object-detection module.
• Raspberry Pi: The Raspberry Pi is a low-cost, credit-card-sized computer that plugs
into a computer monitor or TV and uses a standard keyboard and mouse.
• Motor driver module [L298N]: The L298N is a dual H-bridge motor driver that allows
simultaneous control of the speed and direction of two DC motors (see the sketch after
this list).
• Servo Motors [MG996R]: Servo motors are widely used in industries such as robotics for
high-precision devices. A servo is a self-controlled electrical device that moves part of
a machine with high productivity and precision. In this prototype, servos serve two
purposes: pulling the gun trigger and orienting the firing assembly for targeting.
• Ultrasonic Sensor: An ultrasonic sensor is a device that uses sound waves to measure
the distance to an object. It emits a sound wave at a specific frequency and listens for
the echo of that wave. It is used to detect objects and to prevent collisions.
• Pi-camera Module: The Pi camera module is a compact, lightweight camera supporting the
Raspberry Pi. It communicates with the Pi using the MIPI camera serial interface
protocol. It is used in image classification, machine learning and monitoring projects.
• Arduino Mega: The Arduino Mega is an ATmega1280-based microcontroller module. It has 54
digital input/output pins (of which 14 can be used as PWM outputs), 16 analog inputs, 4
UARTs (hardware serial ports), a 16 MHz crystal oscillator, a USB interface, a power
jack, an ICSP header and a reset button. The Arduino Mega runs the shooting-assembly
functionality.
• Wi-Fi Adapter: Wireless adapters are electronic devices that allow computers to connect
to the Internet and to other computers without wires. Data is sent by radio waves to
routers, which pass it on to modems or internal broadband networks. In this project it
essentially gives the observer remote access.
• Breadboard and Connecting Wires: A breadboard is a solderless board used to build
temporary prototypes of electronic and test circuit designs.
• Li-ion battery: A lithium-ion battery is a type of rechargeable battery, widely used in
portable devices and electric vehicles. It supplies the prototype with power.
• Shooting Assembly: The shooting gun is used for target aiming and shooting. It is
assembled with the Arduino and servo motors for the programmed operation of the module,
and is mounted on the robocar with the rest of the operating modules.
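As referenced in the motor-driver entry above, the following is a minimal sketch of
driving one DC-motor channel of the L298N from the Raspberry Pi; the BCM pin numbers, PWM
frequency and duty cycle are assumptions for illustration.

```python
# Sketch of driving one DC-motor channel of the L298N from the Raspberry Pi.
# The BCM pin numbers, PWM frequency and duty cycle are assumptions.
import RPi.GPIO as GPIO

IN1, IN2, EN1 = 17, 27, 22       # direction pins and the PWM enable pin

GPIO.setmode(GPIO.BCM)
GPIO.setup([IN1, IN2, EN1], GPIO.OUT)
pwm = GPIO.PWM(EN1, 1000)        # 1 kHz PWM on the enable pin sets the speed
pwm.start(0)

def forward(speed_pct):
    GPIO.output(IN1, GPIO.HIGH)  # IN1/IN2 polarity selects the direction
    GPIO.output(IN2, GPIO.LOW)
    pwm.ChangeDutyCycle(speed_pct)

def stop():
    pwm.ChangeDutyCycle(0)

forward(60)                      # patrol at 60% duty cycle
# GPIO.cleanup() should be called on shutdown
```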

The base technique used in this capstone project is object detection, an application of
computer vision. Efficient and accurate object detection has been an important aspect of
the advancement of computer vision systems. At the backend, the project uses YOLO v3 for
image classification, the latest variant of the popular object-detection algorithm YOLO
(You Only Look Once). It is very fast and nearly as accurate as the Single Shot MultiBox
Detector (SSD).
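A minimal sketch of this YOLOv3 backend using OpenCV's DNN module is shown below. The
configuration and weight file names follow the standard Darknet release; the paths and
the 0.5 confidence threshold are assumptions, and non-maximum suppression is omitted for
brevity.

```python
# Sketch of the YOLOv3 backend through OpenCV's DNN module. File names follow
# the standard Darknet release; the paths and threshold are assumptions, and
# non-maximum suppression is omitted for brevity.
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
out_names = net.getUnconnectedOutLayersNames()

def detect(frame, conf_threshold=0.5):
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    boxes = []
    for output in net.forward(out_names):
        for det in output:              # det = [cx, cy, bw, bh, obj, classes...]
            scores = det[5:]
            class_id = int(np.argmax(scores))
            if scores[class_id] > conf_threshold:
                cx, cy, bw, bh = det[:4] * np.array([w, h, w, h])
                boxes.append((int(cx - bw / 2), int(cy - bh / 2),
                              int(bw), int(bh), class_id))
    return boxes
```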

2.2 Standards

TABLE 3: Standards
Standard Number | Scope | Publish Date | Description
IEC 62680-1-2 Ed. 1.0 en:2016 | USB | 21-Oct-08 | Universal serial bus interface for
power and data
CISPR 14-1 Ed. 6.0 B:2016 | International Electrotechnical Commission | 22-Nov-16 |
Requirements for household appliances, electrical tools and similar apparatus that use DC
motors
802.11-2007 | Wireless communications | 12-Jun-07 | IEEE standard for IT,
telecommunications and exchange of information between systems
1118.1-1990 | Microcontroller | 31-Jan-91 | IEEE standard for microcontroller system
serial control bus
P2700/D1.00 | Sensors | 12-Aug-14 | Standard for sensor performance

2.3 SOFTWARE REQUIREMENTS SPECIFICATION

2.3.1 Introduction
2.3.1.1 Purpose
1. To introduce an autonomous moving robot that can detect, approach and shoot down a
specified object.
2. To replace manpower with robots in hazardous settings such as military, surveillance
and border operations. The main goal of robotic programming is to interact with the
electronic hardware and the real world.
3. To provide an efficient, cost-effective and precise technique, using a Raspberry Pi,
to eliminate an unusual hazard in the region.

2.3.1.2 Intended Audience and Reading Suggestions


This project specifically targets improvement of the military defence system. These
devices can provide effective sentries in military settings during peacetime and improve
soldiers' protection during warfare by performing the 3D (dangerous, dirty, dull) duties
that soldiers usually perform. An unmanned robot using AI technology can therefore reduce
manpower efficiently and greatly enhance a military's competitive power.

2.3.1.3 Project Scope


1. We are developing a robotic device that autonomously detects its target and shoots it
from an appropriate distance in any hazardous area.
2. Robots developed using previously proposed techniques can follow a path and take
pictures of the surroundings or detect an obstacle, but no technique has been developed
that can locate, track and shoot a target from an acceptable distance.

2.3.2 Overall Description

2.3.2.1 Product Perspective


The main objective of this project is to make an autonomous robot capable of going to a
remote area, recognizing a target there, following it, and shooting the target once it is
within an acceptable distance. The successful execution of the prototype will reinforce
the positive contribution of the technologies used. Robots can be used to complete work
in perilous zones and to manage troublesome instability levels in such areas. Robots are
gradually becoming vital for civilian applications, for instance urban search and rescue,
as well as military applications. A variety of small robotic applications are now arising
in which robots carry out an assortment of tasks. By and large, robots are still used for
unsafe work that is dangerous for humans, e.g., patrol robots, spy robots, rescue robots,
medical operations and so forth.

2.3.2.2 Product Features


This is an autonomous moving system that automatically finds its target in a scene, locks
onto it, approaches it, and strikes it with a shooting mechanism. The main objective is
to provide a reliable, cost-effective and accurate technique to destroy an unusual threat
in the environment using image processing.

2.3.3 External Interface Requirements

2.3.3.1 User Interfaces


The application will provide an interface page containing the following functionalities:
I. A RIGHT button to capture an image from the right side.
II. A LEFT button to capture an image from the left side.
III. An UP button to capture an image from above.
IV. A DOWN button to capture an image from below.
V. A SHOOT button to command the robot to fire.
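As an illustration only (the report specifies a Java application on Windows in Section
2.3.3.3), the following Python sketch shows one way the five buttons could map to
commands over HTTP; the route names and the send_command placeholder are assumptions.

```python
# Illustrative only: the report specifies a Java application, so this Python
# sketch merely shows the button-to-command mapping over HTTP. Route names
# and the send_command placeholder are assumptions.
from flask import Flask

app = Flask(__name__)

def send_command(cmd):
    """Placeholder: forward the command to the robot (e.g. over serial)."""
    print("command:", cmd)

@app.route("/<cmd>")
def control(cmd):
    # One route per button: left, right, up, down capture images; shoot fires.
    if cmd in ("left", "right", "up", "down", "shoot"):
        send_command(cmd)
        return "ok: " + cmd
    return "unknown command", 404

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```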

2.3.3.2 Hardware Interfaces


The robot takes the form of a buggy that the user can control to patrol the premises
along a pre-mapped path and monitor the surroundings using camera surveillance.

2.3.3.3 Software Interfaces


The application is to be developed under the Windows operating system using the Java
language. The incoming item is the image to be analysed, and the outgoing item is the
command to shoot at that image.

2.3.4 Other Non-functional requirements


2.3.4.1 Performance requirements
If the system can control the robot with efficiency and accuracy, good results and
success can be guaranteed. This system is a good step toward secure surveillance using
robots.
2.3.4.2 Safety requirements
The model ensures that the target is detected without errors and shot down with maximum
accuracy, making sure that no one is harmed except the target itself.

2.3.4.3 Security requirements
The interface keeps user data safe and confidential. More sensors mean more information:
the human factor is replaced by sensors. LiDARs, cameras and other task-specific sensors
(e.g. lane sensors, actuators, accelerometers, detectors, scanners) together mimic what
the human eye sees. LiDAR can generate gigabytes of data in a short amount of time, and
all of this data must arrive at the right place at the right time for processing and
decision-making.

2.4 Cost analysis


TABLE 4: Cost Analysis

Requirements Price per Unit (in Rs)

Raspberry Pi 3 2700

Arduino Board 600

Pi - Camera 500

Ultrasonic Sensor 75

Robo – Car with motors 2600

Servo Motors x 3 [MG996R] 840 (280 each)

Battery x 2 [li-ion battery] 540 (270 each)

Shooting Gun 650

Mounting stand for gun 725

Motor driver module [L298N] 250

Wires and breadboard 400

Wi-Fi adapter 350

TOTAL ₹ 10,230.00

2.5 Risk Analysis
1. The robot might not detect targets at night due to the lack of a night-vision camera.
2. The robot patrols a pre-defined path only; however, the path can be changed to suit
the environment.
3. The weight of the shooting assembly should be within the buggy's threshold to avoid
disturbing the patrolling process.
4. Results may be less accurate in fog and mist, as visibility from the camera decreases
to a great extent.

METHODOLOGY ADOPTED

3.1 Investigative Techniques

TABLE 5: Investigative Techniques

1. Descriptive: Shooting assemblies designed in the past required a high power supply and
had lower accuracy, whereas the shooting mechanism designed by us consumes less power and
is more accurate. Example: shooting assemblies using coil guns.

2. Comparative: Previously invented techniques used LIDAR for object detection, but
models using OpenCV and YOLO were found to give more efficient results. Example:
LIDAR-based object detection models.

3.2 Proposed Solution


This project aims to substitute the soldier (up to a certain extent) in challenging
battlefield conditions, eliminating the risk of loss of life and hence increasing the
efficiency of the force. The project will both detect and shoot targets in the air and on
land.

3.3 Work breakdown structure

Figure 1: Work breakdown structure

3.4 Tools and Technologies Used
Hardware Components:
 Robo-Car with motors
 Raspberry Pi
 Motor driver module [L298N]
 Servo Motors [MG996r]
 Ultrasonic Sensor
 Pi-camera Module
 Arduino Mega
 Wi-Fi Adapter
 Breadboard and Connecting Wires
 Li-ion battery
 Shooting Assembly

Software Components:
 Raspbian OS
 Python
 OpenCV
 YOLO

DESIGN SPECIFICATION

4.1 System Architecture

Figure 2: System Architecture

4.2 Design level diagram

Figure 3: Design level diagram

4.3 User Interface Diagram

Figure 4: User-interface diagram

4.4 Snapshots of working prototype model

Figure 5: Working Prototype

CONCLUSION AND FUTURE DIRECTIONS

5.1 Work Accomplished


TABLE 6: Workplan

5.2 Conclusion
We have created an autonomous miniature car that operates on simple image-processing
techniques and algorithms. The image-processing algorithms used here have many practical
applications and remain one of the most actively researched fields. A step has been taken
to improve the current framework of road-traffic management and achieve much cheaper and
more efficient outcomes than the existing know-how. The algorithm can be further enhanced
by training our dataset with machine-learning algorithms, which can lead to far better
results and efficiency through reduced processing time, faster delivery of outputs, and
up-to-date technology. The algorithm described in this report was successfully
implemented on a small autonomous car.

5.3 Environmental Benefits
Autonomous robots drive innovation and productivity in the supply chain by reducing
direct and indirect operating costs and increasing revenue potential.
Autonomous robots in particular can help:
1. Improve productivity and efficiency.
2. Reduce rates of error, rework, and risk.
3. Improve safety in high-risk work environments for workers.
4. Perform lower-value, mundane tasks so that people can concentrate on more ambitious
projects that cannot be automated.
5. Boost sales by raising perfect-order fulfillment levels, delivery speed and,
eventually, customer satisfaction.
Secondary potential benefits of autonomous robots include:
1. Increased worker satisfaction, by letting workers concentrate on strategic jobs rather
than routine activities.
2. Improved personal safety, by reducing the number of jobs that place workers in
dangerous areas.
3. An improved corporate brand, by signalling cutting-edge activities and the
introduction of innovative technology.
4. Exponential learning through machine data collection and analysis.

5.4 Future Work Plans


When this project reaches the commercial stage, all liabilities will be removed and all
constraints and risks will be eliminated. A night camera with IR vision is recommended
for the production stage of the final product, so that the robocar can serve day and
night and remain an important special asset for the force. An Android app will also be
developed to monitor and control the operations of the robocar, accessible by the
designated security personnel. A solar panel will also be incorporated into the robocar
so that its dependence on the battery is eliminated and it can be deployed in remote
areas without any hassle.
