
3.2. Camera Management


To get data from the Pi camera, the hostname of the Raspberry Pi, which is set at the time of installing the operating system, is used. The steps followed in this project to get live video transmission are as follows:

1. Open PuTTY and type the hostname or IP address of the Raspberry Pi.

2. Enter the username and password that were saved at the time of installing Raspbian, the operating system of the Raspberry Pi.

3. Enable Wi-Fi by entering the command sudo rfkill unblock wifi.

4. Get the IP address by entering the command ifconfig.

5. Open VNC Viewer and enter this IP address.

6. Insert the Pi camera into the slot located near the processor.

7. Open the Raspberry Pi configuration and check whether the camera is enabled; if it is not, enable it first.

8. After this step, reboot the Pi.

9. Go to the terminal window of Raspbian and check whether the camera is working by entering the required capture command; it takes a picture from which you can confirm whether the camera works (a minimal Python equivalent is sketched after this list).

10. Enter the video height, width and frame rate along with the port number.

11. At the same time, open a media player application on Raspbian to receive data from the camera: open a network stream and enter the line given below, writing the hostname of the Pi in place of raspberrypi: rtsp://raspberrypi:8080/.

Real-time video streaming can now be seen in the VNC Viewer window.
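
As a complement to the terminal-based check in step 9, the following is a minimal Python sketch that captures a test picture to confirm the camera works. It assumes the legacy picamera library; the resolution and output path are illustrative, not taken from the project code.

# Minimal sketch: capture a test image to confirm the Pi camera works.
# Assumes the legacy picamera library is installed and the camera is enabled.
from time import sleep
from picamera import PiCamera

camera = PiCamera()
camera.resolution = (640, 480)       # illustrative resolution
camera.start_preview()
sleep(2)                             # give the sensor time to adjust exposure
camera.capture("/home/pi/test.jpg")  # illustrative output path
camera.stop_preview()
camera.close()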


3.3. Face Recognition

3.3.1. Dataset
A dataset of different people is created; it contains pictures together with their labels. All pictures in the dataset are converted to greyscale because this reduces computational complexity, as shown below in Figure 3.1.

Figure 3.1: Dataset for face recognition
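
As an illustrative aside, the greyscale conversion can be done with OpenCV in a couple of lines; the file names here are hypothetical.

import cv2

img = cv2.imread("dataset/user_1_sample_1.jpg")    # hypothetical file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)       # single-channel greyscale image
cv2.imwrite("dataset/user_1_sample_1_gray.jpg", gray)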

3.3.2. Model Training


3.3.2.1 Detection Phase

For face detection, a well-known algorithm, the Haar cascade classifier, is used here. To train the algorithm, a dataset consisting of two types of images is created: positive pictures and negative pictures. The dataset of positive pictures is shown in Figure 3.2. Positive pictures are those required for our

Figure 3.2: Positive Pictures

objective (faces of persons), while negative pictures contain anything except faces. A graphical user interface (GUI) software is then used to train on this dataset. After training, the model is generated in the form of an XML file; the code used for this is shown in Figure 3.3.

Figure 3.3: Code for face detection
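
The exact code of Figure 3.3 is not reproduced here; the following is a minimal illustrative sketch of Haar cascade face detection with OpenCV, assuming the default frontal-face cascade shipped with OpenCV and camera index 0.

import cv2

# Load the pre-trained frontal-face Haar cascade bundled with OpenCV.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)                        # camera index 0 is an assumption
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Detect faces in the greyscale frame and draw a rectangle around each one.
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
    cv2.imshow("Detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):        # press q to quit
        break
cap.release()
cv2.destroyAllWindows()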

3.3.2.2 Recognition Phase

To train the model, a dataset of images is created and then labelled by giving each image a particular ID. A specific ID is assigned to each user and appended to a NumPy array so that mathematical operations can be performed on it. Following that, the model is trained, and the result is an XML file which is then used to predict a person's identity; the code is shown in Figure 3.4.

Figure 3.4: Code for Generating Dataset
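
A minimal sketch of this labelling and training step is given below. It assumes the opencv-contrib package (which provides cv2.face) and a hypothetical dataset layout in which each file name begins with the numeric user ID, e.g. dataset/1.0.jpg.

import os

import cv2
import numpy as np

DATASET_DIR = "dataset"                          # hypothetical directory of greyscale face images

faces, ids = [], []
for filename in os.listdir(DATASET_DIR):
    gray = cv2.imread(os.path.join(DATASET_DIR, filename), cv2.IMREAD_GRAYSCALE)
    if gray is None:
        continue
    user_id = int(filename.split(".")[0])        # numeric ID encoded in the file name
    faces.append(gray)
    ids.append(user_id)

# IDs are appended to a NumPy array so the recogniser can operate on them.
recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.train(faces, np.array(ids))
recognizer.write("trainer.xml")                  # the XML model used later for prediction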


3.3.2.3 Prediction

After training, the XML file is created and then used in the next step to predict a person's identity. Once a person is recognised, their identification is displayed in the top left corner. A threshold value is used for face prediction: if the confidence for a particular face crosses this threshold, the face is declared to belong to the corresponding ID, with the name stored in the dataset, and all of this is shown in the VNC Viewer window. The code for this is shown in Figure 3.5.

Figure 3.5: Code for prediction
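
A minimal sketch of the prediction step with OpenCV's LBPH recogniser follows; the threshold value, file names and ID-to-name mapping are illustrative assumptions (note that for LBPH a lower confidence value means a closer match).

import cv2

recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.read("trainer.xml")                   # model produced by the training step
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

names = {1: "User 1", 2: "User 2"}               # hypothetical ID-to-name mapping
CONFIDENCE_LIMIT = 70                            # illustrative threshold

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        face_id, confidence = recognizer.predict(gray[y:y + h, x:x + w])
        label = names.get(face_id, "unknown") if confidence < CONFIDENCE_LIMIT else "unknown"
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, label, (x, y - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
    cv2.imshow("Recognition", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()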

3.4. Weapon Detection


For weapon detection, the difference in temperature is used to determine whether or not a person is carrying a weapon. The temperature range used here is 20 to 26 degrees Celsius, against which variations in the surroundings are checked. The resolution of the thermal camera is 8x8, giving a total of 64 pixels. To increase this to a level that is usable by the control room, bicubic interpolation is applied, raising the resolution to 1024 pixels, which is sufficient in this scenario. Some important libraries are used here to perform the required task; in particular, the Adafruit AMG88xx library drives the thermal camera.


The pygame library is imported to view the output of the thermal camera on the VNC Viewer screen. The output shows differences in temperature: the blank screen appears in purple, temperatures above the minimum of the range are shown in red, and temperatures below it are shown in blue. The SciPy library provides the interpolation that increases the number of pixels so that a clear output appears on the screen. In the first step, the temperature is set to its specific range and the thermal sensor is initialised. The next step specifies which colour is displayed on the screen for each temperature. In the last step, an infinite loop displays the output of the thermal sensor.
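
A minimal sketch of this loop is given below. It assumes the Adafruit AMG88xx CircuitPython library with an I2C connection; the zoom factor, window size and colour mapping are illustrative choices, not the project's exact code.

import time

import board
import busio
import numpy as np
import pygame
import adafruit_amg88xx
from scipy import ndimage

MIN_TEMP, MAX_TEMP = 20.0, 26.0      # ambient range in degrees Celsius
ZOOM = 4                             # 8x8 -> 32x32 (1024 pixels) via bicubic interpolation

i2c = busio.I2C(board.SCL, board.SDA)
sensor = adafruit_amg88xx.AMG88XX(i2c)

pygame.init()
screen = pygame.display.set_mode((320, 320))

def temp_to_color(t):
    # Simple illustrative colour scale: blue below range, red above, purple within.
    if t < MIN_TEMP:
        return (0, 0, 255)
    if t > MAX_TEMP:
        return (255, 0, 0)
    return (128, 0, 128)

while True:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            raise SystemExit
    frame = np.array(sensor.pixels)                  # 8x8 grid of temperatures
    smooth = ndimage.zoom(frame, ZOOM, order=3)      # bicubic interpolation
    cell = screen.get_width() // smooth.shape[0]
    for y, row in enumerate(smooth):
        for x, t in enumerate(row):
            pygame.draw.rect(screen, temp_to_color(t), (x * cell, y * cell, cell, cell))
    pygame.display.update()
    time.sleep(0.1)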

3.5. Obstacle Detection and Landmine Detection

3.5.1. Obstacle Detection


To detect obstacles, an ultrasonic sensor is used, which works on sound waves. It is interfaced with the processor through some extra resistors, because the processor accepts inputs of 1.8 to 3 V and the general-purpose input/output pins have no regulator to maintain this voltage if it fluctuates; for this reason, resistors are used here.
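
A minimal sketch of reading the distance from an HC-SR04-style ultrasonic sensor with RPi.GPIO is shown below; the BCM pin numbers are assumptions, and the echo line is assumed to be divided down to a safe voltage by the resistors described above.

import time
import RPi.GPIO as GPIO

TRIG, ECHO = 23, 24                   # hypothetical BCM pin numbers

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)

def read_distance_cm():
    # Send a 10 microsecond trigger pulse, then time the echo pulse.
    GPIO.output(TRIG, True)
    time.sleep(0.00001)
    GPIO.output(TRIG, False)

    start = end = time.time()
    while GPIO.input(ECHO) == 0:
        start = time.time()
    while GPIO.input(ECHO) == 1:
        end = time.time()

    # Sound travels at roughly 343 m/s; halve the round-trip time.
    return (end - start) * 34300 / 2

try:
    while True:
        print("Distance: {:.1f} cm".format(read_distance_cm()))
        time.sleep(0.5)
finally:
    GPIO.cleanup()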

3.5.2. Landmine Detection


Landmines are used as an additional weapon to protect particular areas of land from enemies, so there is an interest in developing a detector that can find them. A metal detector is used here: it detects metal within its range and sends a signal, in the form of sound, to the control room. A 555 timer IC together with an RLC circuit generates the signal, which is then transferred to a buzzer.

3.6. Mapping
Mapping a particular environment is valuable, but it is a difficult task to perform. For this operation, a LiDAR is used, namely the TF-Luna LiDAR, which plots distances in two dimensions in real time. This LiDAR is similar to the ultrasonic sensor in that it is based on the time-of-flight principle. The main libraries required are matplotlib, to plot the map, and serial, to receive the serial data. The LiDAR is interfaced with the Raspberry Pi through serial port 1, which must first be enabled before the Pi can receive and send signals through this port. Several functions are used here: the first checks the temperature and signal strength, the second declares the sample rate used to obtain the serial data, the third reads the version of the device, the fourth sets the baud rate, the fifth plots the map, and the last one updates the graph.
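
A minimal sketch of reading TF-Luna frames over the serial port and plotting distance and signal strength is given below; the port name, baud rate and plot layout are assumptions based on the module's usual defaults.

import serial
import matplotlib.pyplot as plt

ser = serial.Serial("/dev/serial0", 115200, timeout=1)   # assumed port and baud rate

def read_frame():
    # Each TF-Luna frame is 9 bytes: 0x59 0x59, distance, strength, temperature, checksum.
    while True:
        if ser.read(1) == b"\x59" and ser.read(1) == b"\x59":
            payload = ser.read(7)
            if len(payload) != 7:
                continue
            dist = payload[0] + payload[1] * 256             # distance in cm
            strength = payload[2] + payload[3] * 256         # signal strength
            temp = (payload[4] + payload[5] * 256) / 8.0 - 256.0
            return dist, strength, temp

plt.ion()
fig, (ax_dist, ax_strength) = plt.subplots(1, 2)
distances = []

for _ in range(200):
    d, s, _ = read_frame()
    distances.append(d)
    ax_dist.clear()
    ax_dist.plot(distances)
    ax_dist.set_ylabel("distance (cm)")
    ax_strength.clear()
    ax_strength.bar([0], [s])
    ax_strength.set_ylabel("signal strength")
    plt.pause(0.01)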

3.7. Block Diagram

Figure 3.6: Block Diagram


3.8. Location Identification


In this project, a satellite-based navigation system is used to obtain a person's location, which is then sent to the control room at the same time. This is done by interfacing the GPS module with the processor, which communicates with it through a serial port to obtain the location of an area. The GPS module is interfaced with the serial 1 port, on the fourth and fifth pins of the Raspberry Pi. Notably, only the Tx pin of the GPS module is used, because data is only required from the GPS; the Rx pin of the GPS module is therefore left unconnected.
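
A minimal sketch of reading the position with pynmea2 is given below; the serial device name and baud rate are common defaults assumed here, not values taken from the project code.

import serial
import pynmea2

ser = serial.Serial("/dev/serial0", 9600, timeout=1)     # assumed port and baud rate

while True:
    line = ser.readline().decode("ascii", errors="replace").strip()
    # GGA and RMC sentences both carry latitude and longitude.
    if line.startswith("$GPGGA") or line.startswith("$GPRMC"):
        try:
            msg = pynmea2.parse(line)
        except pynmea2.ParseError:
            continue
        if msg.latitude and msg.longitude:
            print("Latitude: {:.6f}, Longitude: {:.6f}".format(msg.latitude, msg.longitude))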

3.9. Flow Chart

Figure 3.7: Flow Chart

4. Hardware Implementation and Simulation

The project includes an acrylic frame on which the main components are placed: the AMG8833 thermal camera, a Raspberry Pi 4 Model B, a Pi camera, a GPS module and DC motors for the movement of the robot. A 5 V, 3 A power supply powers the Raspberry Pi, whose GPIO pins control the motors and all other components. A LIDAR is used to map the environment, together with an ultrasonic sensor that serves as the obstacle sensor.

4.1. Hardware Assembling


CorelDRAW was used to design the frame of this project, and all components are assembled on its upper side. Four high-torque DC motors are used; the high torque is needed for stair climbing.

4.1.1. Complete Hardware Setup

The frame used in this robot is made from an acrylic sheet, and connecting wires carry the signals between components. Some components are placed on the upper frame and some on the lower frame of the robot. The Pi camera is connected to the Raspberry Pi port near the Ethernet port, with its silver strip facing away from the Ethernet port. The function of this camera is live streaming with the help of the Raspberry Pi, and its output is shown in the VLC media player, which is built into the Raspberry Pi operating system. The thermal camera is connected to the Raspberry Pi through its serial port, on pins 2 and 3.


The function of the thermal camera is to do live streaming of any

Figure 4.1: Robot Frame

particular area in the form of an infrared image. This helps to detect weapons through temperature variation, and the output of this camera is an 8x8 window. To obtain the location of an unknown person, GPS serves as the transfer medium. The satellite-based navigation system is connected to the processor and sends the GPS data through the Rx pin of the Raspberry Pi. To measure how far an obstacle is from the robot, an ultrasonic sensor is interfaced with the device; its transmitter pin sends data to the processor, and the result is displayed on the VNC Viewer. The robot body is shown above in Figure 4.1.

4.1.2. Robot Livestreams


This robot performs live streaming solely through the Pi camera, and its output is shown on the VNC Viewer screen.

4.1.3. Major Assignments


The main role of this project is to draw a map, that is, to produce a graph in which the distance of any place is plotted. The robot recognises people, meaning it performs face recognition, and it performs weapon detection through the thermal camera. It detects landmines through the metal detector and sends a signal to the Raspberry Pi, based on which the control room can take further action.


4.2. Software Simulation


For the software, several libraries are used: the OpenCV package, os, and NumPy, which is downloaded along with the OpenCV package. All of this is done with sudo commands, which install the required packages on the Linux operating system. The required operations are written in the Thonny Python IDE, which comes with the Raspbian image installed on the SD card. For thermal imaging, the pygame and SciPy packages play their part. To display the data of the GPS module, a library named pynmea2 was downloaded from the Raspbian terminal.

4.2.1. Function of Raspberry Pi


The Raspberry Pi is the brain of this project and controls all of the main components. The main functions performed by this robot are given in Figure 4.2. To get a live stream through the Pi

Figure 4.2: Main functions performed by Raspberry Pi

camera, the port specifically designed for it beside the Ethernet port is used. The Raspberry Pi uses the built-in VLC media player to receive the stream by addressing its hostname, which was declared at the time of flashing the operating system onto the Raspberry Pi. To display the live stream, the frame rate, height and width of the window that opens during streaming must be set. To detect weapons, a pygame window opens in the VNC Viewer, which acts as its desktop screen. To map a particular area, the processor receives signals through its serial port and plots them using matplotlib.

4.2.2. Function of Picamera


The Pi camera is similar to other cameras used to capture images and stream video, but the major difference is that it is specifically designed for this device; the result is given in Figure 4.3. The function of this camera in

Figure 4.3: Face recognition

this project is to stream real-time video, all of which can be viewed in the VNC Viewer.

4.2.3. Function of Thermal Camera


To detect weapons, the AMG8833 thermal camera is used, interfaced on the UART side of the Raspberry Pi. The difference between body temperature and the atmosphere appears as a colour spectrum, and the spatial temperature difference helps to detect weapons. The best representation of this is shown in Figure 4.4, where the temperature variation in the middle of the figure is shown by colours blurred relative to its surroundings, which helps to detect weapons.


Figure 4.4: Thermal camera image

4.2.4. Function of GPS Module

To send the location of any place on the planet, the key component is the GPS module: a NEO-6M GPS module that, through serial communication, reports the position of a person in terms of latitude and longitude.

4.2.5. Function of Metal and Obstacle Detector

As is well known, landmines are so dangerous that they can harm anyone who comes into contact with them. A metal detector is used here solely to detect landmines; its main component is a 555 timer IC together with an RLC circuit that acts as a magnetic sensing coil when metal comes near it. The output for this objective is shown below in Figure 4.5. To detect obstacles, an ultrasonic sensor is used, which reports the distance to any obstacle using sound waves.

4.2.6. Function of Lidar

Mapping of a particular environment is done with the LIDAR (Light Detection and Ranging), which works on light emitted from the device and calculates the distance according to how long the light takes to return to the source. The TF-Luna LiDAR, which works in 2-D space, is used here. It is interfaced with the Raspberry Pi through the serial port, and it sends and receives signals because


Figure 4.5: Result of Ultrasonic Sensor

its working procedure is the same as that of the ultrasonic sensor. The output of this LIDAR is given below in Figure 4.6.

Figure 4.6: Plot from TF Luna-Lidar

5. Results and Discussions

5.1. Results
This project was completed in four steps, as mentioned above in Chapter 4, and further details of the results obtained in the different phases are shown below.

5.1.1. Facial Recognition Results


In this section, the results of face recognition are shown, allowing one to judge how well the algorithm that has been built performs, as seen in Figure 4.3.

5.1.2. Thermal Imaging


The amg88xx library comes in handy for detecting concealed weapons because it provides a great deal of detail in the realm of thermal imaging. To elaborate, the resolution of the thermal stream has been enhanced using bicubic interpolation, as illustrated in Figure 4.4. As can be seen in the middle of Figure 4.4, there is a temperature variation where we placed a device, which lets us identify whether or not there is something behind that section of the body.

5.1.3. Distance Plotting


Mapping an environment was accomplished using various Python libraries, with the matplotlib and serial libraries playing important roles. Figure 4.6 depicts the end result. Two plots are displayed: in the left one the distance is shown, and in the right one the intensity of an obstruction is exhibited by a signal-strength bar. The graph, with large signal excursions from its constant baseline, can be seen on the left side of the plot.

5.1.4. Metal Detection and Obstacle Detection


Ultrasonic sensors are a great means of recognising barriers, particularly those close to the ground, although their efficiency can vary depending on a variety of criteria. The output of the ultrasonic sensor is shown in centimetres in Figure 4.5, along with the metal detector circuit in the same figure.

5.2. Advantages
Robotic systems and unmanned vehicles outfitted with cameras and sensors are
used in military operations to acquire intelligence, perform reconnaissance, and
give situational awareness. The following are some of the potential benefits of conflict-zone spying robots in military operations:

5.2.1. Advantages to military


5.2.1.1 Increased Safety

Conflict-zone surveillance devices can collect information in risky or high-density areas, lowering the possibility of harm to service personnel. These IoT devices can work in regions that humans would find too unsafe or impossible to reach.

5.2.1.2 Enhanced Intelligence

IoT devices can collect and communicate real surveillance data to personnel, allowing them to get a better grasp of the situation on the ground. This can result in better decisions and capabilities.

5.2.1.3 Versatility

Depending on the needs of the military, conflict zone surveillance robots can be
created and outfitted for a number of duties. They may be employed for observation
as well as logistic transport to forces.


5.2.1.4 Reduced Costs

The use of robots in armed conflicts may be less expensive than deploying people. Smart devices do not require food, drink, or healthcare, and they may work for long periods of time without stopping.

5.2.1.5 Increased Effectiveness

War field surveillance robots can be outfitted with powerful sensors and imaging devices, allowing them to identify data that people might miss. This can aid in more effectively identifying prospective threats and targets.

5.2.2. Advantages to society


5.2.2.1 Opportunities for students

War field spying robots can be used for educational and research purposes, providing students and researchers with the opportunity to study robotics, engineering, and other fields.

5.2.2.2 Search and Rescue

Robots may be employed in emergency situations to help find missing people or to gain access to areas that are too risky for human rescuers to enter. People can be connected with devices, smart meters, and other technology to help with search and rescue.

5.2.2.3 Reduction in Threats

War field surveillance robots can be outfitted with powerful sensors and
imaging devices, allowing them to identify data that human workers might miss.
This can aid in more effectively identifying prospective threats and targets.

5.2.3. Benefits to the Environment


The following are a few potential environmental benefits of combat field surveillance robots:


5.2.3.1 Decreased Anthropogenic Environmental Impact

Human soldiers may be able to avoid sensitive environmental areas and lessen their
environmental impact by employing robots to acquire intelligence in conflict zones.

5.2.3.2 Decreased Dependence on Conventional Cars

There may be less need for massive, fuel-intensive military vehicles that can harm the environment since war field surveillance robots may be able to navigate challenging terrain and operate in places that are inaccessible to conventional military vehicles.

5.2.3.3 Enhanced Military Actions With a Focus

War field spying robots may be able to assist military leaders in planning more
targeted operations that reduce collateral damage and decrease the environmental
impact of military operations by giving more accurate and detailed intelligence
about enemy positions and movements.

5.2.4. Ethical Benefits

The following are a few potential ethical benefits of combat field spying robots:

5.2.4.1 Decreased Danger to Human Soldiers

Human soldiers may be able to avoid risky circumstances and lower their risk of
harm or death by utilising robots to gather intelligence in combat zones.

5.2.4.2 A Higher Level of Accuracy and Precision

Robotic war field spies may be able to gather information about enemy
positions and movements with more accuracy and specificity, potentially lowering
the danger of civilian casualties and other unintended effects of military
operations.


5.2.4.3 More Accountability and Transparency

The likelihood of human rights violations and other unethical behaviour could
be reduced if war field surveillance robots are able to collect and communicate
data about military operations. This information could be utilised to improve
openness and accountability in military operations.

5.2.4.4 Possibility of Improved Effectiveness

Robotic war field detectives might be able to gather intelligence more rapidly and effectively than human soldiers, which could reduce the scope and duration of military operations and reduce the danger of harm to civilians and non-combatants.

5.2.5. Secure communication and Cities


Information is transferred between two or more parties in a secure manner to prevent illegal access, interception, or tampering. This is done by encrypting messages, which turns them from plaintext into ciphertext that only authorised parties with the right key can decipher.
Unmanned ground vehicles (UGVs) used for surveillance and other hostile conditions include war field espionage robots. They can gather real-time information on enemy positions, movements, and activities since they are outfitted with cameras, sensors, and other cutting-edge technologies. Robots used in battlefield cities are espionage machines that can work in crowded urban areas.

5.3. Beneficiaries
Military and intelligence organisations that use them to acquire crucial information in combat zones and other difficult conditions are the main beneficiaries of war field spying robots. War field spying robots can help various parties in the following specific ways:

5.3.1. Soldiers in Command


Military commanders are better able to plan and carry out operations when they have access to real-time intelligence on the positions, movements, and activities of the enemy thanks to war field spying robots. This can help prevent incidents of friendly fire and save lives.

5.3.2. Spy Agencies


Spy agencies could use conflict-site spying robots to collect data on terrorist organisations, criminal gangs, and other non-state actors who present a danger to national security. This data may be used to disrupt their operations and prevent potential threats.

5.3.3. Soldiers
Soldiers on the ground can benefit from war field spying robots by providing
situational awareness and early warning of potential threats. This can assist
them in making better decisions and remaining safe during combat operations.

5.3.4. Civilians
In some cases, war field spying robots can be used to monitor areas affected by
natural disasters or humanitarian crises, allowing aid organisations to better assess
the situation and deliver aid.

5.3.5. Security Companies


Security firms can benefit from war field spying robots in a variety of ways. Among
these benefits are the following:

1. Increased security personnel safety in dangerous or high-risk situations.

2. Real-time threat intelligence allows for rapid response to security breaches or incidents.

3. Cost-effectiveness when compared to using human security personnel.

4. Advanced sensors and technology for accurate and detailed data collection.

5. Improved efficiency through round-the-clock surveillance and monitoring.

6. Flexibility in performing various tasks, such as surveillance and monitoring.


5.3.6. Justice
5.3.6.1 Transparency and Accountability

There are explicit norms and regulations governing the use of spying robots, and
their use is open to the public. In order to guarantee that the employment of
surveillance robots does not infringe upon human rights or endanger civilians,
accountability systems have also been put in place.

5.3.6.2 Sensitivity to Human Dignity

Principles of respect for human dignity, such as the principle of proportionality, which states that the harm brought on by the employment of spying robots should not be disproportionate to the military goal pursued, have served as guidelines for their usage.

5.3.6.3 Defending Against Danger

The risk to both locals and visitors may be reduced with the use of surveillance robots.

5.3.6.4 Observation and Evaluating

The use of surveillance machines has contributed to effective and moral monitoring.
This means assessing how spy robots will impact security operations, innocent bystanders, and individual rights, and then taking appropriate remedial action as needed.

6. Conclusion

6.1. Conclusion

In conclusion, the use of spy robots in conflict has evolved into a key component of contemporary warfare. These robots have proven to be quite successful at gathering crucial intelligence, lowering the possibility of human losses, and giving security troops a tactical advantage. Yet the deployment of surveillance robots also brings up a number of moral issues with regard to their appropriate application. The ethical implications of spying-robot development and deployment must be carefully considered. To ensure their appropriate usage and reduce unintended consequences, it is crucial to create explicit standards and protocols. These robots can be made more effective while posing fewer risks by investing in research and development to increase their capabilities, integrating them with other military technologies, and giving military personnel the necessary training. Here are a few suggestions for the future that might be taken into account to improve the capabilities of spying robots and their successful application in the field of violent conflict:

6.2. Innovations

To create surveillance robots that are more compact and faster, able to operate in confined locations and challenging environments, researchers can investigate miniaturisation techniques. This may improve their capacity to gather information in intricate urban settings.


6.2.1. Artificial Intelligence (AI)


The addition of AI to spy robots can improve their autonomy in decision-making,
allowing them to operate with less oversight from humans and adapt to shifting
ground situations.

6.2.2. Sensing Architecture


The ability of spying robots to detect and identify targets and acquire crucial
knowledge on potential threats can be improved by investing in advanced
sensor technology, such as chemical and biological sensors.

6.2.3. Communications System


In order to provide a more complete image of the battlefield, spy robot communication systems can be improved to permit seamless integration with other tools, such as drones and satellites.
