
IOT BASED ALIVE HUMAN DETECTION IN WAR FIELD AND

CALAMITY AREA USING RASPBERRY PI

ABSTRACT:

The IoT-based alive human detection system for war zones and calamity areas
applies deep learning technology to the search for survivors. Leveraging
cameras for visual data capture and GPS for precise location tracking, the
system employs IoT communication to enable seamless connectivity and remote
monitoring, with the Raspberry Pi as the central processing unit. The cameras
record visual information, the GPS gives location context, and the IoT module
facilitates real-time communication for remote monitoring and control. This
multifaceted approach not only enhances the accuracy and reliability of human
detection but also enables real-time monitoring and response coordination from
remote locations. A robotic module, driven by DC motors and powered by a 12V
battery, offers mobility in rugged terrain, enhancing the system's adaptability
and coverage. This system promises to improve the efficiency and effectiveness
of rescue operations, ultimately saving lives and mitigating the impact of
conflicts.
INTRODUCTION:

In war zones and calamity areas, the swift and accurate detection of living
individuals is paramount for effective rescue and relief operations. Traditional
methods often prove inadequate in these high-stakes environments, prompting the
need for innovative solutions. This introduction sets the stage for the exploration of
an IoT-based alive human detection system, which leverages a combination of
cutting-edge technologies including cameras, GPS, IoT communication, robotic
mobility, and infrared sensors. By seamlessly integrating these components, the
proposed system aims to revolutionize the way human presence is identified in
challenging circumstances, ultimately enhancing the efficiency, accuracy, and
safety of rescue efforts. Through this exploration, we seek to uncover the potential
of IoT technology in mitigating the impact of disasters and conflicts, and
ultimately, in saving lives.
BLOCK DIAGRAM:

[Block diagram: a camera, IR sensor, and GPS module feed the Raspberry Pi; the
Raspberry Pi boots from an SD card and communicates with a cloud server over
Wi-Fi; it drives the motor driver and DC motors of the robot model, which is
powered by a 12V battery.]


RECEIVER SIDE:

[Block diagram: the cloud server delivers the data to a PC on the receiver
side.]
SOFTWARE AND HARDWARE REQUIREMENTS:

HARDWARE REQUIREMENTS

 RASPBERRY PI
 ROBOT MODEL
 MOTOR DRIVER
 DC MOTOR
 IR SENSOR
 GPS
 WHEEL
 12V BATTERY
 SD CARD
 CAMERA

SOFTWARE TOOLS:
 PYTHON
 THONNY IDE
 PHP, MYSQL
EXISTING SYSTEM:

The current methods for alive human detection in war zones and calamity areas
often rely on manual reconnaissance and limited sensor deployments, leading to
inefficiencies and potential risks. Traditional systems may lack integration and
real-time capabilities, hampering timely response efforts. Moreover, these methods
are often constrained by environmental factors and may not provide
comprehensive coverage or accurate data. As a result, there is a pressing need for
more advanced and integrated systems that leverage IoT technologies to enhance
the accuracy, efficiency, and safety of human detection in critical situations. By
incorporating cameras, GPS, IoT communication, robotic mobility, and infrared sensors, the
proposed system aims to address these shortcomings and revolutionize the way
alive human detection is conducted in challenging environments.
EXISTING SYSTEM BLOCK DIAGRAM:

[Block diagram: an IR sensor and GPS module feed a microcontroller; the
microcontroller uses an SD card and communicates with a cloud server over
Wi-Fi; it drives the motor driver and DC motors of the robot model, which is
powered by a 12V battery.]

EXISTING SYSTEM DISADVANTAGE:
 Manual Reconnaissance: Current methods often rely on manual
reconnaissance, which can be time-consuming and inefficient in critical
situations where swift action is required [T6].
 Limited Sensor Deployments: Traditional systems may have limited sensor
deployments, leading to gaps in coverage and potentially missing individuals
in need of rescue [T6].
 Lack of Integration: Some existing systems lack integration of various
technologies, resulting in disjointed operations and hindering real-time
response capabilities [T6].
 Environmental Constraints: The current methods may be constrained by
environmental factors, limiting their effectiveness in challenging terrains or
adverse conditions [T6].
 Inefficiencies: Due to the limitations mentioned above, the current systems
may suffer from inefficiencies in terms of accuracy, timeliness, and overall
effectiveness in detecting living individuals in critical situations [T6].
PROPOSED SYSTEM:

The proposed IoT-based alive human detection system for deployment in war
fields and calamity areas represents a paradigm shift in rescue and relief
operations. By integrating state-of-the-art technologies such as cameras, GPS, IoT
communication, robotic mobility, and infrared sensors, the system aims to provide
a comprehensive solution for rapidly identifying living individuals in high-risk
environments. The Raspberry Pi serves as the central hub for data processing and
decision-making, facilitating real-time monitoring and response coordination.
Robotic modules, driven by DC motors and powered by 12V batteries, offer
enhanced mobility and coverage, enabling the system to navigate rugged terrains
and inaccessible areas with ease. Complemented by infrared sensors for heat
signature detection and SD card storage for data backup, the proposed system
promises to significantly improve the efficiency, accuracy, and safety of rescue
operations, ultimately saving lives and mitigating the impact of disasters and
conflicts.
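
To make the image processing stage concrete, the following is a minimal Python
sketch of person detection on camera frames from the Raspberry Pi. It uses
OpenCV's classical HOG pedestrian detector as a lightweight stand-in for the
deep learning model mentioned in the abstract; the camera index and frame size
are assumptions.

import cv2

# Classical HOG + linear-SVM people detector bundled with OpenCV,
# used here as a lightweight stand-in for a deep learning model.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture(0)  # assumption: USB camera at index 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.resize(frame, (640, 480))  # smaller frames keep the Pi responsive
    # Returns bounding boxes and confidence weights for detected people
    boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8))
    for (x, y, w, h) in boxes:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("alive human detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()

On a headless robot the display calls would be dropped, and a detection would
instead trigger the GPS read and cloud upload described below.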

EXISTING SYSTEM IMPLEMENT METHOD:


 Sensor Integration: Existing systems often integrate various sensors, such
as infrared sensors, to detect heat signatures emitted by living individuals.
These sensors play a crucial role in identifying the presence of humans in
challenging environments.
 Data Processing: The data collected from sensors, cameras, and GPS
devices is processed on single-board computers such as the Raspberry Pi. This
processing involves analyzing the sensor inputs, identifying human
presence, and making decisions based on the data received.
 Communication: IoT communication is utilized to enable seamless
connectivity and remote monitoring in these systems (a minimal upload sketch
follows this list). This allows real-time transmission of data and alerts to
rescue teams, enabling swift response in emergency situations.
 Robotic Mobility: Robotic modules, often driven by DC motors and
powered by batteries, are employed to enhance mobility and coverage in
rugged terrain. These robots can navigate difficult terrain and inaccessible
areas to locate and assist individuals in distress.
 Real-time Monitoring: The systems are designed to provide real-time
monitoring and response coordination from remote locations. This ensures
that rescue teams have up-to-date information on the situation and can make
informed decisions quickly.
 Optimization: Continuous optimization of the system through rigorous
design, testing, and integration of technologies is essential to improve the
effectiveness, precision, and safety of rescue operations. This includes
refining algorithms, enhancing sensor capabilities, and improving
communication protocols.
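
The communication step can be as simple as an HTTP POST from the Raspberry Pi
to the PHP/MySQL back end listed under the software tools. The sketch below is
only an illustration: the endpoint URL and field names are hypothetical
placeholders, not part of the documented design.

import requests  # pip install requests

SERVER_URL = "http://example.com/upload.php"  # hypothetical PHP endpoint

def report_detection(lat, lon, alive_detected):
    # Field names here are placeholders for whatever the PHP script expects
    payload = {
        "latitude": lat,
        "longitude": lon,
        "alive": int(alive_detected),
    }
    try:
        # Short timeout so the robot's control loop is not blocked for long
        response = requests.post(SERVER_URL, data=payload, timeout=5)
        response.raise_for_status()
    except requests.RequestException as err:
        print("upload failed:", err)

report_detection(12.9716, 77.5946, True)  # example call with sample coordinates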
OBJECTIVES:

The objectives of implementing the proposed IoT-based alive human detection


system in war zones and calamity areas are multi-faceted. Firstly, the system aims
to enhance the efficiency and effectiveness of rescue operations by providing real-
time and accurate information on the presence of living individuals in critical
situations. Secondly, it seeks to improve the safety of rescue personnel by
minimizing their exposure to hazardous environments through the deployment of
robotic modules and remote monitoring capabilities. Thirdly, the system aims to
optimize resource allocation by enabling targeted and prioritized responses based
on the detected human presence. Additionally, it aims to enhance situational
awareness and coordination among rescue teams by facilitating seamless
communication and data sharing. Overall, the objectives revolve around saving
lives, minimizing risks, and maximizing the impact of rescue efforts in high-stakes
environments.

IMPLEMENTATION FLOW CHART:

[Flow chart: start and boot the OS from the SD card → collect sensor data and
camera frames → the Raspberry Pi performs image processing → results go to
cloud storage and to the controlling system of the robot → on "click stop",
terminate.]
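
Read as code, the flow chart is a single acquire-process-report loop. The
sketch below shows that loop shape; every helper here is a hypothetical stub
standing in for the routines described in the hardware sections that follow.

import time

# --- hypothetical stubs for the routines covered elsewhere in this report ---
def read_ir_sensor():                 return False       # heat/motion cue
def capture_camera_frame():           return None        # one camera frame
def detect_humans(frame):             return False       # image processing on the Pi
def read_gps():                       return (0.0, 0.0)  # latitude, longitude
def upload_to_cloud(lat, lon, found): pass               # Wi-Fi link to cloud server
def drive_robot():                    pass               # motor commands

def main_loop(iterations=100):        # bounded here so the sketch terminates
    for _ in range(iterations):
        ir_hit = read_ir_sensor()
        frame = capture_camera_frame()
        found = detect_humans(frame)
        if ir_hit or found:
            lat, lon = read_gps()
            upload_to_cloud(lat, lon, found)
        drive_robot()
        time.sleep(0.1)               # pace the loop

main_loop()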
LITERATURE SURVEY

TITLE: Alive Human Being Detector in War Field


AUTHOR : Pooja S N, Deepika J
DESCRIPTION:

In this modern era, various sensors are used in different fields. In this
project, IR sensors are used to identify living human bodies in the war field.
Among the numerous IR sensors in use today, the sensor employed here detects
the infrared rays emitted from the human body. A Pi camera is also required,
since it is used for detecting our soldiers. When a living human is recognized
within the range of the IR sensor, a signal is sent to the Raspberry Pi. The
code contains the registered mobile number of the rescue team, so the rescue
operation can quickly identify the people who are alive.
TITLE: A Review of Alive Human Detection During Calamity and
Landmine/Bomb/Human Detection in War Fields
AUTHOR : Ujjwal Mishra, Sankalp Mishra, Ananya Shukla
DESCRIPTION:

Many people lose their lives when a natural disaster such as an earthquake,
tsunami, or windstorm occurs. Similarly, in war fields soldiers lose their
lives when they accidentally step on landmines. This paper focuses on
investigating, analyzing, and exploring previous research papers on the alive
human detection robot. The modifications made in the past are compared
extensively in this paper. A quantitative analysis is given of the various
technologies used in the past, such as microcontrollers (Arduino UNO, Raspberry
Pi), sensors (PIR sensors for detecting motion in the body, IR sensors), and
other wireless technologies for data and instruction transmission (Bluetooth).
Further, methods have been devised to make the robot more effective.
TITLE: Alive Human Detection and Health Monitoring Using IOT
Based Robot

AUTHOR: Dr. Y. Amar Babu, A. Reshma, P.V.N.S. Tejeswar, R. Vinay Kumar,
M. D. Sri Vastav

DESCRIPTION:
Every year, the number of deaths due to wars and natural and man-made
disasters increases. Many people get trapped, and it is very time-consuming
for the emergency rescue operators to arrive and begin the rescue operation;
during this delay, people may lose their precious lives. To solve this
problem, a ground robot first moves through the open field in the affected
area to find affected human beings. An ultrasonic sensor guides the robot in
case there are obstacles in the path. A low-cost camera continuously sends
live video. A PIR sensor is used to find motion in human beings. If motion is
detected, the robot asks the victim to place a finger on the sensors so that
their health parameters can be monitored. GPS sends the location of the
corresponding victim. This information is sent through a Wi-Fi transceiver to
the ThingSpeak app on a smartphone through IoT.
TITLE: Search and Rescue System for Alive Human Detection by
Semi-autonomous Mobile Rescue Robot

AUTHOR: Mojaharul Islam, Zia Uddin

DESCRIPTION:
In this modern era, technological development has led to the creation of
skyscrapers and dwellings, which increases the risk of losing lives in natural
and man-made disasters. Many people have died trapped under debris because
their presence could not be detected by the rescue team. Sometimes it is
impossible to reach certain points in such calamity-hit zones. The situation
is worse for a developing country like Bangladesh because of low-quality
design and construction. In this paper, a PIR-sensor-based semi-autonomous
mobile rescue robot is developed that can detect living human beings at
otherwise unreachable points of a disaster area. A joystick and RF technology
are used to control the semi-autonomous robot and to communicate with the
control point. An ultrasonic sensor is used for obstacle detection along the
robot's navigation path, and a gas sensor is used to detect gas leaks inside
buildings. An IP camera is also integrated to observe and analyze conditions,
facilitating reliable human detection with the highest probability of success
in such situations.
TITLE: Live human detection robot

AUTHOR: Hemantha V, Karthik B P, Krishnaprasad, Sindhu G, Sudarshan
(Dept. of ECE, K S Institute of Technology, Bangalore, Karnataka, India)

DESCRIPTION:
During wartime it is difficult to check whether a soldier is dead or alive at
a crucial moment. Likewise, when a natural disaster happens, it is crucial to
put the plan to save people needing help into action. The live human detection
robot helps at those crucial times: it enters places that humans or bulky
machines cannot enter and identifies whether a person is dead or alive.

HARDWARE DESCRIPTION

HARDWARE DESCRIPTION:
RASPBERRY PI 4

DESCRIPTION
The Raspberry Pi 4 Model B is the fourth-generation Raspberry Pi. This
powerful credit-card-sized single-board computer can be used for many
applications and supersedes the Raspberry Pi 3 Model B and Model B+.

While maintaining the popular board format, the Raspberry Pi 4 Model B brings
a more powerful processor, many times faster than the first-generation
Raspberry Pi. It also adds dual-band wireless LAN and Bluetooth 5.0
connectivity, making it an ideal solution for powerful connected designs.

Raspberry Pi 4 Model B

The Raspberry Pi 4 is the fourth-generation Raspberry Pi. It replaced the
Raspberry Pi 3 Model B+ in June 2019. The Pi 4 keeps a similar form factor to
the previous Pi 3 (and the Pi 1 Model B+), with the same 40-pin header and
mounting holes, so it retains broad compatibility with earlier add-on
hardware, although video output moves to two micro-HDMI ports and power is
supplied over USB-C. Its VideoCore VI multimedia co-processor provides
OpenGL ES 3.x and hardware-accelerated 4Kp60 H.265 (HEVC) decode.
FEATURES

 A 1.5 GHz 64-bit quad-core ARM Cortex-A72 (ARMv8) CPU
 IEEE 802.11ac dual-band wireless LAN
 Bluetooth 5.0
 Bluetooth Low Energy (BLE)
 4GB RAM (1GB, 2GB, and 8GB variants also available)
 40 GPIO pins
 Gigabit Ethernet port

APPLICATIONS

 Server/cloud server
 Security monitoring
 Environmental sensing/monitoring (e.g. weather station)
 IOT applications
 Robotics
 Wireless access point
 Print server
BROADCOM BCM 2711 PROCESSOR

DESCRIPTION

The Broadcom BCM2711 SoC (system on a chip) used in the Raspberry Pi 4 is a
major upgrade over the BCM2835 used in the first-generation boards. That
original chip was somewhat equivalent to the ones used in first-generation
smartphones (its CPU is an older ARMv6 architecture) and included a 700 MHz
ARM11 core, a VideoCore IV graphics processing unit (GPU), and RAM; the
BCM2711 instead provides a 64-bit quad-core ARM Cortex-A72 CPU and a VideoCore
VI GPU. The Raspberry Pi is a series of small single-board computers developed
in the United Kingdom by the Raspberry Pi Foundation to promote the teaching
of basic computer science in schools and in developing countries. The original
model became far more popular than anticipated, selling outside its target
market for uses such as robotics. Peripherals (including keyboards, mice, and
cases) are not included with the Raspberry Pi, although some accessories have
been included in several official and unofficial bundles.

BCM2711 SoC (system on a chip)


By comparison, the Broadcom BCM2835 SoC used in the first-generation Raspberry
Pi is somewhat equivalent to the chip used in first-generation modern
smartphones (its CPU is an older ARMv6 architecture) and includes a 700 MHz
ARM1176JZF-S processor, a VideoCore IV graphics processing unit (GPU), and
RAM. It has a level 1 (L1) cache of 16 KB and a level 2 (L2) cache of 128 KB.
The level 2 cache is used primarily by the GPU. The SoC is stacked underneath
the RAM chip, so only its edge is visible.

On the older beta Model B boards, 128 MB was allocated by default to the GPU,
leaving 128 MB for the CPU. On the first 256 MB release Model B (and
Model A), three different splits were possible. The default split was 192 MB
(RAM for CPU), which should be sufficient for standalone 1080p video decoding,
or for simple 3D, but probably not for both together. 224 MB was for Linux only,
with only a 1080p frame buffer, and was likely to fail for any video or 3D. 128 MB
was for heavy 3D, possibly also with video decoding (e.g. XBMC). Comparatively
the Nokia 701 uses 128 MB for the Broadcom Video Core IV. For the new
Model B with 512 MB RAM, initially there were new standard memory split files
released (arm256_start.elf, arm384_start.elf, arm496_start.elf) for 256 MB,
384 MB and 496 MB CPU RAM (and 256 MB, 128 MB and 16 MB video RAM).
But a week or so later the RPF released a new version of start.elf that could read a
new entry in config.txt (gpu_mem=xx) and could dynamically assign an amount of
RAM (from 16 to 256 MB in 8 MB steps) to the GPU, so the older method of
memory splits became obsolete, and a single start.elf worked the same for 256 and
512 MB Raspberry Pi.

The Raspberry Pi 2 and the Raspberry Pi 3 have 1 GB of RAM. The Raspberry


Pi Zero and Zero W have 512 MB of RAM.
GPIO PINS

DESCRIPTION

General-purpose input/output (GPIO) is a generic pin on an integrated circuit
or computer board whose behavior, including whether it is an input or output
pin, is controllable by the user at run time. GPIO pins have no predefined
purpose and go unused by default.

GPIO PINS
The Raspberry Pi 4 features the same 40-pin general-purpose input/output
(GPIO) header as every Pi going back to the Model B+ and Model A+. Any
existing GPIO hardware will work without modification; the only change is a
switch to which UART is exposed on the GPIO pins, but that is handled
internally by the operating system.

Serial Peripheral Interface (SPI) is an interface bus commonly used to send


data between microcontrollers and small peripherals such as shift registers,
sensors, and SD cards. It uses separate clock and data lines, along with a select line
to choose the device you wish to talk to.

The I2C bus was designed by Philips in the early '80s to allow easy
communication between components which reside on the same circuit board.
Philips Semiconductors migrated to NXP in 2006. The name I2C translates into
“Inter IC”. Sometimes the bus is called IIC or I²C bus. I2C is a serial protocol for
two-wire interface to connect low-speed devices like microcontrollers, EEPROMs,
A/D and D/A converters, I/O interfaces and other similar peripherals in embedded
systems. It was invented by Philips and now it is used by almost all major IC
manufacturers.

Selecting between I2C and SPI, the two main serial communication protocols,
requires a good understanding of the advantages and limitations of I2C, SPI,
and your application. Each communication protocol has distinct advantages
which tend to distinguish it as applied to your application. The key
distinctions between I2C and SPI are listed below; a short Python sketch of
both buses on the Raspberry Pi follows the list.
 I2C requires only two wires, while SPI requires three or four
 SPI supports higher speed full-duplex communication while I2C is slower
 I2C draws more power than SPI
 I2C supports multiple devices on the same bus without additional select
signal lines through in-communication device addressing while SPI requires
additional signal lines to manage multiple devices on the same bus
 I2C ensures that data sent is received by the slave device while SPI does not
verify that data is received correctly
 I2C can be locked up by one device that fails to release the communication
bus
 SPI cannot transmit off the PCB while I2C can, albeit at low data
transmission speeds
 I2C is cheaper to implement than the SPI communication protocol
 SPI only supports one master device on the bus while I2C supports multiple
master devices
 I2C is less susceptible to noise than SPI

 SPI can only travel short distances and rarely off of the PCB while I2C can
transmit data over much greater distances, although at low data rates
 The lack of a formal standard has resulted in several variations of the SPI
protocol, variations which have been largely avoided with the I2C protocol
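
On the Raspberry Pi both buses are exposed on the 40-pin header and can be
driven from Python. The sketch below is a minimal illustration under
assumptions: a hypothetical I2C device at address 0x48 and an SPI device on
bus 0, chip-select 0; the smbus2 and spidev packages are common choices rather
than a requirement of this design.

from smbus2 import SMBus  # pip install smbus2
import spidev             # pip install spidev

# --- I2C: two wires (SDA, SCL); devices are selected by address ---
I2C_ADDR = 0x48           # hypothetical device address
with SMBus(1) as bus:     # bus 1 is the default I2C bus on the Pi header
    value = bus.read_byte_data(I2C_ADDR, 0x00)  # read register 0x00
    print("I2C register 0x00 =", value)

# --- SPI: clock and data lines; devices are selected by chip-select ---
spi = spidev.SpiDev()
spi.open(0, 0)            # bus 0, chip-select 0
spi.max_speed_hz = 500000
reply = spi.xfer2([0x01, 0x00])  # full-duplex transfer of two bytes
print("SPI reply:", reply)
spi.close()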
HDMI PORT

DESCRIPTION

HDMI (High-Definition Multimedia Interface) is a proprietary audio/video
interface for transmitting uncompressed video data and compressed or
uncompressed digital audio data from an HDMI-compliant source device, such as
a display controller, to a compatible computer monitor, video projector,
digital television, or digital audio device. HDMI is a digital replacement for
analog video standards.

HDMI provides an interface between any audio/video source, such as a set-


top box, DVD player, or A/V receiver and an audio and/or video monitor, such as a
digital television (DTV), over a single cable. HDMI supports standard, enhanced,
or high-definition video, plus multi-channel digital audio on a single cable

HDMI CABLE

TYPES OF HDMI CABLES


 TYPE A
 TYPE B
 TYPE C
 TYPE D
 TYPE E

TYPE A:

The plug (male) connector outside dimensions are 13.9 mm × 4.45 mm,
and the receptacle (female) connector inside dimensions are 14 mm ×
4.55 mm. There are 19 pins, with bandwidth to carry all SDTV, EDTV, HDTV,
and 4K UHD modes. It is electrically compatible with single-link DVI-D.

TYPE B:

This connector is 21.2 mm × 4.45 mm and has 29 pins, carrying six


differential pairs instead of three, for use with very high-resolution displays such
as WQUXGA (3,840×2,400). It is electrically compatible with dual-link DVI-D,
but has not yet been used in any products. With the introduction of HDMI 1.3, the
maximum bandwidth of single-link HDMI exceeded that of dual-link DVI-D. As
of HDMI 1.4, the pixel clock rate crossover frequency from single to dual-link has
not been defined.

TYPE C:

This Mini connector is smaller than the type A plug, measuring 10.42 mm ×
2.42 mm but has the same 19-pin configuration. It is intended for portable
devices. The differences are that all positive signals of the differential pairs are
swapped with their corresponding shield, the DDC/CEC Ground is assigned to pin
13 instead of pin 17, the CEC is assigned to pin 14 instead of pin 13, and the
reserved pin is 17 instead of pin 14. The type C Mini connector can be connected
to a type A connector using a type A-to-type C cable.

TYPE D:

This Micro connector shrinks the connector size to something resembling


a micro-USB connector, measuring only 6.4 mm × 2.8 mm. For comparison, a
micro-USB connector is 6.85 mm × 1.8 mm and a USB Type-A connector is
11.5 mm × 4.5 mm. It keeps the standard 19 pins of types A and C, but the pin
assignment is different from both.

TYPE E:

The Automotive Connection System has a locking tab to keep the cable from
vibrating loose and a shell to help prevent moisture and dirt from interfering with
the signals. A relay connector is available for connecting standard consumer cables
to the automotive type.
COMMUNICATION CHANNELS

HDMI has three physically separate communication channels: the DDC, TMDS, and
the optional CEC. HDMI 1.4 added ARC and HEC.

DISPLAY DATA CHANNEL (DDC)

The Display Data Channel (DDC) is a communication channel based on


the I²C bus specification. HDMI specifically requires the device implement
the Enhanced Display Data Channel (E-DDC), which is used by the HDMI source
device to read the E-EDID data from the HDMI sink device to learn what
audio/video formats it can take. HDMI requires that the E-DDC
implement I²C standard mode speed (100 kbit/s) and allows it to optionally
implement fast mode speed (400 kbit/s).

The DDC channel is actively used for High-bandwidth Digital Content


Protection (HDCP).

TRANSITION-MINIMIZED DIFFERENTIAL SIGNALING (TMDS)

Transition-minimized differential signaling (TMDS) on HDMI interleaves


video, audio and auxiliary data using three different packet types, called the Video
Data Period, the Data Island Period and the Control Period. During the Video Data
Period, the pixels of an active video line are transmitted. During the Data Island
period (which occurs during the horizontal and vertical blanking intervals), audio
and auxiliary data are transmitted within a series of packets. The Control Period
occurs between Video and Data Island periods.
Both HDMI and DVI use TMDS to send 10-bit characters that are encoded
using 8b/10b encoding that differs from the original IBM form for the Video Data
Period and 2b/10b encoding for the Control Period. HDMI adds the ability to send
audio and auxiliary data using 4b/10b encoding for the Data Island Period. Each
Data Island Period is 32 pixels in size and contains a 32-bit Packet Header, which
includes 8 bits of BCH ECC parity data for error correction and describes the
contents of the packet. Each packet contains four sub packets, and each sub packet
is 64 bits in size, including 8 bits of BCH ECC parity data, allowing for each
packet to carry up to 224 bits of audio data. Each Data Island Period can contain up
to 18 packets. Seven of the 15 packet types described in the HDMI 1.3a
specifications deal with audio data, while the other 8 types deal with auxiliary data.
Among these are the General Control Packet and the Gamut Metadata Packet. The
General Control Packet carries information on AVMUTE (which mutes the audio
during changes that may cause audio noise) and Color Depth (which sends the bit
depth of the current video stream and is required for deep color). The Gamut
Metadata Packet carries information on the color space being used for the current
video stream and is required for xvYCC.
CONSUMER ELECTRONICS CONTROL (CEC)

Consumer Electronics Control (CEC) is an HDMI feature designed to allow


the user to command and control up to 15 CEC-enabled devices, that are connected
through HDMI, by using only one of their remote controls (for example by
controlling a television set, set-top box, and DVD player using only the remote
control of the TV). CEC also allows for individual CEC-enabled devices to
command and control each other without user intervention.

It is a one-wire bidirectional serial bus that is based on the link protocol to perform
remote control function. CEC wiring is mandatory, although implementation of
CEC in a product is optional. It was defined in HDMI Specification 1.0 and
updated in HDMI 1.2, HDMI 1.2a and HDMI 1.3a (which added timer and audio
commands to the bus). USB to CEC adapters exist that allow a computer to control
CEC-enabled devices.

HDMI ETHERNET AND AUDIO RETURN CHANNEL

Introduced in HDMI 1.4, HDMI Ethernet and Audio Return Channel


(HEAC) adds a high-speed bidirectional data communication link (HEC) and the
ability to send audio data upstream to the source device (ARC). HEAC utilizes two
lines from the connector: the previously unused Reserved pin (called HEAC+) and
the Hot Plug Detect pin (called HEAC−). If only ARC transmission is required,
a single mode signal using the HEAC+ line can be used; otherwise, HEC is
transmitted as a differential signal over the pair of lines, and ARC as a common
mode component of the pair.
AUDIO RETURN CHANNEL (ARC)

ARC is an audio link meant to replace other cables between the TV and the
A/V receiver or speaker system. This direction is used when the TV is the one that
generates or receives the video stream instead of the other equipment. A typical
case is the running of an app on a smart TV such as Netflix, but reproduction of
audio is handled by the other equipment. Without ARC, the audio output from the
TV needs to be routed by another cable, typically TOS-Link or coax, into the
speaker system.

HDMI ETHERNET CHANNEL (HEC)

HDMI Ethernet Channel technology consolidates video, audio, and data


streams into a single HDMI cable, and the HEC feature enables IP-based
applications over HDMI and provides a bidirectional Ethernet communication at
100 Mbit/s. The physical layer of the Ethernet implementation uses
attenuated 100BASE-TX type signals on a single twisted pair for both transmit and
receive.
IR SENSOR

DESCRIPTION

An infrared sensor is an electronic device that emits infrared light in order
to sense some aspect of its surroundings. An IR sensor can measure the heat of
an object as well as detect motion. Sensors that only measure infrared
radiation, rather than emitting it, are called passive IR sensors.

IR SENSOR

Usually, all objects radiate some form of thermal radiation in the infrared
spectrum. This radiation is invisible to our eyes but can be detected by an
infrared sensor. In an active IR module, the emitter is simply an IR LED
(light-emitting diode) and the detector is simply an IR photodiode that is
sensitive to IR light of the same wavelength as that emitted by the IR LED.
When IR light falls on the photodiode, its resistance and output voltage
change in proportion to the magnitude of the IR light received.

CIRCUIT DIAGRAM
FEATURES

 Input voltage : 3.3v


 Output : analog

APPLICATION

 Radiation Thermometers
 Flame Monitor
 Moisture Analyzers
 Gas Analyzers
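
A minimal sketch of reading such a sensor from the Raspberry Pi is shown
below. It assumes a breakout module with a digital output wired to BCM pin 17
(a purely analog output would need an external ADC instead); the pin number
and active-low behavior are assumptions about the particular module.

import time
import RPi.GPIO as GPIO

IR_PIN = 17  # assumption: sensor OUT wired to BCM pin 17

GPIO.setmode(GPIO.BCM)
GPIO.setup(IR_PIN, GPIO.IN)

try:
    while True:
        # Many IR modules pull their output LOW when radiation is detected
        if GPIO.input(IR_PIN) == GPIO.LOW:
            print("IR detection!")
        time.sleep(0.1)
finally:
    GPIO.cleanup()  # release the pin on exit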

BATTERY(12 V)

DESCRIPTION

Batteries are a collection of one or more cells whose chemical reactions


create a flow of electrons in a circuit. All batteries are made up of three basic
components: an anode (the ‘-’ side), a cathode (the ‘+’ side), and some kind of
electrolyte (a substance that chemically reacts with the anode and cathode).

When the anode and cathode of a battery are connected to a circuit, a


chemical reaction takes place between the anode and the electrolyte. This reaction
causes electrons to flow through the circuit and back into the cathode where
another chemical reaction takes place. When the material in the cathode or anode is
consumed or no longer able to be used in the reaction, the battery is unable to
produce electricity. At that point, your battery is “dead.”

Batteries that must be thrown away after use are known as primary
batteries. Batteries that can be recharged are called secondary batteries.

The rectangular battery with rounded edges and a polarized snap connector at
the top was introduced for the early transistor radios. This snap-connector
format is commonly used in walkie-talkies, clocks, and smoke detectors, and as
backup power to keep the time in certain electronic clocks.

The format is commonly available in primary carbon-zinc and alkaline
chemistry, in primary lithium iron disulfide, and in rechargeable form in
nickel-cadmium, nickel-metal hydride, and lithium-ion. Mercury oxide batteries
in this form have not been manufactured in many years due to their mercury
content.

In a zinc-carbon dry cell, an electrochemical reaction between zinc and
manganese dioxide, mediated by a suitable electrolyte, delivers a potential of
1.5 volts between a zinc metal electrode and a carbon rod. The cell is
conveniently packaged in a zinc can, which also serves as the anode with a
negative potential, while the inert carbon rod is the positive cathode.
Individual cells and batteries can be connected in series to provide higher
voltages, which is how multi-cell packs such as the 12V battery used here are
built.
FEATURES

 Output voltage: 12V
 Current capacity: 2.5Ah

APPLICATIONS

 Walkie-talkies.
 Assorted electronics projects.
 A snap-style battery clip makes it easy to connect the battery to a development board such as an Arduino.
 The same style of clip is also used on some battery holders of assorted voltages.

USB CAMERA

Active WebCam captures images at up to 30 frames per second from any video
device, including USB cameras, analog cameras connected to a capture card,
TV boards, camcorders with a FireWire (IEEE 1394) interface, and network
cameras. When the program detects motion in the monitored area, it can sound
an alarm, e-mail you the captured images, and start broadcasting or recording
video. The program has features to add text captions and image logos to the
images, to place a date/time stamp on each video frame, and to adjust the
frame rate, picture size, and quality.

A webcam is a video camera that feeds or streams its image in real time to or
through a computer to a computer network. When "captured" by the computer, the
video stream may be saved, viewed, or sent on to other networks via systems
such as the internet, or e-mailed as an attachment. When sent to a remote
location, the video stream may be saved, viewed, or sent onward. Unlike an IP
camera (which connects using Ethernet or Wi-Fi), a webcam is generally
connected by a USB or similar cable, or built into computer hardware such as
laptops.
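
The same capture-and-detect-motion behaviour can be reproduced in this
project's Python toolchain with OpenCV. Below is a minimal frame-differencing
sketch, assuming a USB webcam at index 0; the pixel-difference threshold is an
arbitrary assumption to tune per scene.

import cv2

cap = cv2.VideoCapture(0)  # assumption: USB webcam at index 0
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(prev_gray, gray)  # per-pixel change between frames
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    if cv2.countNonZero(mask) > 5000:    # arbitrary "motion" threshold
        cv2.imwrite("motion.jpg", frame) # save the captured image
        print("motion detected")
    prev_gray = gray
    cv2.imshow("webcam", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()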

FEATURES

 Frame rate(30 frames per second and above)


 Resolution
 Continuous autofocus
 Microphones

APPLICATIONS
 Video calling and video conferencing.
 Video monitoring.
 Health care (for capturing arterial pulse rate).
 Input control devices.
 Commerce (e.g. Webcam Social Shopper)
 Home security

MOTOR DRIVER

DESCRIPTION

The L293D is a typical motor driver IC which allows a DC motor to be driven in
either direction. It is a 16-pin IC that can control a set of two DC motors
simultaneously, in any direction; in other words, you can control two DC
motors with a single L293D IC.

L293D is a dual H-bridge motor driver integrated circuit (IC). Motor drivers act as
current amplifiers since they take a low-current control signal and provide a
higher-current signal. This higher current signal is used to drive the motors.
L293D contains two inbuilt H-bridge driver circuits. In its common mode of
operation, two DC motors can be driven simultaneously, both in forward and
reverse direction. The motor operations of two motors can be controlled by input
logic at pins 2 & 7 and 10 & 15. Input logic 00 or 11 will stop the corresponding
motor. Logic 01 and 10 will rotate it in clockwise and anticlockwise directions,
respectively.

Enable pins 1 and 9 (corresponding to the two motors) must be high for motors to
start operating. When an enable input is high, the associated driver gets enabled.
As a result, the outputs become active and work in phase with their inputs.
Similarly, when the enable input is low, that driver is disabled, and their outputs
are off and in the high-impedance state.
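
That control scheme translates directly to Python on the Raspberry Pi. In the
sketch below the GPIO pin numbers are assumptions about the wiring; the input
truth table (00/11 stop, 01/10 direction) is the one described above, and PWM
on the enable pin sets the speed.

import time
import RPi.GPIO as GPIO

IN1, IN2 = 23, 24  # assumption: L293D inputs 1 and 2 wired to BCM 23 and 24
EN1 = 25           # assumption: enable pin 1 wired to BCM 25

GPIO.setmode(GPIO.BCM)
GPIO.setup([IN1, IN2, EN1], GPIO.OUT)

pwm = GPIO.PWM(EN1, 1000)  # 1 kHz PWM on the enable pin controls speed
pwm.start(75)              # 75% duty cycle

GPIO.output(IN1, GPIO.LOW);  GPIO.output(IN2, GPIO.HIGH)  # 01: one direction
time.sleep(2)
GPIO.output(IN1, GPIO.HIGH); GPIO.output(IN2, GPIO.LOW)   # 10: reverse
time.sleep(2)
GPIO.output(IN1, GPIO.LOW);  GPIO.output(IN2, GPIO.LOW)   # 00: stop

pwm.stop()
GPIO.cleanup()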

Features

 Easily compatible with any of the system


 Easy interfacing through FRC (Flat Ribbon Cable)
 External Power supply pin for Motors supported
 Onboard PWM (Pulse Width Modulation) selection switch
 2pin Terminal Block (Phoenix Connectors) for easy Motors Connection
 Onboard H-Bridge base Motor Driver IC (L293D)

Technical Specification:

 Power Supply : Over FRC connector 5V DC


 External Power 9V to 24V DC
 Dimensional Size : 44mm x 37mm x 14mm (l x b x h)
 Temperature Range : 0°C to +70 °C

GPS MODULE

DESCRIPTION

The Global Positioning System (GPS), originally Navstar GPS, is a space-based
radio navigation system owned by the United States government and operated by
the United States Air Force. It is a global navigation satellite system that
provides geolocation and time information to a GPS receiver anywhere on or
near the Earth where there is an unobstructed line of sight to four or more
GPS satellites.

The GPS system does not require the user to transmit any data, and it
operates independently of any telephonic or internet reception, though these
technologies can enhance the usefulness of the GPS positioning information. The
GPS system provides critical positioning capabilities to military, civil, and
commercial users around the world. The United States government created the
system, maintains it, and makes it freely accessible to anyone with a GPS receiver.

GPS MODULE

The Global Positioning System (GPS) is a global navigation satellite system that
provides location and time information in all weather conditions. The GPS
operates independently of any telephonic or internet reception, though these
technologies can enhance the usefulness of the GPS positioning information. GPS
satellites transmit signal information to earth. This signal information is received
by the GPS receiver in order to measure the user’s correct position.

The GPS concept is based on time and the known position of specialized
satellites. GPS satellites continuously transmit their current time and position. A
GPS receiver monitors multiple satellites and solves equations to determine the
precise position of the receiver and its deviation from true time. At a minimum,
four satellites must be in view of the receiver for it to compute four unknown
quantities.

Each GPS satellite continually broadcasts a signal (carrier wave with


modulation) that includes a pseudorandom code (sequence of ones and zeros) that
is known to the receiver and a message that includes the time of transmission
(TOT) of the code epoch and the satellite position at that time.
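
On the Raspberry Pi, position can be read from such a module over a serial
port as standard NMEA sentences. The following is a minimal sketch under
assumptions: the module appears at /dev/ttyS0 at 9600 baud (this depends on
wiring and Pi configuration), and the pyserial and pynmea2 packages are used
for the serial link and sentence parsing.

import serial   # pip install pyserial
import pynmea2  # pip install pynmea2

# Assumption: GPS TX wired to the Pi UART, visible as /dev/ttyS0 at 9600 baud
port = serial.Serial("/dev/ttyS0", baudrate=9600, timeout=1)

while True:
    line = port.readline().decode("ascii", errors="replace").strip()
    # $GPGGA sentences carry the fix: time, latitude, longitude, altitude
    if line.startswith("$GPGGA"):
        msg = pynmea2.parse(line)
        print("lat:", msg.latitude, "lon:", msg.longitude)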

FEATURES

 Supply voltage: 12v DC


 Interface: UART RS232
 Optional T-TL uart also available
 Precision: 5 meters
 Automatic antenna switching function

APPLICATIONS

 GPS trackers
 Automated vehicle
 Robotics
 Fleet tracking

ROBOTIC VEHICLE

DESCRIPTION

The field of robotics encompasses a broad spectrum of technologies in


which computational intelligence is embedded in physical machines, creating
systems with capabilities far exceeding the core components alone. Such robotic
systems are then able to carry out tasks that are unachievable by conventional
machines, or even by humans working with conventional tools. The ability of a
machine to move by itself, that is, “autonomously,” is one such capability that
opens up an enormous range of applications that are uniquely suited to robotic
systems. This chapter describes such unmanned and autonomous vehicles and
summarizes their development and application within the international perspective
of this study.

Robotic vehicles are machines that move “autonomously” on the ground, in


the air, undersea, or in space. Such vehicles are “unmanned,” in the sense that no
humans are on board. In general, these vehicles move by themselves, under their
own power, with sensors and computational resources onboard to guide their
motion. However, such “unmanned” robotic vehicles usually integrate some form
of human oversight or supervision of the motion and task execution. Such
oversight may take different forms, depending on the environment and application.
It is common to utilize so-called “supervisory control” for high-level observation
and monitoring of vehicle motion. In other instances, an interface is provided for
more continuous human input constituting a “remotely operated vehicle,” or ROV.
In this case, the ROV is often linked by cable or wireless communications in order
to provide higher bandwidth communications of operator input. In the evolution of
robotic vehicle technology that has been observed in this study, it is clear that a
higher level of autonomy is an important trend of emerging technologies, and the
ROV mode of operation is gradually being replaced by supervisory control of
autonomous operations.
ROBOTIC VEHICLE

Such vehicles are "unmanned" in the sense that no humans are on board. They
move by themselves, under their own power, with sensors and computational
resources onboard to guide their motion. Robotic vehicles are capable of
traveling where people cannot go, or where the hazards of human presence are
great.

A full vehicle robot set contains two DC gear motors. The machine uses a
minimum of mechanical parts, resulting in a high-quality robot. The motors can
be controlled in two ways: directly by pulses from the microcontroller, or by
means of relay switches. The vehicle can move in the forward and reverse
directions to detect objects.
To reach the surface of Mars, for example, a spacecraft must travel for the
better part of a year, and on arrival the surface has no air, water, or
resources to support human life.

FEATURES

 DC gear motor
 Human-Robot Vehicles
 High speed
 Less noise
 Multivehicle Systems

APPLICATIONS

 Industrial products.
 Lab automation.
 Military and law enforcement.
 Recreation and hobby.

SD CARD PORT

DESCRIPTION

Secure Digital (SD) cards are removable flash-based storage devices that are
gaining in popularity in small consumer devices such as digital cameras,
PDAs, and portable music devices. Their small size, relative simplicity, low
power consumption, and low cost make them an ideal solution for many
applications.

This interface was originally described in an application note for the Texas
Instruments MSP430, a low-power 16-bit microcontroller. Combined with the
MSP430, it can form the foundation for a low-cost, long-life data logger,
media player, or recorder.

SD CARD PORT IN RASPBERRY PI

The SD card standard is a standard for removable memory storage designed and
licensed by the SD Card Association. It is largely a collaborative effort by
three manufacturers, Toshiba, SanDisk, and MEI, and grew out of an older
standard, MultiMediaCard (MMC). The card form factor, electrical interface,
and protocol are all part of the SD Card specification. The SD standard is not
limited to removable memory storage devices and has been adapted to many
different classes of devices, including 802.11 cards, Bluetooth devices, and
modems.

SD CARD PIN DETAILS

The SD 1-bit protocol is a synchronous serial protocol with one data line,
used for bulk data transfers, one clock line for synchronization, and one
command line, used for sending command frames. The SD 1-bit protocol
explicitly supports bus sharing: a simple single-master arbitration scheme
allows multiple SD cards to share a single clock and DAT0 line.

The SD 4-bit protocol is nearly identical to the SD 1-bit protocol. The main
difference is the bus width: bulk data transfers occur over a 4-bit parallel
bus instead of a single wire. With proper design, this has the potential to
quadruple the throughput for bulk data transfers. Both the SD 1-bit and 4-bit
protocols by default require CRC protection of bulk data transfers. A CRC, or
cyclic redundancy check, is a simple method for detecting the presence of
simple bit-inversion errors in a transmitted block of data.

In SD 4-bit mode, the input data is multiplexed over the four bus (DAT) lines
and the 16-bit CRC is calculated independently for each of the four lines. In
an all-software implementation, calculating the CRC under these conditions can
be so complex that the computational overhead may mitigate the benefits of the
wider 4-bit bus. A 4-bit parallel CRC is trivial to implement in hardware,
however, so custom ASIC or programmable-logic solutions are more likely to
benefit from the wider bus.
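
For illustration, the CRC applied to SD data blocks is CRC-16-CCITT
(polynomial 0x1021, initial value 0). A straightforward bitwise version in
Python, shown as a sketch rather than an optimized routine, looks like this:

def crc16_ccitt(data, crc=0x0000):
    """CRC-16-CCITT (polynomial 0x1021, initial value 0), as used for SD data."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

# Example: checksum of a 512-byte block of 0xFF bytes
print(hex(crc16_ccitt(b"\xff" * 512)))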

BLOCK READ

The block read command is a bulk data command. The command response is
followed by a delay, then a 'start of block' token, and then the block itself.

In order to make the most efficient use of resources and enable fast block
transfers, the block read function uses the DMA (Direct Memory Access)
controller on the MSP430. First, the command is sent and the response is
received. Then, the function waits until the start token is received. When it is
received, the function starts a DMA transfer. Since SPI requires that a byte
be sent for a byte to be received, two DMA units are used to complete the
transfer.

DMA0 is triggered by a UART receive. The source for the DMA transfer is
the USART receive buffer, U0RXBUF.

The source is set to byte-wide, no increment. The destination for the DMA
transfer is the data buffer. The destination is set to byte-wide, with an
increment. The count is fixed at 512, the default block size for a typical SD
card.

DMA1 is also triggered by a UART receive. The source for this register is a
constant 0xFF (the idle bus condition).

The output is the USART transmit buffer, U0TXBUF.

DMA priorities ensure that a byte will be received before a new 0xFF (idle
byte) is sent. Since both DMA units use the same trigger, DMA0 will always
be serviced before DMA1.

Finally, the receive and transmit interrupt flags are reset and the entire block
transfer is triggered by manually sending a single idle byte.
The function sd_read_block() implements the block read. The function will
return immediately and normal program execution can continue while the
block transfer finishes. sd_wait_notbusy() can be used to synchronously wait
for any pending block transfers to finish.
BLOCK WRITE

The block write is similar to the block read function in that it uses a DMA
transfer and also starts with a data token. However, since no bytes need to be
received during the block transfer, the block write requires only one DMA
trigger.
DMA0 is triggered by a UART send. The destination for the DMA transfer is the
USART transmit buffer, U0TXBUF. The destination is set to byte-wide, no
increment. The source for the DMA transfer is the data buffer. The source is
set to byte-wide, with an increment. The count is fixed at 512, the default
block size for a typical SD card.

Finally, the receive and transmit interrupt flags are reset and the entire block
transfer is triggered by manually sending a single idle byte.

The function sd_write_block() implements the block write. The function will
return immediately and normal program execution can continue while the
block transfer finishes. sd_wait_notbusy() can be used to synchronously wait
for any pending block transfers to finish.

PYTHON

Python features a dynamic type system and automatic memory management.


It supports multiple programming paradigms, including object-
oriented, imperative, functional and procedural, and has a large and
comprehensive standard library.

Python interpreters are available for many operating systems. CPython, the
reference implementation of Python, is open-source software and has a
community-based development model, as do nearly all of its variant
implementations. CPython is managed by the non-profit Python Software
Foundation.

Python is a multi-paradigm programming language. Object-oriented


programming and structured programming are fully supported, and many of its
features support functional programming and aspect-oriented
programming (including by metaprogramming and metaobjects (magic
methods)). Many other paradigms are supported via extensions, including design
by contract and logic programming.

Python uses dynamic typing and a combination of reference counting and a
cycle-detecting garbage collector for memory management. It also features
dynamic name resolution (late binding), which binds method and variable names
during program execution.

Python's design offers some support for functional programming in the Lisp
tradition. It has filter(), map(), and reduce() functions; list
comprehensions, dictionaries, and sets; and generator expressions. The
standard library has two modules (itertools and functools) that implement
functional tools borrowed from Haskell and Standard ML.
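
A few of those functional tools as they appear in ordinary Python code:

from functools import reduce
import itertools

nums = [1, 2, 3, 4, 5]

evens = list(filter(lambda n: n % 2 == 0, nums))  # [2, 4]
squares = list(map(lambda n: n * n, nums))        # [1, 4, 9, 16, 25]
total = reduce(lambda a, b: a + b, nums)          # 15
pairs = list(itertools.combinations(nums, 2))     # all 2-element pairs
odd_squares = [n * n for n in nums if n % 2]      # list comprehension: [1, 9, 25]

print(evens, squares, total, len(pairs), odd_squares)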

The language's core philosophy is summarized in the document The Zen of Python
(PEP 20), which includes aphorisms such as:

 Beautiful is better than ugly


 Explicit is better than implicit
 Simple is better than complex
 Complex is better than complicated
 Readability counts

Rather than having all of its functionality built into its core, Python was designed
to be highly extensible. This compact modularity has made it particularly popular
as a means of adding programmable interfaces to existing applications. Van
Rossum's vision of a small core language with a large standard library and easily
extensible interpreter stemmed from his frustrations with ABC, which espoused the
opposite approach.

While offering choice in coding methodology, the Python philosophy rejects


exuberant syntax (such as that of Perl) in favor of a simpler, less-cluttered
grammar. As Alex Martelli put it: "To describe something as 'clever'
is not considered a compliment in the Python culture." Python's philosophy rejects
the Perl "there is more than one way to do it" approach to language design in favor
of "there should be one—and preferably only one—obvious way to do it".

Python's developers strive to avoid premature optimization, and reject patches
to non-critical parts of CPython that would offer marginal increases in speed
at the cost of clarity. When speed is important, a Python programmer can move
time-critical functions to extension modules written in languages such as C,
or use PyPy, a just-in-time compiler. Cython is also available, which
translates a Python script into C and makes direct C-level API calls into the
Python interpreter.

An important goal of Python's developers is keeping it fun to use. This is reflected


in the language's name—a tribute to the British comedy group Monty Python—and
in occasionally playful approaches to tutorials and reference materials, such as
examples that refer to spam and eggs (from a famous Monty Python sketch)
instead of the standard foo and bar.

A common neologism in the Python community is pythonic, which can have a


wide range of meanings related to program style. To say that code is pythonic is to
say that it uses Python idioms well, that it is natural or shows fluency in the
language, that it conforms with Python's minimalist philosophy and emphasis on
readability. In contrast, code that is difficult to understand or reads like a rough
transcription from another programming language is called unpythonic.

RESULT OF IMPLEMENTATION :

o Enhanced Efficiency: By integrating cutting-edge technologies like cameras,
GPS, IoT communication, robotic mobility, and infrared sensors, the system can
rapidly and accurately identify living individuals in high-risk environments
[T4]. This leads to more efficient rescue operations and quicker response
times.

o Improved Safety: The real-time monitoring and coordination enabled by these
systems enhance the safety of both the rescue teams and the individuals in
distress [T4]. By providing accurate information on the presence of living
individuals, the system helps minimize risks and ensures a safer rescue
operation.

o Increased Accuracy: The use of sensors and data processing algorithms
improves the accuracy of human detection in challenging circumstances [T4].
This results in fewer false alarms and more precise identification of
individuals who require assistance.

o Remote Monitoring and Control: The IoT communication capabilities allow for
remote monitoring and control of the system, enabling rescue teams to
coordinate operations from a distance [T2]. This remote accessibility enhances
the system's flexibility and responsiveness.

o Adaptability in Rugged Terrain: The robotic modules with DC motors and
batteries offer mobility in rugged terrain, allowing the system to navigate
difficult environments with ease [T2]. This adaptability increases the
coverage area and the system's effectiveness in reaching individuals in
inaccessible locations.

o Mitigation of Impact: By improving the efficiency and effectiveness of
rescue operations, these systems ultimately save lives and mitigate the impact
of disasters and conflicts [T2]. The timely detection and response to human
presence in critical situations can significantly reduce casualties and
improve overall outcomes.
COMPARISON TABLE:
Aspect             | Existing System                           | Proposed System
Sensor Integration | Relies on limited sensor deployments      | Integrates advanced sensors (infrared sensors, cameras, GPS) for comprehensive coverage
Data Processing    | Manual reconnaissance and limited real-time capabilities | Uses the Raspberry Pi for data processing, enabling real-time monitoring and response coordination
Communication      | Limited IoT communication capabilities    | Uses IoT communication for remote monitoring and optimized resource allocation
Robotic Mobility   | Limited mobility and coverage             | Robotic modules with DC motors and a 12V battery for enhanced mobility in rugged terrain
Efficiency         | Inefficiencies in accuracy and timeliness | Enhances efficiency and accuracy in identifying living individuals in critical situations
Safety             | Potential risks due to lack of integration | Improves safety through real-time monitoring and coordination, minimizing risks
Impact             | May not provide comprehensive coverage or accurate data | Significantly improves the efficiency, accuracy, and safety of rescue operations, ultimately saving lives
CONCLUSION:

In conclusion, the creation of an Internet of Things-based living human
identification system for conflict and disaster zones represents a major
advancement in rescue and relief activities. Modern technologies including
infrared sensors, robotic mobility, GPS, cameras, and Internet of Things
connectivity are all integrated into one system to provide a complete solution
for recognizing living people in emergency circumstances. The system improves
the effectiveness, precision, and safety of rescue operations through rigorous
design, testing, and optimization, ultimately saving lives and lessening the
effects of disasters and conflicts. By offering real-time insights, enabling
remote monitoring, and optimizing resource allocation, the proposed system
equips rescue teams with the instruments needed to respond to crises
efficiently and to traverse dangerous regions confidently. Going ahead,
sustained investigation and creativity in this field will further expand the
capabilities of IoT-based systems, leading to advancements in global
humanitarian aid and crisis management.
FUTURE SCOPE:

o Integration with Advanced Sensors:


 Explore the integration of advanced sensors such as infrared
sensors, thermal cameras, or even LiDAR for improved
detection capabilities. This can enhance the system's ability to
detect humans in various environmental conditions.

o Machine Learning and AI Algorithms:


 Implement machine learning algorithms for more accurate and
efficient human detection. These algorithms can adapt and
improve over time, learning from real-world scenarios and
reducing false positives.

o Real-time Communication and Data Analysis:


 Enhance the communication capabilities of the system to
transmit real-time data to command centers. Utilize edge
computing to process data locally on the Raspberry Pi, reducing
latency and improving response times.

o Drone Integration:
 Integrate the system with drones equipped with cameras to
provide aerial surveillance. Drones can cover larger areas and
provide a comprehensive view of the situation, enhancing the
overall effectiveness of the system.

o Energy Efficiency and Autonomy:


 Develop energy-efficient algorithms to prolong the Raspberry
Pi's battery life in remote areas where power sources may be
limited. Consider solar power options for sustainability and
prolonged deployment.

o Localization and Mapping:


 Implement localization and mapping features to track the
movement of detected individuals. This can aid in coordinating
rescue or military operations more effectively.

o Human Health Monitoring:


 Integrate health monitoring sensors to assess the vital signs of
detected individuals. This information can be crucial for
prioritizing rescue efforts and providing immediate medical
attention.

o Robust Security Measures:


 Implement robust security measures to protect the system from
unauthorized access or tampering, especially in military
applications where the system's integrity is critical.

o Collaboration with Emergency Services:


 Establish protocols for seamless collaboration with emergency
services and military units. The system should be designed to
integrate with existing communication channels and emergency
response frameworks.

o Global Deployment and Standardization:


 Work towards standardization of such systems to ensure
interoperability and compatibility across different regions and
countries. This can facilitate global deployment and
collaboration during international crises.

o Humanitarian Aid and Civilian Use:


 Explore applications beyond military use, such as humanitarian
aid and disaster response. The system could be adapted for
search and rescue operations, providing crucial support in
natural disasters or other emergencies.
REFERENCES :

[1] Aiman Neha and Baswaraj Gadgay, "IoT Based Rescue Robot for Alive Human
Detection and Health Monitoring System," The International Journal of
Analytical and Experimental Modal Analysis, vol. XII, no. VI, June 2020,
ISSN 0886-9367.

[2] N. Farooq, U. Ilyas, M. Adeel and S. Jabbar, "Ground Robot for Alive Human
Detection in Rescue Missions," 2018 International Conference on Intelligent
Informatics and Biomedical Sciences (ICIIBMS), Bangkok, Thailand,
21-24 Oct. 2018, DOI: 10.1109/ICIIBMS.2018.8550003.

[3] G. Purnima and S. Aravind, "Alive Human Body Detection System Using
Autonomous Rescue Robot," International Journal of Emerging Technology and
Advanced Engineering, December 2014.

[4] S. Bhatia, H. Singh and N. Kumar, "Alive Human Detection Using an
Autonomous Mobile Rescue Robot," 2011 Annual IEEE India Conference,
16-18 December 2011, DOI: 10.1109/INDCON.2011.6139388.

[5] M. D. Machaiah and S. Akshay, "IoT Based Human Search and Rescue Robot
using Swarm Robotics," International Journal of Engineering and Advanced
Technology (IJEAT), vol. 8, no. 5, June 2019, ISSN 2249-8958.

[6] P. K. Ranveer, A. Kulkarni, S. Vaste and N. Tarte, "Alive Human Detection
Robot Using Wifi," International Engineering Research Journal (IERJ), vol. 2,
no. 1, pp. 370-372, 2016.

[7] D. Jadhav, S. V. Chobe, M. Vaibhav and L. Khandare, "Missing Person
Detection System in IoT," 2017 International Conference on Computing,
Communication, Control and Automation (ICCUBEA),
DOI: 10.1109/ICCUBEA.2017.8463857.

[8] E. Guizzo, "Japan Earthquake: Robots Help Search For Survivors," IEEE
Spectrum, March 2011. [Online]. Available: http://spectrum.ieee.org.

[9] Ko and H. Y. K. Lau, "Intelligent Robot-assisted Humanitarian Search and
Rescue System," International Journal of Advanced Robotic Systems, vol. 6,
no. 2, pp. 121-128, 2009.

[10] N. Gupta, D. Panchal Dangi, D. Desai and M. V. Patel, "Live Human
Detection Robot," International Journal for Innovative Research in Science &
Technology, vol. 1, no. 6, pp. 293-297, 2014.

[11] S. Jabbar et al., "Heuristic Approach for Stagnation Free Energy Aware
Routing in Wireless Sensor Networks," AHSWN, Old City Publishing, Switzerland,
vol. 31, pp. 21-45, 2016.
