


2021 National Conference on
Recent Trends in Engineering

COLLEGE OF ENGINEERING TRIKARIPUR CHEEMENI


13/12/21 & 14/12/21

NATCON21

Proceedings of Papers

JOINTLY CONDUCTED BY
ELECTRICAL AND ELECTRONICS ENGG. DEPT. (NBA)
COMPUTER SCIENCE ENGG. DEPT.
ELECTRONICS AND COMMUNICATION ENGG. DEPT.

Technical Co-Sponsors:
ARIES BIOMED Technology
ATHEENAPANDIAN Trainers
IEEE Bulgaria Section
Part Number: CFP21UWE-ART
National Conference on Recent Trends in Engineering

COLLEGE OF ENGINEERING Trikaripur, Cheemeni

on 13/12/21 & 14/12/21

MODE: GOOGLE MEET

ABOUT THE INSTITUTION:

Established in the year 2000 under the auspices of the Co-operative Academy of
Professional Education (CAPE), College of Engineering Trikaripur is a
fast-growing engineering college in north Kerala. It offers undergraduate
courses leading to the award of the B.Tech degree of APJ Abdul Kalam
Technological University. It is NAAC and ISO accredited, and the EEE
department is NBA accredited.

ABOUT THE CONFERENCE:

The conference aims to bring together leading academic scientists,
researchers, scholars, and students from across the globe to exchange and
share their experiences, research results, and developments in all new
technologies in the field of Engineering.

The whole idea of the conference is to exchange thoughts and ideas in the
concerned fields and to translate them in real time to solve problems. The
conference is a rare opportunity for all individuals in the engineering
community to learn about the latest technologies and strategies.

TOPICS OF INTEREST INCLUDE, BUT ARE NOT LIMITED TO:

Computer Vision
Robotics
Machine Learning
Cloud Computing
Signal Processing
Automation and Control
Embedded & VLSI
Power Engineering
Soft Computing
KEYNOTE DETAILS

On 13/12/21, Prof. Nishchal K Verma (IIT Kanpur) delivered a "Talk on Research


Trends in Artificial Intelligence" . He started by wishing the participants and the
officials. He then went on to explain the importance of the AI in recent years. He then
started explaining the basics of neural network. His explained with lot many real time
industrial examples, that enlightened the audience. He pointed that, many recent
advances in AI were made possible by deep learning.

From recommendations on streaming services to voice assistant technologies to


autonomous driving, the ability to identify patterns and classify many different types
of information is crucial for processing vast amounts of data with little to no human
input. He finally concluded by interacting with the faculty and the students and
answered their doubts. The online meeting concluded by 11.25 AM
On 14/12/21, Prof. Deepak Mishra (IIST, Trivandrum) took the stage and carried
forward the previous day's topic. His talk was on Research Scope in Computer
Vision. He explained the potential of computer vision and image processing in
their own unique ways, and discussed the use of image processing and computer
vision in retail, healthcare, and many other industries.

He also pointed out that these technologies can evolve business operations
that include a visual aspect, and can simplify business processes such as
quality control, inventory management, and medical imaging. He showed many
interesting results in deep learning and finally interacted with the
engineering students and faculty. The meeting concluded by 11:30 AM.
CHIEF PATRON

Dr. R. Sasikumar, Director, CAPE.

PATRON

Dr. Vinod Pottakulath, Principal, CETKR.

COORDINATORS OF NATCON21

Prof. J. Mohanalin, HOD, EEE

Prof. Naveena AK, HOD, CSE

Prof. Mahesh VV, HOD, ECE

ADVISORY COMMITTEE

Prof. Domenec Puig, URV, Spain.
Prof. Jordina Torrents, URV, Spain.
Prof. Nischal Verma, EEE, IIT Kanpur.
Prof. Sarith P Satyan, App. Mech., IIT Ch.
Prof. Vrinda V. Nair, SPFU Director.
Prof. Deepak Mishra, Avionics, IIST.
Prof. Yachin, Maajma University, Saudi Arabia.
Lt. Col. P J Anand Kumar, CEO, CriticalAI.
Prof. Sherali Zeadally, CCI, USA.
Prof. Rajiv Tripathi, EEE, NIT Delhi.
Prof. N. Sivakumaran, ICE, NIT Trichy.
Prof. Ojus Thomas Lee, COE Kidangoor.
Prof. Seldev, ECE, SXCCE, Tamil Nadu.
Prof. Jaya Prakash P, EEE, GEC Wayanad.
Dr. Leon Raj, Scientist, CSIR, Jorhat.
Prof. Smitha Sunil K Nair, CSE, Oman.


INDEX OF CONTENTS

Sl. No | Title | Authors | Pages
1 | Assessment and Analysis of Hyperloop: A Contemporary Transportation System | P Shylaja and P A Bibin | 1-5
2 | Secure Key Management System for Automotive Environment | Binitha Christudas and Jose T. Joseph | 6-10
3 | DNA Compression Using RLE and Huffman Encoder | Dr. Gomathi S, Dr. Ignisha Rajathi G and Dr. Gopala Krishnan C | 11-16
4 | Intelligent Parking System | Vedhapriyavadhana R, G Ignisha Rajathi, S. L. Jayalakshmi and R Girija | 17-22
5 | Detecting COVID-19 in X-ray Images Using Artificial Neural Network | Dr. Prinza L and Dr. Jerald Prasath G | 23-27
6 | Landslide Monitoring and Early Warning System Using Cluster of Sensors and Neural Network | Dr. Ignisha Rajathi G, Dr. Prinza L and Dr. Jerald Prasad G | 28-34
7 | Face Emotion Detection Using CNN and Haar Cascade Classifier | Sanjay Santhosh | 35-39
8 | A Rule Based Malayalam Parts of Speech Tagger | Anjali M K | 40-43
9 | A Survey on Alzheimer's Disease Diagnosis and Its Impacts on EEG | Dr. Prinza L, Dr. Beena Mol M, Dr. Ignisha Rajathi G and Dr. Jerald Prasath G | 44-49
10 | Joint Zeroforcing Sub-Optimal Power Allocation for Cooperative Wireless Networks | Priya L R, Ignisha Rajathi G and Allwyn Kingsly Gladston J | 50-54
11 | Smart Home Security Using LabVIEW | M. Lavanya, M. Arivalagan, Dr. P. Hosanna Princye and Dr. S. Sivasubramanian | 55-63
12 | Sensors for Structural Health Monitoring - An Experimental Evaluation | Pradeep Kumar P and Beenamol M | 64-71
13 | A Detailed Study on CAD Based Mammogram Techniques | Dr. Jerald Prasath G and Dr. L Prinza | 72-79
14 | An Image Super Resolution Deep Learning Model LapSRN with Transfer Learning | Athirasree Das, Dr. K. S. Angel Viji and Linda Sebastian | 80-84
15 | Amplitude Characteristics of Strong Ground Motion Accelerograms - A Review | Dr. M. Beena Mol, Dr. L. Prinza, Dr. Jerald, Dr. Johny Elton and Dr. Ignisha Rajathi | 85-90
16 | A Review on Power Generation Enhancements in a Pumped Storage Powerhouse by Using Appropriate Guide Vane Sealing Material | Dr. V. Sampathkumar, Dr. P. Sridharan and Dr. Parthiban KP | 91-102
17 | A Performance Comparison of MobileNet and VGG16 CNN Models in Plant Species Identification | Gargi Chandrababu, Ojus Thomas Lee and Rekha KS | 103-108
18 | A Novel Method of Rice Disease Detection Based on Deep Learning | Reenu Susan Joseph | 109-114
19 | Text Detection and Recognition from Signboards with Hazy Background | Revathy A S, Jyothis Joseph and Anitha Abraham | 115-119
20 | IoT Based Home Automation | M. Lavanya, M. Arivalagan, Dr. P. Hosanna Princye and Dr. S. Sivasubramanian | 120-129
21 | Wavelet Based Detection of the Arrival of Seismic P-Waves | Dr. M. Beena Mol, Dr. L. Prinza, Dr. Jerald, Dr. Johny Elton and Dr. Ignisha Rajathi | 130-136

Assessment and Analysis of Hyperloop: A Contemporary Transportation System


P Shylaja and P A Bibin
Kannur University

Abstract-

Hyperloop is a unique mode of transport connecting different countries at speeds of more than 1200 kmph. Its design ensures safety,
efficiency, and sustainability in a fixed guideway, a tube-based system originally proposed by Elon Musk. The system's "pod" works
using magnetic levitation in a low-pressure environment. The pod is designed to withstand the external low-pressure conditions and to
meet safety requirements without compromising comfort. The pod system is powered by onboard rechargeable batteries, so each pod
supplies its own power for operations such as levitation, acceleration, and control. The Hyperloop system involves experts in
engineering fields such as Mechanical, Electrical, Aeronautical, and Structural, as well as the business domain. Many leading
companies, such as Virgin Hyperloop, Hyperloop Transportation Technologies, TransPod, DGW Hyperloop, Hardt Global Mobility, Zeleros,
and Nevomo, have put forward their ideas on Hyperloop and are working on it. Virgin Hyperloop made a landmark in history by conducting
a test run with two people. With Hyperloop connectivity between different cities, people could live in one country and work in
another. This paper reviews the studies of Hyperloop conducted by researchers so far. In addition, the operation of the Hyperloop, its
internal structure, and the advantages and limitations of Hyperloop technology are analyzed and presented, which will help researchers
carry out effective research work.

Keywords: Hyperloop, Transportation, Pod, Maglev, Capsule

INTRODUCTION

Hyperloop is a very fast technology adopted for traveling long routes at speeds greater than aircraft, Maglev, and high-speed
trains. Elon Musk introduced the SpaceX design, the idea of traveling in a capsule, in 2012. He made it public as open source with
the intention that others would modify the design and make the idea commercially viable. Many companies have since put forward their
ideas and contributions towards Hyperloop [1]. The proposed travel time from Los Angeles to San Francisco is 35 minutes, with a
load/unload time of 59 minutes [2]. This 5th mode of transportation technology enables travel between two countries situated more
than 1500 kilometers apart at a speed of more than 1200 kmph, close to the speed of sound. The Virgin Hyperloop model has a seating
capacity of 28 in a single capsule with a maximum design speed of 1223 kmph, which is very high compared to the present Maglev
system, whose maximum design speed is 604 kmph. The present maximum speed achieved by Hyperloop is 387 kmph, whereas the present
Maglev speed is 431 kmph [3].

SYSTEM ANALYSIS

The conventional transportation modes available so far are road, water, rail, and air. These modes are comparatively
slower and costlier. Hyperloop is considered a blend of high-speed rail and airline systems. It is an ultra-new mode of
transportation, initially proposed by Elon Musk, that would propel people or cargo-filled pods at speeds of more than 1200 kmph
over long distances. The peculiarity of Hyperloop is that the technology minimizes friction and air resistance, so less power is
needed to run the pod at near the speed of sound [5-6]. Hyperloop is 3 times faster than high-speed rail and 10 times faster
than traditional rail [14].
China Aerospace Science and Industry Corporation and the North University of China began construction of a 2 km low-vacuum
test system expected to be complete by July 2022, to be extended to 15 km within 2 years for speeds up to 1000 kmph [7].
A Hyperloop model competition was announced under the leadership of Elon Musk in July, and 20 teams were selected from many
participants from different countries. One of them is a group of 30 students from IIT Madras, Chennai, who built a 3-meter-long
model Hyperloop named 'Avishkar'.
Wireless charging technology allows the Hyperloop to receive power while traveling. In addition, a solar panel array covering
the top surface of the track produces a large amount of energy without consuming petrol or diesel [4][8].

A. Two Versions
Hyperloop has two versions: a passenger-only version and a passenger plus vehicle version. The proposal stated a capacity for
transporting people, vehicles, and freight between Los Angeles and San Francisco within a time frame of 35 minutes. The
passenger-only version seats 28 passengers per capsule and can serve 840 passengers/hour with a typical departure interval of
2 minutes. The passenger plus vehicle version has the capacity to load 3 vehicles rather than passengers [26].

B. Pod

The pod [Figure 1] is designed using cutting-edge composite materials and is expected to move on a magnetic cushion inside the
tube provided. The shape of the pod is similar to an aircraft without wings. Each pod has a compressor at its front end that
passes pressurized air to the back end, reducing the drag that results from air friction while traveling [9-11].

Figure 1. Pod [5]


Attached solar panels and other renewable energy sources generate more energy than the system utilizes. Hyperloop uses a
propulsion system similar to Maglev and runs at a higher speed. Along the route, containers are placed every 6.2 miles. Many
companies have put effort into developing the pod and vacuum tubes. The pylon-supported tube contains many capsules, spaced an
average of 37 km apart [12-13]. The electromagnetically supported Maglev originally developed in Germany reached a maximum speed
of 420 kmph, though its design speed was 500 kmph [11]. The tube provides a low-pressure travel environment, protecting the pod
from all outer environmental conditions. Concrete pylons are built to support the tube [14].

Figure 2. Internal Structure of the Pod [6]

HOW HYPERLOOP WORKS

The two main parts of the Hyperloop are the tube and the capsule that carries people. Every land transport faces a certain
amount of air resistance and friction. There is almost no air resistance in Hyperloop, since the air around the capsule is pumped
out to a near vacuum. The air pressure in the tunnel is very low, and only one capsule at a time can move through the tunnel, at a
speed of 700-720 miles/hour. When an airplane flies at 150,000 feet above the earth, air friction is low, but in land transport air
friction is high. Most of the air has been removed from the tunnel, and friction is controlled by advanced software developed by
Virgin Hyperloop, which ensures smooth acceleration for the passengers [6]. Virgin Hyperloop, founded by Richard Branson, uses
Maglev technology instead of the air bearings proposed by Elon Musk. Much research has been going on in Magnetic Levitation
(Maglev) technology, especially by Inductrack in the USA and SC Maglev in Japan. Two magnetic trains using Maglev technology are
the Shanghai Transrapid in China and Linimo in Japan, in which trains levitate on the guideway for lift and propulsion [15-17].
Virgin Hyperloop Chief Technology Officer and co-founder Josh Giegel and the Head of Passenger Experience, Sara Luchian, were
the first two passengers to test ride the Hyperloop onboard the prototype named "Pegasus". They were moved into an airlock from
which the air was removed. The pod then accelerated to 100 mph down the test track before coming to a halt soon after. The test
track, located about 30 minutes from Las Vegas, is 500 meters in length and 3.3 meters in width [18]. The test reached a maximum
speed of 387 kmph, successfully completing a passenger trial on 8 November 2020. The company has carried out several hundred
unmanned tests so far. The Pegasus pod, nicknamed 'XP-2', was designed with the help of the Danish architect Bjarke Ingels. The
difference between an ordinary railway route and the Hyperloop route is that one can walk along a railway track and see the
passengers sitting inside through the windows, whereas in Hyperloop the capsule containing the passengers moves inside the tunnel,
and the capsule has no windows [19].
The Hyperloop is partially powered by solar energy generated from the solar panels fitted on top of the tubes. High-Temperature
Superconducting (HTS) suspension [20] and Electromagnetic Suspension (EMS) [1] can also assist the hyperloop pod suspension. The
tube is designed to provide flexibility and withstand vibrations caused by natural disasters. A linear induction motor provides
the forward movement of the capsule.

Figure 3. Capsule of Virgin Hyperloop One [20]


The capsule [Figure 3] is given an initial velocity by an external linear motor and must be re-energized every 30 seconds to
maintain speed. Since the air pressure in the tunnel is 1/6th of the pressure on Mars, the speed is comparatively high. Two ways
of increasing the speed of the capsule are: (1) connecting the motor to the side of the capsule, and (2) applying induction
through coils rolled inside the tunnel. The Maglev (magnetic levitation) concept is used here [21]. The capsule moves through the
evacuated tunnel at very high speed using the magnetism of the Maglev system, so there is none of the friction seen in wheeled
transportation. The pods are designed to float on air skis, using the idea of an air hockey table, or on magnetic levitation to
reduce friction [22]. Communication from the pod to the data processor happens by transferring sensor data over radio waves and
antennas, using optical fibers and the hardware installed on the pods based on the 802.11 Wi-Fi standard [23-24].

OPERATIONAL COST

Musk, in his alpha paper, estimated the cost of traveling from Los Angeles to San Francisco at US$20, assuming 7.4 million
passengers per year. The cost of the Hyperloop system proposed by Musk is $6 billion for the passenger-only version and
$7.5 billion for the passenger plus vehicle version, the latter covering both passenger and cargo capsules. Recent research found
that these figures may not be feasible, as there have been changes to the preliminary technical design [25]. Musk stated in his
white paper Hyperloop Alpha that the system makes economic sense for distances below 500 miles.

HYPERLOOP ROUTES SELECTED IN INDIA

Virgin Hyperloop announced a Global Challenge to find the best teams to propose suitable routes for the Hyperloop. Ten routes
across 5 countries were selected from the 2600 entries, of which 5 routes are in India. The details are shown in Table 1.
Table 1: Hyperloop routes in India [26]

Sl. No | Route | Distance | Date of MoU Signature | Company | Max. Permissible Speed
1 | Mumbai-Pune Hyperloop | 117.5 km | 16 November 2020 | Virgin Hyperloop One | 1223 kmph
2 | Anantapur-Vishakhapatnam | 700 km | 6 September 2017 | Hyperloop Transportation Technology | 1223 kmph
3 | Amritsar-Chandigarh | 226 km | 3 December 2019 | Virgin Hyperloop One | 1100-1300 kmph
4 | Delhi-Chandigarh | 240 km | Under discussion | Virgin Hyperloop One | 1100-1300 kmph
5 | Bengaluru-Kempegowda | 40 km | 27 September 2020 | Virgin Hyperloop One | 1080 kmph

ADVANTAGES AND LIMITATIONS

A. Advantages
Hyperloop is expected to be the fastest mode of transportation in the coming era. As the traveling capsule is completely
enclosed in the tube, it is safe from natural hazards such as ice, rain, fog, and changing weather conditions. Traffic problems
will not be a nightmare for travelers. Driverless operation can further improve the speed of the Hyperloop [3,4,18,26].

B. Limitations
The Hyperloop has fewer seats than other high-speed vehicles such as aircraft or high-speed rail. Bidirectional travel in a
single tube is yet to be achieved. Research is still needed on challenges such as land acquisition and the construction of routes
through highly populated urban areas. As the infrastructure is tube-like, the capsule has no windows, so passengers get no direct
airflow or outside view while traveling. Research is ongoing into the measures to be taken in case of emergencies. Lateral
g-forces may adversely affect passengers, as the route is not always straight, and velocity variations can cause health problems
for passengers. The existing Hyperloop system also lacks full-scale operation [3-5,16,26].

CONCLUSION

Many private and public stakeholders have taken an interest in the development of Hyperloop since the release of the white
paper by Elon Musk [1]. A rapid change in transportation has been seen after the evolution of the 5G-Hyperloop technique. Many
companies have put effort into the initial studies of selecting routes and developing pods, and the construction of Hyperloop
became a reality after the test run conducted by Virgin Hyperloop in November 2020 with 2 people. India has signed MoUs to start
initial work on five different routes, probably to be completed by 2030. This will be the fastest land transportation medium for
passengers and vehicles developed so far. This review may help further research and provide a pathway for identifying gaps in
Hyperloop development. Future research may focus on updated, more powerful technology and engagement with stakeholders.

REFERENCES

[1] Musk, E. (2013). Hyperloop Alpha. SpaceX, Texas. http://www.spacex.com/sites/spacex/files/hyperloop_alpha-20130812.pdf
[2] https://www.icomera.com/hyperlooptt-connects-with-icomera-traxside-for-wireless-communications/
[3] K. Armağan, "The fifth mode of transportation: Hyperloop", Journal of Innovative Transportation, vol. 1, no. 1, p. 1105, Jul. 2020.
[4] Vinay Pandey, Shyam Sasi Pallissery, "Hyperloop, Train of Future", International Journal of Scientific & Engineering Research, vol. 8, no. 2, February 2017, ISSN 2229-5518.
[5] https://www.wired.com/story/guide-hyperloop/ (accessed on 02.11.2021)
[6] Gieras, J. F. (2020). Ultra high-speed ground transportation systems: Current status and a vision for the future. Przeglad Elektrotechniczny. https://doi.org/10.15199/48.2020.09.01
[7] https://en.wikipedia.org/wiki/Hyperloop (accessed on 02.11.2021)
[8] Jia, P. Z., Razir, et al., "Consumer Desirability of the Proposed Hyperloop", University of California, Santa Barbara, USA, 2019.
[9] HyperloopTT Great Lakes Feasibility Study. https://www.greatlakehyperloop.com/results
[10] Delft Hyperloop, Safety Framework for the European Hyperloop Network. https://hyperloopconnected.org/2020/07/report-safety-framework-for-the-european-hyperloop-network/
[11] Ingo A. Hansen (2020). Hyperloop transport technology assessment and system analysis. Transportation Planning and Technology, 43:8, 803-820. DOI: 10.1080/03081060.2020.1828935
[12] Gkoumas, K.; Christou, M. A Triple-Helix Approach for the Assessment of Hyperloop Potential in Europe. Sustainability 2020, 12, 7868. https://doi.org/10.3390/su12197868
[13] https://www.tesla.com/sites/default/files/blog_images/hyperloop-alpha.pdf
[14] http://hyperloopconnected.org/2019/06/report-the-future-of-hyperloop/
[15] Steven Goddard, "The Hyperloop High Speed Transportation System: An Aerodynamic CFD Investigation of Nozzle Positions and Flow Phenomena", 10038749, University of the West of England.
[16] Mitropoulos, Lambros, et al. "The Hyperloop System and Stakeholders: A Review and Future Directions." Sustainability 13.15 (2021): 8430.
[17] Virgin Hyperloop. (2019). Hyperloop One. Retrieved from: https://bit.ly/31Nd4QU
[18] https://virginhyperloop.com/project/devloop (accessed on 23.10.2021)
[19] Sui, Yang, et al. "Numerical analysis of the aerothermodynamic behavior of a Hyperloop in choked flow." Energy 237 (2021): 121427.
[20] Hongliang Wang, Jian Li, Ronghai Qu, Junquan Lai, Hailin Huang, Hengkun Liu, "Study on High Efficiency Permanent Magnet Linear Synchronous Motor for Maglev", IEEE Transactions on Applied Superconductivity, vol. 28, no. 3, pp. 1-5, 2018.
[21] Goddard, E. C., "Vacuum tube transportation system," June 20, 1950, US Patent 2,511,979.
[22] Akhouri Amitanand Sinha and Suchithra Rajendran, "Can Hyperloops Substitute High Speed Rails in the Future?"
[23] Ai, B., Cheng, et al., "Challenges Toward Wireless Communications for High-Speed Railway", IEEE Trans. Intell. Transp. Syst., 2014, pp. 2143-2158.
[24] Small, K. A., "Valuation of travel time," Economics of Transportation, vol. 1, no. 1-2, 2012, pp. 2-14.
[25] https://www.indianeagle.com/travelbeats/hyperloop-transport-system-in-india/ (accessed on 17.11.2021)
[26] https://static.independent.co.uk/s3fs-public/thumbnails/image/2017/01/16/13/hyperloop-1.jpg


Secure Key Management System for Automotive Environment


Binitha Christudas and Jose T. Joseph
Government Engineering College Idukki

Abstract—
Automotive software security is a key agenda of automobile manufacturers. Most OEMs struggle to meet the demands of
securing in-vehicle networks, which are now vulnerable to threats that impact the confidentiality, integrity, availability and,
more importantly, the safety of passengers and the operation of the vehicle. This paper proposes a software security mechanism
for key management that accelerates the development of automotive software security.

Keywords—ECUs, Key Rotation, Key Generation, HSM

INTRODUCTION
In today's connected world, everyone receives data at a faster pace; within a matter of seconds the required information
is at our fingertips. The same software revolution in the automotive industry has demanded a large amount of electronics layered
over the traditional hydraulic and mechanical systems of a vehicle. ECUs (Electronic Control Units) have now become the superior
part of any vehicle. A modern vehicle contains around 150 ECUs that are crucial for the entire vehicle system. These ECUs run on
microcontrollers with an OS such as Linux or Android. Each ECU in a car has a unique functionality that is a cornerstone for
functions such as braking, ADAS, safety-critical applications, the sunroof, door locks, and autonomous driving. Automotive ECUs
are connected together in networks inside the car. These networks use different automotive buses, namely CAN (Controller Area
Network), LIN (Local Interconnect Network), Ethernet, BroadR-Reach, MOST, and FlexRay. The LIN network is predominantly used for
low data rate functions such as communication between ECUs, sensor units, and actuators; it is used for door locks, climate,
wiper control, and mirror adjustments. Its maximum data transfer speed is 19.2 kbps.
The CAN network is used for medium-rate data transmission and requires only two data lines, shielded by a sheath or
twisted pair. This network is based on the multi-master principle. Different body control ECUs, transmission control ECUs, and
powertrain control ECUs are connected using the CAN network. The maximum data rate for CAN is 1 Mbps over 40 m. FlexRay offers
fully deterministic, low-latency, high-speed transmission. Safety-critical applications such as steer-by-wire and brake-by-wire
are managed by FlexRay. It supports different bus systems such as a passive bus and hybrid, active star topologies, each using
two channels, and a cascaded two-level star/bus hybrid. MOST stands for Media Oriented Systems Transport; this network is applied
specifically to the multimedia and infotainment bus system.
Ethernet is used for data packets received from outside the vehicle and processed inside the in-vehicle network. These data
packets are then sent to the respective domains, such as the ADAS and infotainment ECUs. The main applications are measurement,
calibration, and diagnostics via DoIP. All these networks, buses, and ECUs are vulnerable to different security threats. The CAN
bus is considered the most vulnerable to cyber security attacks: accessing ECUs via the CAN bus gives an attacker the opportunity
to effectively control the car. Connectivity with mobile devices and other infrastructure also adds attack vector points. Thus it
is crucial to implement effective security mechanisms for the bus and the ECUs.
Traditional cyber security mechanisms like encryption and decryption use a symmetric or asymmetric key. In the automotive
industry, the keys are sent from the OEM server to the ECU. These keys are encryption keys (private keys) used to decrypt data
sent to the ECU. During the development phase of the ECU, the OEM will request the hard-coded keys. The keys have to be
continuously replaced from the SOP (start of the project) until production. The different keys include the BOOT MAC key, memory
region access keys, signature verification keys, etc. All these keys can be randomly generated using an HSM engine, which has an
RNG (random number generator) engine. The HSM is a hardware security module used for encryption; this reduces the software file
system in the TCU. The HSM is used to create and store cryptographic keys inside a hardware chip. The keys stored in the HSM core
give the digital identity needed for authenticating the vehicle.

SCENARIO
A. Background
Many studies have been carried out on securing data in ad hoc networks, especially MANETs. In [1], data security in
heterogeneous MANETs is addressed efficiently, but without particular attention to encryption keys in vehicular HSMs. Although
key management for access hierarchies, as in [5], is reliable and efficient for controlling access across privilege classes of
users, it is not suitable for key management in vehicular HSMs. Hierarchical key management schemes as in [3] exhibit
considerable computational complexity, which makes them inappropriate for in-vehicle HSMs.
The Secure Key Management System aims at implementing a secure key management design where all generated keys are completely
random, stored securely, and used properly to perform a hybrid encryption process, with support for key expiry. The whole process
is demonstrated with a client-server setup to give a view of a real-time implementation. The objective is a system where a
message is safely encrypted and sent through the communication channel to the intended recipient, who decrypts the encrypted
message on receipt. The secret key generated to encrypt the message is never stored anywhere, which is why it is practically
impossible to retrieve the key from a file or any other storage. Only the key-encrypting keys, that is, the keys used to encrypt
the secret key, are stored, in PEM format. After encryption, only the encrypted key is sent to the recipient, from which the
receiver will be able to decrypt the key and later the message, if and only if the recipient is the authorized one, that is,
holds the corresponding private key.

B. Challenges
 Security: Compromised keys lead to total failure of the system. Securing the keys themselves is as important as
securing the data.
 Usability: Complexity in the key management system makes it clumsy and less usable.
 Availability: Data that is inaccessible is useless to authorized users. Over-complexity may result in denial of
access.
 Heterogeneity: Compatibility with a variety of applications from various sources is highly desirable for wide
acceptance in the market.

Fig. 1. Challenges in Key Management System


I. BENEFITS OF SECURE KEY MANAGEMENT SYSTEM

 The Secure Key Management system provides enhanced security. Beyond following a single type of encryption,
adopting a hybrid encryption scheme not only enhances the strength of the cryptosystem but also increases its
performance in terms of speed.
 The proposed secure key management technique is highly economical, unlike HSMs, which are usually very
expensive.
 Every HSM vendor or key management server has its own predefined policies and rules that are strictly enforced.
These policies may not always be in accordance with the user's policies. The proposed system, however, generates
and uses keys according to a user-defined policy, which may be modified afterwards as needed.
 Apart from enhanced security and efficiency, it is also an easy solution to the existing key management
challenges. There is no need to worry about hardware maintenance or KMS renewal policies, since no third-party
application is used for key management.
 Third-party vendor applications are not completely secure. The proposed approach mitigates the risks in
third-party applications, develops a strategy for secure KMS implementation in a more authentic way, and avoids
compliance issues.

II. FUNCTIONS OF THE SECURE KEY MANAGEMENT SYSTEM

The Key Management System (KMS) is a client-server setup capable of performing functions such as:
 Key Generation
 Hybrid encryption and decryption
 Digital Signature creation and verification
 Key Rotation
 Automatic transfer of updated keys

Fig. 2. Hybrid Secure Key Management System Flowchart.

A. Key Generation
The client generates a 16-byte symmetric key before AES encryption; this key is not stored anywhere. Each time a new file is
received for encryption, a new symmetric key is generated. A pair of asymmetric keys is also generated at the client side for
signature creation and verification. To encrypt the symmetric key, a pair of asymmetric keys (2048-bit) is also generated at the
server/receiver side.
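A minimal sketch of this step using the Python 'cryptography' package (the paper mentions Python scripts but does not name a
specific library, so the library choice is an assumption):

import os
from cryptography.hazmat.primitives.asymmetric import rsa

sym_key = os.urandom(16)  # fresh 16-byte AES key per file, held only in memory

# Client-side pair for signing; server/receiver-side pair for key wrapping
signing_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
receiver_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
receiver_public = receiver_key.public_key()  # used to encrypt the symmetric key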
B. Hybrid Encryption and Decryption
The file being shared is encrypted using the 16-byte symmetric key. The symmetric key itself is then encrypted using the
receiver's public key. The encrypted symmetric key and encrypted file, along with the digital signature, are sent to the receiver.
Hybrid decryption takes place at the receiver side: the encrypted key file is decrypted using the receiver's private key, and the
resulting symmetric key is then used to decrypt the file and read its contents.
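A sketch of the hybrid flow with the same library; AES-GCM and OAEP padding are my choices here, since the paper specifies only
"AES" and RSA:

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

def hybrid_encrypt(plaintext, receiver_public_key):
    sym_key = os.urandom(16)                        # never written to disk
    nonce = os.urandom(12)
    blob = nonce + AESGCM(sym_key).encrypt(nonce, plaintext, None)
    wrapped = receiver_public_key.encrypt(sym_key, OAEP)  # RSA-wrapped key
    return blob, wrapped

def hybrid_decrypt(blob, wrapped, receiver_private_key):
    sym_key = receiver_private_key.decrypt(wrapped, OAEP)  # unwrap the key
    return AESGCM(sym_key).decrypt(blob[:12], blob[12:], None)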
C. Digital Signature Generation and Verification
A digital signature provides a means to verify the authenticity of the sender and gives the property of non-repudiation. The
hash of the symmetric key is generated using the SHA-256 algorithm, and the hash is encrypted using the sender's private key. The
digital signature is sent to the recipient along with the encrypted key and the encrypted file.

Fig. 3. Digital Signature Generation and Verification
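A sketch of the signature step above; PSS padding is an assumption, since the paper specifies only SHA-256 with the sender's
private key:

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding

PSS = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

def sign_key(sym_key, sender_private_key):
    # Hashes sym_key with SHA-256 internally, then signs the digest
    return sender_private_key.sign(sym_key, PSS, hashes.SHA256())

def verify_key(signature, sym_key, sender_public_key):
    try:
        sender_public_key.verify(signature, sym_key, PSS, hashes.SHA256())
        return True
    except InvalidSignature:
        return False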

D. Key Rotation
At the client and server side, an operating system task is scheduled for 'Key Rotation' that automatically executes a Python
script to create new keys at a specific date and time every month. This ensures proper expiry of keys at regular intervals and
enables the newly generated keys to be used in place of invalid ones.
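A sketch of the rotation script such a scheduled task might run (file names are illustrative; registering it monthly with cron
or Windows Task Scheduler is left to the OS):

from datetime import date
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

def rotate_keys():
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    stamp = date.today().isoformat()
    with open(f"private_{stamp}.pem", "wb") as f:      # key-encrypting key
        f.write(key.private_bytes(
            serialization.Encoding.PEM,
            serialization.PrivateFormat.PKCS8,
            serialization.NoEncryption()))
    with open(f"public_{stamp}.pem", "wb") as f:       # shared with peers
        f.write(key.public_key().public_bytes(
            serialization.Encoding.PEM,
            serialization.PublicFormat.SubjectPublicKeyInfo))

if __name__ == "__main__":
    rotate_keys()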
E. Automated Transfer of Keys
A script is assigned in the task scheduler that automatically uploads the sender's public key and downloads the receiver's
public key. At the scheduled intervals, the script is executed and the file transfer happens automatically.
F. Client-Server Setup
The three files uploaded to the server are the encrypted file, the encrypted key file, and the signature file. These files are
transferred from a Windows machine using the WinSCP application. Once sender-receiver authentication is completed, the files are
transferred using the SFTP protocol.
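A sketch of the scheduled key exchange using paramiko over SFTP (the paper demonstrates with WinSCP; host, credentials, and
paths below are placeholders):

import paramiko

def exchange_public_keys(host, user, password):
    transport = paramiko.Transport((host, 22))
    transport.connect(username=user, password=password)
    sftp = paramiko.SFTPClient.from_transport(transport)
    try:
        sftp.put("public_sender.pem", "/keys/public_sender.pem")      # upload ours
        sftp.get("/keys/public_receiver.pem", "public_receiver.pem")  # fetch theirs
    finally:
        sftp.close()
        transport.close()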

CASE STUDY
A. Secure Software Updating
Software updates enable upgrading car functionality or fixing bugs in the embedded software installed on the Electronic
Control Units (ECUs) remotely. The introduction of secure software updates in the automotive industry has brought many advantages
for both the Original Equipment Manufacturer (OEM) and the driver. Secure software is especially critical for infotainment
systems and other safety-critical, powerful ECUs. Secure software updates are typically implemented using RSA digital signatures.
For each ECU, a private/public key pair is generated; the public key is stored in the ECU and the private key is stored on a
server. The server uses the private key to digitally sign firmware, and the ECU applies the public key to verify the firmware. To
reduce the number of key pairs, it is reasonable to generate key pairs that are used by more than one ECU, e.g., one key pair per
ECU model and year. It is also important to share the generated keys securely and cost-effectively, and for that the proposed key
management approach can be adopted.
B. Securing Vehicle-to-Vehicle Communication
To provide confidentiality for vehicle-to-vehicle (V2V) broadcasting, a key management and message encryption method that is
secure, lightweight, and scalable should be used. The key management method can efficiently handle scenarios such as a node
joining or leaving the group with scalable algorithms. Secure key management offers several advantages, such as reduced key
management overhead and an enhanced security level through hybrid cryptographic encryption. It is good security practice to use
separate cryptographic keys for each application, so each vehicle can be expected to be loaded with numerous cryptographic keys.
This applies not only to passenger vehicles but also to trucks, construction machines, etc.
Further applications of key management in the automotive environment include sender authentication and key provisioning for
securing onboard communication. Another application of secure key management is securing communication within vehicular ad hoc
networks. Secure key management in VANET communication is responsible for generating, distributing, and updating the keys in the
network. The system allows the VANET to authenticate a new vehicle before exchanging keys. The Key Management Server (KMS)
distributes a group key within its group and updates it whenever a vehicle joins or leaves the group; the key is updated in
linear time. Moreover, the secure key management system is able to generate and distribute keys securely, is resistant to a
number of attacks, and maintains forward as well as backward secrecy in the network.

CONCLUSION
The Secure Key Management System performs several important operations: key generation, hybrid encryption, key rotation,
hybrid decryption, and automated transfer of public keys. A client-server architecture is used to demonstrate these operations.
Though there are expensive HSMs for key generation and storage, and KMS facilities, all of these operate according to their own
predefined policies. The proposed key management system has an independent structure and could be used in any business sector
where confidential files are to be transmitted, without financial expense. The proposed approach helps to develop a truly
hardware-independent solution for existing applications.
REFERENCES
[1] W. Chu-yuan, "A Hybrid Group Key Management Architecture for Heterogeneous MANET", 2010 Second International Conference on Networks Security, Wireless Communications and Trusted Computing, vol. 2, pp. 537-540.
[2] M. Cobb, "Advanced Encryption Standard", last updated April 2020. https://searchsecurity.techtarget.com/definition/Advanced-Encryption-Standard
[3] M. L. Das, A. Saxena, V. P. Gulati, and D. B. Phatak, "Hierarchical key management scheme using polynomial interpolation", ACM SIGOPS Operating Systems Review, vol. 39(1), pp. 40-47, January 2005.
[4] S. Admin, "Understanding digital signatures: The cryptography of our modern way of signing". https://www.fillanypdf.com/BlogDetail.aspx?id=10203
[5] M. J. Atallah, M. Blanton, N. Fazio, and K. B. Frikken, "Dynamic and Efficient Key Management for Access Hierarchies", ACM Transactions on Information and System Security, vol. 12(3), pp. 1-43, January 2009.


DNA COMPRESSION USING RLE AND HUFFMAN ENCODER


Dr. S. Gomathi¹, Dr. Ignisha Rajathi², Dr. C. Gopala Krishnan³
¹Professor, Department of CSBS, Francis Xavier Engineering College, Tamil Nadu
²Associate Professor, Department of CSBS, Sri Krishna College of Engineering and Technology, Coimbatore, Tamil Nadu
³Associate Professor, Computer Science and Engineering, Gitam University, Bangalore

Abstract:
Recently, the ever-increasing growth of genomic sequences (DNA or RNA) stored in databases poses a serious challenge to
the storage, processing, and transmission of these data. Effective management of genetic data is therefore necessary, which
makes data compression unavoidable. Since the current standard compression tools are insufficient, this paper presents a DNA
sequence compression algorithm based on the One-Bit Compression method (OBComp) that compresses both repeated and non-repeated
sequences. Unlike the direct coding technique, where two bits are assigned to each nucleotide, giving a compression ratio of
2 bits per base (bpb), OBComp uses just a single bit, 0 or 1, to code the two most frequent nucleotides. The positions of the
other two are saved. To further enhance the compression, a modified version of the Run Length Encoding technique and the Huffman
coding algorithm are then applied in turn. The proposed algorithm efficiently reduces the original size of DNA sequences. The
ease of implementing the algorithm and its remarkable compression ratio make it attractive.

INTRODUCTION

Bioinformatics is an interdisciplinary field that combines computer science and biology to research, develop, and
apply computational tools and approaches to solve the problems that biologists face in agriculture, medical science, and the
study of the living world. The most important element in bioinformatics is the study of the biomolecules present in DNA.
Deoxyribonucleic acid (DNA) is a molecule that encodes genetic information; it supports many medical treatments, helps identify
individuals, detects the risk of certain diseases, and aids the diagnosis of genetic disorders. DNA sequences are combinations
of only four bases: adenine, cytosine, guanine, and thymine (A, C, G, T). The human genome contains around 3 billion characters
across 23 pairs of chromosomes, and genomes can exceed 100 billion nucleotides for certain amphibian species. One million
terabytes (TB), equivalent to 1000 petabytes (PB), would be required to store one million genomes. All of this data is stored in
special databases, also known as databanks, developed by the scientific community, where biologists can also analyse their data.
To further enhance the compression, the Run Length Encoding technique and the Huffman coding algorithm are applied in turn. The
proposed algorithm efficiently reduces the original size of DNA sequences; its ease of implementation and remarkable compression
ratio make it attractive.
Biological sequence compression is an effective apparatus for extracting information from biological sequences. In
computer science, data compression stands for reducing the amount of memory used to store data. From a mathematical point of
view, compression implies better understanding and comprehension. Compressing DNA sequences is not an easy task:
general-purpose compression algorithms fail to perform well with biological sequences, and most existing software tools that
work well for English text compression do not work for DNA genomes. Lossless and lossy are the two compression techniques. In
lossless compression, all of the information is completely restored after decompression, while in the lossy technique some of
the data is lost and the complete original data cannot be recovered after decompression.

RELATED WORK
Data storage costs form an appreciable proportion of the total cost of creating and analysing DNA sequences [1]. In
particular, the growth of DNA sequence data is highly remarkable compared to the increase in disk storage capacity. General text
compression algorithms do not exploit the specific characteristics of DNA sequences. [1] proposes a compression algorithm based
on the cross-complementary properties of DNA sequences. This technique helps in comparing DNA sequences and in identifying
similar subsequences, which may lead to the identification of structure as well as similar function; the experimental results
show that it compresses better than other existing compression algorithms. Universal data compression algorithms fail to
compress genetic sequences, due to the specificity of this particular kind of "text". [2] analyses in some detail the properties
of the sequences that cause the failure of classical algorithms, and presents a lossless algorithm, biocompress-2, to compress
the information contained in DNA and RNA sequences based on the detection of regularities such as the presence of palindromes.
The algorithm combines substitutional and statistical methods and, to the best of the authors' knowledge, leads to the highest
compression of DNA. The results, although not fully satisfactory, give insight into the necessary correlation between
compression and comprehension of genetic sequences.

[3] presents "GenBit Compress", a compression tool for genetic sequences based on the proposed GenBit Compress algorithm,
which achieves the best compression ratios for entire genomes (DNA sequences). Significantly better compression results show that
the GenBit Compress algorithm is the best among genome compression algorithms for non-repetitive DNA sequences. Standard
compression tools such as gzip or compress cannot compress DNA sequences and only expand them in size. One of the main features
of DNA sequences is that they contain substrings which are duplicated except for a few random mutations; for this reason, most
DNA compressors work by searching for and encoding approximate repeats. GenBit departs from this strategy by searching for and
encoding only exact repeats, achieving the best compression ratio for DNA sequences in larger genomes; inputs as long as 8 lakh
characters can be given. While achieving the best compression ratios for DNA sequences, GenBit Compress also significantly
improves the running time over previous DNA compressors, and assigning binary bits to fragments of a DNA sequence is a concept
introduced in this program for the first time in DNA compression.
The databases of genomic sequences are growing at an explosive rate. Compressing deoxyribonucleic acid (DNA) sequences is a
momentous task as the databases approach their thresholds, and various compression algorithms have been developed for DNA
sequence compression. An efficient DNA compression algorithm that works on both repetitive and non-repetitive sequences, known
as "HuffBit Compress" [4], is based on the concept of the extended binary tree. A modified version of "HuffBit Compress"
compresses and decompresses DNA sequences using the R language; it always gives the best case of the compression ratio, but uses
6 extra bits compared to the best case of "HuffBit Compress", and can be named the "Modified HuffBit Compress Algorithm". The
algorithm builds an extended binary tree based on the Huffman codes and the maximum occurring bases (A, C, G, T). Experiments
with 6 sequences show approximately 16.18% improvement in compression ratio over the "HuffBit Compress" algorithm and 11.12%
improvement over the "2-Bits Encoding Method".
The development of efficient data compressors for DNA sequences [5] is crucial not only for reducing storage and
transmission bandwidth but also for analysis purposes. In particular, improved compression models directly influence the outcome
of anthropological and biomedical compression-based methods. [5] describes a new lossless compressor with improved compression
capabilities for DNA sequences representing different domains and kingdoms. The reference-free method uses a competitive
prediction model to estimate, for each symbol, the best class of models to be used before applying arithmetic encoding. There
are two classes of models: weighted context models (including substitution-tolerant context models) and weighted stochastic
repeat models. Both classes use specific sub-programs to handle inverted repeats efficiently. The results show that the method
attains a higher compression ratio than state-of-the-art approaches on a balanced and diverse benchmark, using a competitive
level of computational resources; an efficient implementation is publicly available under the GPLv3 license.
A universal algorithm for sequential data compression [6] has also been presented. Its performance is investigated with
respect to a non-probabilistic model of constrained sources. The compression ratio achieved by the proposed universal code
uniformly approaches the lower bounds on the compression ratios attainable by block-to-variable codes and variable-to-block
codes designed to match a completely specified source. The number of DNA sequences stored in databases is growing exponentially,
so decreasing data storage costs for the creation and analysis of DNA sequences has become a necessity; standard compression
algorithms fail to achieve a good compression ratio.

A lossless algorithm [7, 8] has been presented that compresses a DNA sequence into its equivalent hexadecimal
representation; in addition, using a mathematical operation, it identifies the regions of similarity between two sequences. Data
storage and DNA banking are receiving tremendous attention recently [9], due to the explosion of genetic research, biomedical
research, and forensic science. DNA databases occupy more storage than other databases due to the enormous size of each DNA
sequence, and they grow exponentially as newer samples are collected and stored. This poses a major task for the storage,
transfer, searching, and retrieval of data; hence, compression of DNA databases has become relevant. DNA sequences contain
repetitions of A, C, T, and G, so compression of a sequence involves searching for and encoding these repetitions, then decoding
them during decompression. The survey in [9] explores a few existing DNA sequence compression algorithms that were developed
from certain prevailing general compression algorithms: first a few existing compression techniques are explained, and then five
available DNA sequence compression algorithms are evaluated for their suitability in compressing large collections of DNA
sequences.
Bioinformatics requires a huge amount of genomic data for analysis, so optimal storage and compression [10] is a great
challenge in this field. In [10], the DNA sequence is categorized into three different parts according to the occurrence of
A, T, C, and G, and two different symbol tables are used to map the DNA sequence into a compressed sequence. The authors intend
to deploy the proposed compression algorithm in the cloud, so that users in this field can access it as Software as a Service,
and claim a compression rate of 1.82. Relative compression [11], where a set of similar strings is compressed with respect to a
reference string, is an effective method of compressing DNA datasets containing multiple similar sequences; moreover, it
supports rapid random access to the underlying data. The main difficulty of relative compression is selecting an appropriate
reference sequence. [11] explores using the dictionaries of repeats generated by the COMRAD, RE-PAIR, and DNA-X algorithms as
reference sequences, showing that this technique allows better compression and allows more general repetitive datasets to be
compressed using relative compression.

PROPOSED SYSTEM

The input to the algorithm is a DNA (deoxyribonucleic acid) sequence consisting of the bases A, C, G, and T, with
repeated and non-repeated subsequences. DNA sequences are stored in databases of large architectural design, but unfortunately a
DNA sequence takes up many bytes of storage and a lot of memory to process for analysing and studying the biological nature of
the organism, which might lead to new biological findings and achievements. Various storage techniques are being developed in
which DNA sequences are compressed before storage, reducing the number of databases needed for storage and retrieval. Here, the
DNA sequence is compressed using the proposed algorithm to the best compressed value for future retrieval and analysis, with an
expected compression ratio of 12.5%, better than that of existing DNA compression algorithms, whether lossless or lossy. In the
proposed algorithm, the input DNA sequence is compressed using a combination of techniques, as explained below.

The proposed algorithm works in two phases. In the first phase it uses Huffman coding to generate an extended binary
tree based on the frequencies of the bases. In the second phase, each base is replaced by its corresponding binary bits
generated from the tree. New technologies for producing high-definition audio, video, and images are growing at a fast pace,
creating massive amounts of data that can easily exhaust storage and transmission bandwidth. To use the available disk and
network resources more efficiently, the raw data in those large files are often compressed. The Huffman algorithm is one of the
most widely used algorithms in data compression (Huffman, 1952). Huffman encoding is a lossless encoding algorithm that
generates variable-length codes for the symbols used to represent information. By encoding high-frequency symbols with shorter
codes and low-frequency symbols with longer codes, the original information is encoded as compactly as possible. The codes are
constructed so that no code is a prefix of any other code, a property that enables unambiguous decoding. Though it was proposed
half a century ago, the Huffman algorithm and its variants are still actively used in the development of new compression
methods.

Fig 1. The steps of constructing a Huffman tree

Given the DNA sequence "ACTGAACGATCAGTACAGAAG", for example, which contains twenty-one bases, its bit count is
21 × 8 = 168 bits, assuming each base is stored as an 8-bit ASCII code, where A = 01000001, C = 01000011, G = 01000111, and
T = 01010100. Compression by the Huffman method works as follows. The algorithm first counts the frequency of each base, as
described in Table 1. Each base, along with its associated frequency, is regarded as a tree node, and the four initial nodes are
marked as unprocessed. A binary tree is then constructed by iterating the following steps. First, find the two unprocessed nodes
with the lowest frequencies. Then, construct a parent node whose frequency is the sum of those of the two child nodes. Finally,
return the parent node to the list of unprocessed nodes. Tree construction ends when all nodes are processed. The construction
of the Huffman tree for the given example is illustrated in Figure 1. Note that the left link of each internal node is labeled
0, while the right link is labeled 1. After building the Huffman tree, each symbol has a new bit representation that can be
extracted by traversing the edges and recording the associated bits from the tree's root to the desired symbol (which must be
located at the leaf level). For example, the path from the root to base A gives the bit representation 0, while the path from
the root to base T gives 111. The new bit representation for each base is described in Table 1. The new bit count after encoding
is (9 × 1) + (4 × 3) + (5 × 2) + (3 × 3) = 40, a compression ratio of 40/168 = 23.8%.

ARCHITECTURE DIAGRAM

Fig 2. Architecture Diagram


PRE PROCESS
 One Bit Compression

A DNA sequence is a combination of the four nucleotide bases {A, C, G and T}, arranged in both repetitive and non-repetitive patterns. The proposed method is a combination of three compression algorithms. We propose a new pre-processing method based on a one-bit representation of nucleotides, named OBComp. It compresses DNA sequences by replacing just the two highest-occurrence nucleotides with a single bit, 0 or 1, while the positions of the two other nucleotides are saved. Three output files are thus generated in this step: a binary file, which is the input of the next step, and two position files.
 One-Bit Compression Method
The following algorithm explains OBComp.
Begin
– Calculate the frequency Fr of each nucleotide in the DNA input file.
– Sort the frequencies in descending order, so that:
Fr(X) > Fr(Y) > Fr(Z) > Fr(Q)
where X, Y, Z and Q each represent a specific DNA base.
– For each occurrence of Z in the DNA input file, save its position in the PosZ file.
– For each occurrence of Q in the DNA input file, save its position in the PosQ file.
– Remove each occurrence of Z and Q from the original file.
– Replace each X by 0 and each Y by 1 in the DNA input file.
End.
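A minimal in-memory sketch of OBComp, assuming string input rather than files; the position lists stand in for the PosZ and PosQ files:

from collections import Counter

def obcomp(dna):
    # Fr(X) > Fr(Y) > Fr(Z) > Fr(Q): the four bases in descending frequency.
    x, y, z, q = [base for base, _ in Counter(dna).most_common()]
    pos_z = [i for i, b in enumerate(dna) if b == z]   # stands in for PosZ file
    pos_q = [i for i, b in enumerate(dna) if b == q]   # stands in for PosQ file
    reduced = dna.replace(z, "").replace(q, "")        # remove Z and Q
    bits = reduced.replace(x, "0").replace(y, "1")     # X -> 0, Y -> 1
    return bits, pos_z, pos_q

bits, pos_z, pos_q = obcomp("ACTGAACGATCAGTACAGAAG")
print(bits, pos_z, pos_q)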
ENCODE PROCESS
RLE - Run-Length Encoding replaces consecutive repeated occurrences of a symbol with one occurrence of the symbol itself, followed by the number of occurrences.
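A one-line illustration of this RLE step in Python (the symbol-then-count output format is assumed):

from itertools import groupby

def rle_encode(symbols):
    # Each run becomes the symbol followed by its run length.
    return "".join(f"{s}{len(list(run))}" for s, run in groupby(symbols))

print(rle_encode("000110000"))   # -> 031204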
HUFFMAN ENCODER
 Huffman Coding is a famous Greedy Algorithm.
 It is used for the lossless compression of data.
 It uses variable length encoding.
 It assigns variable length code to all the characters.
 The code length of a character depends on how frequently it occurs in the given text.
 The character which occurs most frequently gets the smallest code.
 The character which occurs least frequently gets the largest code.
 It is also known as Huffman Encoding.

MAJOR STEPS IN HUFFMAN CODING


There are two major steps in Huffman coding: building a Huffman tree from the input characters, and assigning codes to the characters by traversing the Huffman tree.

POST PROCESS
Binary to Decimal: To further reduce the size of the data stored in the databases, we convert the binary representation to an extended-ASCII representation. The benefit of this technique is that one extended-ASCII character encodes 8 binary digits, so the output is reduced to 12.5% of the initial binary representation.
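A small sketch of this packing step; the zero-padding of the final partial byte is an assumption, since the paper does not state how incomplete bytes are handled:

def pack_bits(bitstring):
    # Pad the final partial byte with zeros (assumed), then pack 8 bits per byte.
    padded = bitstring.ljust((len(bitstring) + 7) // 8 * 8, "0")
    return bytes(int(padded[i:i + 8], 2) for i in range(0, len(padded), 8))

print(pack_bits("0100000101000011"))   # b'AC': 16 bits become 2 bytes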

RESULTS AND DISCUSSION

The given input, a DNA sequence over the bases A, C, G and T, is compressed using the proposed algorithm, and the output is a single decimal value from which the original DNA sequence can be retrieved, requiring less memory and processing time than other compression methods. Memory usage is therefore reduced, and because the DNA sequences are stored at the 12.5% compression ratio reported in this paper, storage requirements shrink as well. Overall, the proposed DNA sequence compression algorithm outperforms the existing algorithms in terms of compression ratio, memory and storage consumption, and the time taken to complete compression and decompression, as shown in figure 3.

Fig 3.Input and Output samples

CONCLUSION & FUTURE ENHANCEMENTS

DNA compression is an important topic in bioinformatics that helps in the storage, manipulation and transmission of large DNA sequences. If a sequence is compressed using the Modified HuffBit Compress algorithm, large DNA sequences can be compressed with a better compression ratio. An advantage of the proposed algorithm is that it works well for large sequences, so it can greatly ease the storage problem. Moreover, it uses less time and memory and is easy to implement in MATLAB. This research also opens a new dimension of using MATLAB in DNA compression, which is the main potential of the work. Combining two compression algorithms, Modified RLE and Huffman, helps reduce the input file to less than a quarter of its size. Not only is the average compression ratio of the proposed method better than that of existing compression algorithms, it is also simple, fast, flexible and very easy to implement. Future work will address the limitations of the proposed algorithm and aim for better outcomes; this work can also be implemented using the R tool. In addition, we will try to compress the position files using an efficient compression method for integer sequences to obtain better compression, and to combine our method with other vertical compression algorithms based on statistical approaches to achieve better results.


Intelligent Parking System


Dr. R. Vedhapriyavadhana 1, Ignisha Rajathi 2, Jayalakshmi 3, Girija 4
1,3,4 Vellore Institute of Technology (VIT), Chennai
2 Sri Krishna College of Engineering and Technology, Coimbatore

ABSTRACT:
Nowadays, the need for vehicle parking is increasing day by day and has turned into an important issue. Car owners and drivers in India still struggle with wasted fuel and wasted time in finding when and where to park, a task that demands considerable effort under a manual vehicle parking system. Also, when no special system is in place, chaos occurs in parking spaces and damage is incurred to vehicles while cars move in and out of the parking space. A new car parking system has therefore been introduced to tackle these problems. Hence, this research looks into smart parking methods to find and book a proper parking space efficiently.
Keywords: Parking System, Computer Vision, Image Processing, Moving Object Detection, Mask R-CNN

INTRODUCTION
Growing interdependency in the world's economy has continually drawn people into city territories, leaving major urban communities like Bangalore heavily crowded and congested. The increase in population density implies an increase in the movement of individuals. This drives up the use of motor vehicles and thus strains the parking situation. These days, some people purchase vehicles even though they have nowhere to keep them, and some streets are even turning into parking spaces, which in turn causes heavy traffic. Ordinary parking areas are typically just vacant spaces, and people must search for an empty one manually. Not only is this parking method tedious, it is also inefficient, particularly around taller buildings with many storeys, where motorists spend a lot of time looking for a spot; a driver handling the car alone will find the situation really challenging and will be perplexed in finding a place to park.
In recent scenarios, the number of cars in use is increasing fast, which has led many office buildings and shopping malls to build underground or multilevel parking to tackle the problem. Motorists become frustrated when finding an available parking slot, as it wastes not only time but also fuel, which affects the economy of the individual. If the scenario continues, the lack of parking slots indirectly builds pressure, which may lead shoppers to decide not to buy anything and to leave the mall or building disappointed. More often it leads to worse situations and needs to be addressed immediately. The problem is to identify available parking spots in parking areas, given bird's-eye-view images of the parking lots captured by cameras, using Computer Vision and Image Processing.
LITERATURE REVIEW
Data in digitized form can be transmitted by digital communications, and the subject of the transmitted data may vary. The researchers decided to take up Computer Vision in Parking Management [1-2] and delve into it. This study focuses specifically on parking, although research into traffic enforcement has been done before. The research utilizes set theory, spatial imaging, data handling and transformation for prediction, neural networks, rational scoring preference and sensor set-ups. Sensors are used intelligently to detect the presence of a car in a parking spot. The system traces a parked car efficiently by applying number-plate character recognition. To recognize the digits of the plate number, image processing is used to enhance the image, and the digits are stored in a database for future use. This research can be used to maintain parking spaces with reduced human labour, making processing more efficient. Implementing this system in parking spaces around the country could allow instant generation of parking bills, with pricing calculated from the parking time.
The researchers used simulation for this specific study.
● Spatial Imaging - The system applies computer-vision processing techniques to images to guide the driver to their parking spot by locating the digits of the parking license.
● Rough Set Theory - Compartmentalizing the data in the database makes future access easier. For example, the build of a car, its structure and dimensions, greatly helps in finalizing and freezing the area of the parking lot.
● Digital Sensors - From the image of the characters read on the car's number plate, these digital sensors process the sensed information to find the availability, and a camera can be used per parking spot
so the system could know the count of the available spaces.
● Logic Scoring - The decision-making algorithm is improved in the encoder's coding process by adding suitable weights through adjusting the variables involved.
● Neural Network - For managing the cars in available parking lots.
● Database Monitoring - Stores the data, which can be used later to generate the bills and to read the characters on the license plate.
● Data/Information Transfer - Carried from the transmitter to the receiver through the sensors. The digits of the license plate obtained are stored.

Using sensor networks, data on vacant parking lots are gathered to guide drivers, but the problem is that existing parking systems do not guide them to their respective parking slots. Due to the lack of parking information, drivers tend to park their vehicles closer to their destination and hence be on the lookout for vacant spots near it. To prevent multiple vehicles from chasing the same slot, the way slot details are shared is also changed: the number of available slots reported is decreased intentionally by the designers to act as buffer slots. The system reserves a few places to avoid conflict, but estimating the right number of buffer spaces proves to be a difficult task. Two performance metrics, walking distance and traffic volume, look into these issues. Systems such as Reservation Performance have been proposed to counter the challenges, as the system retrieves and stores data on the performance metrics. IrisNet proposed a system with cameras, motion detectors and microphones that helps the system automatically block the available space by obtaining proof of the motorist's identity for the respective vehicle. On-the-spot updates through a smart web application are also available, with preference given based on the time of booking. Such a system is highly energy-consuming, generates a huge amount of data and also suffers from technical limitations. A driver can make use of the new E-parking system, which merges reservation of parking slots and payment: the driver checks availability, reserves a space and makes the payment while leaving, all from a smartphone or the web. This does not eliminate the contribution of the existing detectors; rather, it additionally helps to detect spaces smartly for a quicker process. The limited number of parking spaces can be put to efficient use with the help of an automated parking system.

A smart parking system paves the way to an instant solution despite the restricted availability of parking space. Mask R-CNN is used for object detection and segmentation [3-6].

Fig. 1 Mask R-CNN Implementation

The implementation of Mask R-CNN on Python 3, Keras, and TensorFlow is shown in figure 1. The model generates bounding boxes and segmentation masks for each instance of an object in the image. It is based on a Feature Pyramid Network (FPN) with a ResNet101 backbone. The system reduces the manpower required to maintain a parking facility: it checks for available slots, finds the optimal path to the parking slot, shows directions to the available slot, and updates the database with the car license number, entry time, and exit time [7-10]. The system uses the deep learning model Mask R-CNN, pre-trained on the MS-COCO dataset, to identify cars and, subsequently, parking spots in the input images. In the present world there are very large numbers of shopping complexes and cinema theatres, and each shopping complex has its own parking place for its customers' vehicles, requiring staff to show drivers the way to parking spots. This is the motivation for the project: to reduce the manpower in parking places by building an automated system that displays the way to a parking slot, thereby reducing monthly wage costs.


PROPOSED WORK

The first part of the problem is to identify all parking spaces in a parking lot from the camera images. The second part detects empty parking spaces in a new input image using the locations of parking spaces identified in the previous phase. To resolve this, the number of vertices is identified first, then the edges for each vertex up to the last vertex are entered. Further, the indices of the parking slots are taken and, having entered the entry vertex, the shortest path is identified. Images from a Kaggle dataset have been used.
This solution treats the problem as an instance segmentation problem to identify cars from an image of a parking lot
and infer empty parking spots in the image. This system uses a deep learning model trained on the COCO dataset to identify cars and, subsequently, parking spots in the input images.
In this solution, the Mask R-CNN model is used to identify cars in the images. This model is designed for object instance segmentation and is an extension of the Faster R-CNN model. For an image of a parking area, the model returns four things:
The bounding boxes of each object identified, as X,Y pixel coordinates.
The class label for the object, one of 80 categories as supported by COCO dataset.

Confidence score of object detection. Higher value denotes more certainty of object’s correct identification.

A bitmap mask telling which pixels within the bounding box belong to the object and which ones do not.
This result is filtered to obtain this information for objects identified as ‘car’.
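The sketch below shows what such an inference step could look like with the open-source matterport/Mask_RCNN package and pre-trained MS-COCO weights; the config subclass, file paths and the car class index are illustrative assumptions, not details given in the paper:

import skimage.io
import mrcnn.model as modellib
from mrcnn.config import Config

class InferenceConfig(Config):
    NAME = "parking"            # arbitrary run name (assumed)
    NUM_CLASSES = 81            # MS-COCO: 80 object categories + background
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1

model = modellib.MaskRCNN(mode="inference", config=InferenceConfig(),
                          model_dir="logs/")
model.load_weights("mask_rcnn_coco.h5", by_name=True)   # pre-trained COCO weights

image = skimage.io.imread("parking_lot.jpg")            # bird's-eye view frame
r = model.detect([image])[0]    # dict with 'rois', 'class_ids', 'scores', 'masks'

CAR_CLASS_ID = 3                # index of 'car' in the COCO label list (assumed)
car_boxes = [box for box, cid in zip(r["rois"], r["class_ids"])
             if cid == CAR_CLASS_ID]   # (y1, x1, y2, x2) per detected car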

Fig. 2 Model of the System

The complete working model of the system is shown in figure 2. An image of a completely occupied parking area is first input to the system. The inference step from object instance identification returns the bounding boxes of all the cars. The location of each car is treated as a valid parking spot, and the bounding boxes of all of these spots are maintained in the system. The coordinate locations of these boxes need to be stored only the first time an image from a new parking lot is input to the system; these locations are then used as references for subsequent images to determine the occupancy of parking spots.
For any subsequent image, after the inference step, the bounding boxes of the cars in the image are obtained; this time, the image will contain bounding boxes wherever car objects have been identified. To determine empty parking spots, we use the measure IOU (Intersection over Union).

This gives us a measure of overlap between a car's bounding box and the parking spot's bounding box. A value greater than a threshold (here 0.5) denotes that the spot is occupied; otherwise it is vacant. For any image of a parking area with a few vacant spots, the image after the instance detection stage will only have bounding boxes where cars were identified; hence, the IOU value will be > 0.5 (denoting occupied) at those locations and 0 (denoting empty) for the rest of the reference coordinate locations.
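A compact sketch of the IOU test described above, with toy box coordinates standing in for the stored reference spots and the current detections:

def iou(box_a, box_b):
    # Boxes as (y1, x1, y2, x2); returns intersection area / union area.
    y1, x1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    y2, x2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, y2 - y1) * max(0, x2 - x1)
    union = ((box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
             + (box_b[2] - box_b[0]) * (box_b[3] - box_b[1]) - inter)
    return inter / union if union else 0.0

parking_spots = [(10, 10, 50, 30), (10, 40, 50, 60)]   # stored reference boxes
car_boxes = [(12, 11, 52, 31)]                         # current detections
status = ["occupied" if any(iou(s, c) > 0.5 for c in car_boxes) else "vacant"
          for s in parking_spots]
print(status)    # ['occupied', 'vacant']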
To depict the output, all the bounding boxes and their center locations from the reference parking spots are overlaid on the current input image. Spots where cars were identified are displayed with red bounding boxes and center points; the remaining spots are shown in green.

Training data

The COCO dataset is designed for the detection and segmentation of objects occurring in their natural context and has more than 200K labelled images covering more than 80 object categories, including the ability to identify a car in an image. The data used for demonstration in this solution is the CNR car parking dataset (CNR-EXT_FULL_IMAGE_1000x750). This dataset targets visual occupancy detection of car parking lots and contains images captured in 3 different weather conditions: rainy, overcast and sunny. This variant of the dataset contains full frames (a complete view of the parking lot), similar to the kind of input in the problem statement at hand. Resolution: 1000x750.

RESULTS AND DISCUSSION

Fig. 3 Process depiction using images

The detection of available parking spots from parking lot camera images has been shown in figure 3.


Fig. 4 Available Parking Spot Detection Overcast Day

Fig. 5 Available Parking Spot Detection Rainy Day

Fig. 6 Available Parking Spot Detection Sunny Day

The detection of available parking spots has been examined with the proposed system in three different cases to gauge its accuracy. The detection process on an overcast-day image, a rainy-day image and a sunny-day image is shown in figures 4, 5 and 6 respectively. The approach eliminates any dependency on a human to provide hard-coded locations of parking spots. It enables the system to identify empty parking spots in new parking lots as well, with the minimal requirement of a single image of the completely occupied parking area and without any compromise in performance. Treating the bounding boxes of parked cars as valid parking spots is more reliable and easier than detecting parking meters or painted space boundaries, as neither is always visible and both may be confused with noise by the model; detecting such objects therefore does not promise reliable results every time for different areas.


CONCLUSION AND FUTURE WORK


This smart parking system is developed entirely by processing the images captured in the parking lot to check for the presence of cars, instead of relying on conventional sensor-based information capture, which greatly reduces the cost of sensors and the hassle of wiring. The system has been developed with integrated image processing to increase the effectiveness and speed of parking cars without creating chaos in this smart, digitalized world. Within years the globe is expected to become completely smart, with countries and cities functioning through smart-city integration, and this research work will contribute significantly to that goal. To complement this parking system, in the near future this research work will be extended towards security, which plays a vital role. Furthermore, placing LEDs at each car parking space and light guidance to the respective parking spot have also been considered as additional guidance devices.

REFERENCES

[1] Z. Bin, J. Dalin, W. Fang, and W. Tingting, "A design of parking space detector based on video image," The Ninth International Conference on Electronic Measurement & Instruments, ICEMI 2009.
[2] K. Ueda, I. Horiba, K. Ikeda, H. Onodera, and S. Ozawa, "An algorithm for detecting parking cars by the use of picture processing," IEICE Trans. Information and Systems, vol. J74-D-II, no. 10, pp. 1379-1389, 1991.
[3] T. Hasegawa and S. Ozawa, "Counting cars by tracking of moving object in the outdoor parking," Vehicle Navigation and Information Systems Conference, 1994, pp. 63-68.
[4] E. Maeda and K. Ishii, "Evaluation of normalized principal component features in object detection," IEICE Trans. Information and Systems, vol. J75-D-II(3), pp. 520-529, 1992.
[5] M. Yachida, M. Asada, and S. Tsuji, "Automatic analysis of moving image," IEEE Trans. Pattern Anal. and Mach. Intell., vol. PAMI-3(1), pp. 12-19.
[6] Ms. Sayanti Banerjee, Ms. Pallavi Choudekar and Prof. M. K. Muju, "Real time car parking system using image processing," IEEE, 2011, pp. 99-103.
[7] S. Arora, J. Acharya, A. Verma, and P. K. Panigrahi, "Multilevel thresholding for image segmentation through a fast statistical recursive algorithm," Pattern Recognition Letters, vol. 29, pp. 119-125, 2008.
[8] L. Wang and J. Bai, "Threshold selection by clustering grey levels of boundary," Pattern Recognition Letters, vol. 24, pp. 1983-1999.
[9] Thiang, Andre Teguh Guntoro, and Resmana Lim, "Type of vehicle recognition using template matching method," International Conference on Electrical, Electronics, Communication, and Information, March 7-8, 2001.
[10] S. Saleh Al-Amri, N. V. Kalyankar, and S. Khamitkar, "Image segmentation by using threshold techniques," Journal of Computing, vol. 2, issue 5, May 2010, ISSN 2151-9617.


Detecting COVID_19 in X-ray images using artificial neural network


Prinza L and Jerald Prasad G
TKR COLLEGE OF ENGINEERING AND TECHNOLOGY,
HYDERABAD
Abstract –
In this paper, we propose a method for detecting COVID-19, which is most important nowadays for lowering mortality rates. COVID-19 has had a rapid impact on our daily lives, businesses, and global trade and transportation. Chest CT scans and X-rays reveal particular radiographic abnormalities in people with COVID-19, offering a faster and more efficient representation of the patient's lungs that enables the detection of viral phenomena. The first stage of our procedure is preprocessing the CT image to remove the undesirable noise found in medical images. After that, Discrete Wavelet Transform based decomposition is performed to bring out the significant information in the image. Features are then extracted from the coefficients and used to train a feed-forward neural network to classify normal and COVID-19 patients. In this article a classification rate of about 85% is obtained.

Keywords – COVID-19,CT X-ray, DWT, Feature Extraction, Neural network.

INTRODUCTION
Humanity was confronted with a pestilence before the end of 2019 in the form of the severe respiratory disease COVID-19. SARS-CoV-2 causes the pneumonia-like illness referred to as COVID-19 (coronavirus disease 2019), which no one expected to see in this age of invention [1,2,3]. While the COVID-19 outbreak started in Wuhan, China [4], the global expansion of the pandemic has meant that the amount of hardware available to the experts fighting the disease is insufficient. At the time of writing, there had been over 257,090,259 confirmed cases and over 5,158,478 confirmed deaths across the planet. Artificial intelligence (AI) and deep learning studies and applications have been launched to assist the experts treating patients and battling the disease, taking into account the turnaround time and the financial cost of the laboratory equipment used for analysis [5]. Although point-of-care COVID-19 tests are expected to reach clinical settings sooner or later, turnaround times for COVID-19 test results currently range from 3 to over 48 hours, and not all countries can access test kits that provide findings rapidly. One of the fundamental recommendations in a recently released global consensus statement by the Fleischner Society is to use chest radiography for patients with COVID-19 in resource-constrained settings when access to computed tomography (CT) is limited. When fighting the disease, the financial cost of the laboratory kits used for diagnosis is a major concern, particularly for developing and under-developed countries [6,7]. Using X-ray images for automated COVID-19 identification could be especially valuable for countries and emergency clinics that cannot afford a lab kit or do not have access to a CT scanner [8]. This is significant given that no effective treatment option has yet been discovered, so a sound diagnosis is essential. Hence, a neural-network-based system was implemented to analyse the abnormalities present in CT x-ray images and assist the experts with a cost-effective approach.
II. PROPOSED METHODOLOGY
A. Block diagram

Fig. 1 Proposed block diagram.



The detailed explanation of our work is described in Fig. 1. First, the input image is taken from the database. Then pre-processing with a median filter is done to remove the unwanted artifacts in complicated medical images. The processed image is then decomposed by the Discrete Wavelet Transform (DWT) to grasp fine details of the image. Next, the important features, entropy, contrast, homogeneity and variance, are extracted from the wavelet coefficients. Thereafter, the classification into normal and COVID-19 is done with a feed-forward neural network.

DISCRETE WAVELET TRANSFORM

Wavelet transforms are useful in image processing to accurately analyse abrupt changes in the image, localizing them in both time and frequency. Wavelets have finite duration and come in different sizes and shapes. The DWT is calculated by successively passing the signal through two filters: (i) a high-pass filter, which generates the detail coefficients, and (ii) a low-pass filter, which produces the approximation coefficients. The DWT decomposition is applied recursively (D levels) to the output of the low-pass filter, producing D sets of detail coefficients and one set of approximation coefficients. For a given function f(k), the DWT is given as shown in (1).
$$\mathrm{DWT}(m,n) = \langle f, \psi_{m,n} \rangle = a_0^{-m/2} \sum_{k=-\infty}^{\infty} f(k)\, \psi\!\left(a_0^{-m} k - n b_0\right) \qquad (1)$$

where the inner product basis $\psi_{m,n}(k)$ is given as in (2):

$$\psi_{m,n}(k) = a_0^{-m/2}\, \psi\!\left(a_0^{-m} k - n b_0\right), \qquad m, n \in \mathbb{Z} \qquad (2)$$

Haar, Daubechies, Morlet, Meyer, Mexican hat, Symlet and Coiflet are some of the commonly used wavelet functions. Of these, we used the Daubechies wavelet because of its maximal flatness at certain frequencies. We used three levels of decomposition. After decomposing the image into coefficients, the next step is to bring out the relevant features.
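The decomposition step can be sketched with the PyWavelets library as below; the specific 'db4' wavelet and 3x3 median window are illustrative choices, since the paper names only the Daubechies family and a median filter:

import numpy as np
import pywt
from scipy.ndimage import median_filter

image = np.random.rand(256, 256)           # stand-in for a grayscale CT x-ray
filtered = median_filter(image, size=3)    # pre-processing step of Fig. 1

coeffs = pywt.wavedec2(filtered, wavelet="db4", level=3)   # 3-level 2-D DWT
approximation = coeffs[0]                  # low-pass approximation sub-band
details = coeffs[1:]                       # (horizontal, vertical, diagonal) per level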

FEATURE EXTRACTION
Features extracted from the decomposed coefficients provide more detailed information about the image for diagnosis. Here we use four features: variance, contrast, homogeneity and entropy. Equations (3)-(6) describe the extracted features.

A. Variance
This statistic is a measure of heterogeneity and is highly associated with first-order statistical variables. When the grey-level values depart from their mean, the variance increases.

$$\mathrm{Variance} = \sum_i \sum_j (i - \mu)^2\, P_{ij} \qquad (3)$$

where $\mu$ is the mean of the coefficients $P_{ij}$.
B. Entropy
Entropy measures an image's disorder or complexity. When the image is not texturally uniform, the entropy is high.

$$\mathrm{Entropy} = -\sum_i \sum_j P_{ij} \log_2 P_{ij} \qquad (4)$$
C. Homogeneity
Homogeneity decreases when the contrast rises while the energy remains constant.

$$\mathrm{Homogeneity} = \sum_i \sum_j \frac{P_{ij}}{1 + (i - j)^2} \qquad (5)$$
D. Contrast
Contrast determines how many local variations are present in the image.

$$\mathrm{Contrast} = \sum_i \sum_j (i - j)^2\, P_{ij} \qquad (6)$$
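One way to compute the features of equations (3)-(6) is from a grey-level co-occurrence matrix, as sketched below with scikit-image; the quantisation of the float wavelet coefficients to 8-bit grey levels is an assumption made for illustration:

import numpy as np
from skimage.feature import graycomatrix, graycoprops

def texture_features(coeff):
    # Quantise the (float) coefficients to 8-bit grey levels for the GLCM.
    c = coeff - coeff.min()
    img = (255 * c / (c.max() or 1)).astype(np.uint8)
    glcm = graycomatrix(img, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    p = glcm[:, :, 0, 0]
    i = np.arange(256)[:, None]
    mu = np.sum(i * p)
    return {
        "variance": np.sum((i - mu) ** 2 * p),                    # Eq. (3)
        "entropy": -np.sum(p[p > 0] * np.log2(p[p > 0])),         # Eq. (4)
        "homogeneity": graycoprops(glcm, "homogeneity")[0, 0],    # Eq. (5)
        "contrast": graycoprops(glcm, "contrast")[0, 0],          # Eq. (6)
    }

print(texture_features(np.random.randn(64, 64)))   # toy sub-band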

NEURAL NETWORK AND CLASSIFICATION

Classification is an important part of any recognition system. Here, the classification of COVID-19 patients versus normal subjects is done using a feed-forward neural network (FFNet) [9]. It consists of three types of layers: input, hidden and output. The basic operation of the network divides into (i) learning and (ii) classification. The extracted features are given as input neurons. The data are fed into the input layer and transferred through the network layers until they reach the output, with no feedback. The network learns from the given feature data and is trained with them to acquire prior knowledge. FFNet uses a supervised learning algorithm, and training was performed using a gradient-descent training function with an adaptive-learning-rate back-propagation algorithm. Initial weight and bias values were assigned for each layer. A log-sigmoid transfer function was used as the activation function in this network. The neurons are trained in the hidden layer; the length of the learning phase varies with the input neurons, error, bias and learning rate. The network is generally trained for several epochs (1000) until it reaches the goal, at which point the iteration stops. The output stage consists of two classes: normal and COVID-19 patient. The target vector is set as [1 0], where '1' represents normal and '0' represents COVID-19. Based on the training phase, inputs are classified between the two classes.
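A rough Python equivalent of the MATLAB network described above (4 input features, 20 hidden neurons, logistic activation, adaptive-learning-rate gradient descent, up to 1000 epochs), shown here on the first rows of Tables 1 and 2:

import numpy as np
from sklearn.neural_network import MLPClassifier

# First rows of Tables 1 and 2: [contrast, variance, energy, homogeneity]
X = np.array([
    [9.7325, 0.0328, 0.2050, 0.6107],    # COVID-19 patient 1
    [7.1265, 0.0288, 0.2183, 0.6346],    # COVID-19 patient 2
    [12.2122, -0.0210, 0.1532, 0.5490],  # normal subject 1
    [11.3629, -0.0055, 0.1694, 0.5684],  # normal subject 2
])
y = np.array([0, 0, 1, 1])               # 0 = COVID-19, 1 = normal

net = MLPClassifier(hidden_layer_sizes=(20,), activation="logistic",
                    solver="sgd", learning_rate="adaptive",
                    max_iter=1000, random_state=1)
net.fit(X, y)
print(net.predict(X))                    # predicted class per patient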

RESULTS AND DISCUSSION

A. Database and software


The images are taken from the publicly available database [10]. We used 10 CT x-ray images of COVID-19 patients and 10 x-ray images of normal persons. MATLAB software is used for the implementation.

B. Implementation
The extracted features are fed to a neural network: four features are given as input neurons, with 20 neurons in the hidden layer. Fig. 2 displays a normal person's CT x-ray image, the median-filtered image and the wavelet coefficients. Similarly, Fig. 3 displays a COVID-19 patient's CT x-ray image and its wavelet coefficients.

Fig. 2 Normal X-ray image a)Input image b)Median filtered image c) Approximation coefficient d)Detail coefficient.


Fig. 3 COVID-19 X-ray image a)Input image b)Median filtered image c) Approximation coefficient d)Detail coefficient.

On analysing the COVID-19 patients' x-ray images, the abnormalities are clearly visible in the wavelet coefficients when compared with normal x-ray images. Tables 1 and 2 give the different features extracted from the wavelet coefficients for COVID-19 patients and normal subjects.
TABLE 1
FEATURE VALUES OF COVID-19 X-RAY

Patient No.   Contrast   Variance   Energy    Homogeneity
 1             9.7325     0.0328    0.205      0.6107
 2             7.1265     0.0288    0.2183     0.6346
 3            11.5097    -0.0121    0.1452     0.5512
 4             3.05       0.1035    0.4271     0.7764
 5             7.1637     0.0498    0.2382     0.6499
 6             6.1401     0.0622    0.2681     0.6741
 7             7.2185     0.0767    0.2747     0.6719
 8            14.1918    -0.045     0.1299     0.5176
 9             9.8158    -0.0387    0.155      0.5647
10             6.8297    -0.0029    0.1831     0.6083

TABLE 2
FEATURE VALUES OF NORMAL X-RAY

Subject No.   Contrast   Variance   Energy    Homogeneity
 1            12.2122    -0.021     0.1532     0.549
 2            11.3629    -0.0055    0.1694     0.5684
 3            10.6114    -0.0152    0.1659     0.5692
 4            11.262     -0.0309    0.1502     0.5515
 5             9.6431     0.0193    0.2055     0.6062
 6             9.3795     0.0327    0.223      0.6206
 7            10.7174    -0.0171    0.1638     0.5672
 8             9.8674    -0.0372    0.1418     0.5533
 9            10.2589     0.0037    0.1844     0.5869
10            12.2535    -0.0102    0.1644     0.5589

These features of normal subjects and COVID-19 patients are used to train the neural network. The performance is evaluated
by the confusion matrix shown in Fig. 4. In the first row of the matrix, the first column (green) represents the inputs correctly classified as class '0' (COVID-19), and the next column represents COVID-19 cases misclassified as normal. Similarly, in the second row, the first column represents normal cases misclassified as COVID-19, and the next column shows the correctly classified data of class '1' (normal). It can be observed that, of the 10 COVID-19 cases, 8 are correctly classified and the remaining two are misclassified as normal, while from the normal class 9 inputs are correctly classified and 1 is misclassified as COVID-19. An overall classification rate of about 85% is thus obtained.

CONCLUSION
DWT based decomposition was carried out to analyze the most complicated medical x-ray images of
COVID-19 patients. Significant features are extracted from the wavelet coefficients and are fed towards a neural
network. Classification of normal and COVID-19 patients x-ray images are done by means of feed forward
network classifier. A specificity rate of about 85% is achieved by our proposed method. Hence, it is believed that,
our proposal can provide a better tool in diagnosing the threaten COVID-19 patients with more accuracy. In near
future the work can be extended with different domain decomposition methods and also with more specific
relevant features. In that way, the classification rate can be increased further to reduce the mortality rate.

Fig. 4 Confusion matrix.

REFERENCES
[1] Wu F., Zhao S., Yu B. A new coronavirus associated with human respiratory disease in China. Nature. 2020;579(7798):265–269.
[2] Huang C., Wang Y. Clinical features of patients infected with 2019 novel coronavirus in Wuhan, China. Lancet. 2020;395(10223):497–506.
[3] World Health Organization. Pneumonia of Unknown Cause – China. Emergencies Preparedness, Response, Disease Outbreak News. World Health Organization (WHO); 2020.
[4] Wu Z., McGoogan J.M. Characteristics of and important lessons from the coronavirus disease 2019 (COVID-19) outbreak in China: summary of a report of 72314 cases from the Chinese Center for Disease Control and Prevention. JAMA. 2020;323(13):1239–1242.
[5] Holshue M.L., DeBolt C. First case of 2019 novel coronavirus in the United States. N. Engl. J. Med. 2020;382:929–936.
[6] Kong W., Agarwal P.P. Chest imaging appearance of COVID-19 infection. Radiology: Cardiothoracic Imaging. 2020;2(1).
[7] Singhal T. A review of coronavirus disease-2019 (COVID-19). Indian J. Pediatr. 2020;87:281–286.
[8] Barstugan M., Ozkaya U., Ozturk S. Coronavirus (COVID-19) classification using CT images by machine learning methods. 2020. arXiv preprint arXiv:2003.09424.
[9] Prinza L., et al. Denoising performance of complex wavelet transform with Shannon entropy and its impact on Alzheimer disease EEG classification using neural network. Journal of Medical Imaging and Health Informatics. 2014;4(2):186–196.
[10] Mostafavi, Sayyed Mostafa. COVID19-CT-Dataset: An Open-Access Chest CT Image Repository of 1000+ Patients with Confirmed COVID-19 Diagnosis. Harvard Dataverse, V1, 2021. https://doi.org/10.7910/DVN/6ACUZJ


LANDSLIDE MONITORING AND EARLY WARNING SYSTEM USING


CLUSTER OF SENSORS AND NEURAL NETWORK
Ignisha Rajathi G, Prinza L and Jerald Prasad G
SRI KRISHNA COLLEGE OF ENGINEERING AND TECHNOLOGY, COIMBATORE, TAMIL NADU
TKR COLLEGE OF ENGINEERING AND TECHNOLOGY, HYDERABAD

ABSTRACT
Landslides occur when gravity overcomes the frictional forces keeping the layers of rock in place on a slope. In India, the state of Kerala has been a victim of failure planes that hold weak layers such as shale. The disruption of the frictional balance on a slope creates catastrophe for human life and property. Forecasting a probable landslide greatly curtails its impact. A novel methodology with a structural implementation is formulated in this research by employing a sensor pole integrated with a feed-forward neural network. The sensor pole encapsulates a strain gauge to measure displacement, a gyroscope for change of inclination, and a moisture sensor for the hydration level of the soil. These features are fed to a neural network to estimate the probable occurrence of a landslide in fragile zones, so that the environmental impact downslope can be decreased with an alarming alert system, using this research work.

Keywords: Landslides, Sensors, Neural Network, Strain Gauge

INTRODUCTION

Any disturbance in the frictional bond between soil and rock causes the upper layer to slide away. There are many causes and effects of landslide catastrophes; to name a few, the reasons follow. Heavy rain adds extra weight against friction and lubricates the upper layer beyond measure. Deforestation contributes to landslide disasters: the uprooting of trees whose roots hold the land plates weakens the root structure as the roots rot, and the land becomes liable to slip, with an increase in risk. Earthquakes also trigger landslides: when the bedrock underneath shakes, the impact fringes on the upper layer and the land slides off. Certain landslides happen over many years, as the ground beneath creeps and grows too weary to hold the roads or buildings founded on its surface, owing to instability. Another form of landslide happens due to coastal erosion, where waves collide with cliff rocks and wear them out over time; when the weight of the resulting overhang overcomes the stiffness of the rock, it collapses, causing landslides. Further striking causes include mud or debris flows, very steep sloping terrain, and land that has encountered wildfires. Climatic variation and temperature increase are plausible triggers for more landslides. A survey states that there were more than 18,000 deaths among the 4.8 million people affected by landslides between 1998 and 2017 [1]. Dave Petley, an Earth scientist at the University of Sheffield, has calculated that landslides caused 32,322 fatalities between 2004 and 2010. According to data provided by the Geological Survey of India, the coastal state in the Western Ghats has experienced 67 major landslide events and hundreds of minor ones in half a century [2]. Kerala, "God's own Country", is a narrow strip of India that dazzles with greenery saturated to its permissible level, holding on to prominent orographic features [3]. It is noted to be a densely populated state, ranking third in India. Yet it is invaded by typhoons, soil erosion and landslides, as remarked almost every year. Among these, landslides are considered the most outrageous natural threat to human lives, livestock and infrastructure, causing mammoth grief to mankind and loss of belongings. Almost 55% of the disasters happen during the south-west monsoon season, from June to September, with frequent, prolonged high-intensity rainfall reflecting climatic change in India [4]. In Kerala, 13 of the 14 districts, all except the coastal district of Alappuzha, are prone to landslides. Geology, morphology and human activity, such as migration from the plains to the highlands, are the three major causes of landslides, whose results are loss of life, forestland and human settlements and damage to communication routes. The interpretation of disaster data demands awareness of the complicated interplay between natural hazards and human vulnerabilities. To be specific, a tragic typhoon in an uninhabited region need not be counted as a disaster as long as humanity is safe. Continuous monitoring with varied reporting and recording must be maintained for forecasting criticality or formulating patterns with enumerative figures. The existing technologies for landslide monitoring systems include remote sensing or satellite techniques, photogrammetric techniques, ground-based geodetic techniques, and satellite-based geodetic techniques. However, they lack some accuracy in predicting landslides in real time. We aim to design effective hardware to predict the landslide in
real time and monitor it continuously. An alarm is raised as an early warning in case of a suspected catastrophic landslide, which can permissibly diminish the damage caused by the landslide.
LITERATURE SURVEY

A landslide is an unceasing and inexorable process that is not in our control. Such a disaster cannot be mitigated at will, but it can be monitored. Our research has narrowed down the landslide statistics disclosing the details of landslide impact in Kerala. A map released by the Disaster Management Authority shows that around 5,607.5 sq. km of Kerala, 14.4% of its total area, is vulnerable to landslides. A narrative of vulnerable zones is shown in figure 1. Regions in the Idukki and Palakkad districts are highly vulnerable. Nedumangad in Thiruvananthapuram, Meenachil and Kanjirapally in Kottayam, Thodupuzha and Udumbanchola in Idukki, Chittur and Mannarkkad in Palakkad, Nilambur and Ernad in Malappuram, and Taliparamba in Kannur fall into the high-risk category. Apart from these, 25 more taluks are on the list. Before implementing the system for landslide monitoring and early warning, the viability of the existing technologies [6] is discussed.

Fig.1. A map that shows regions which are highly vulnerable (red) to landslides and moderately prone(orange). a)
Wayanad. b) Malappuram c) Map showing the Flood and Landslide prone areas of Kerala. Courtesy: National Centre
for Earth Science Studies [5].

 Remote sensing or satellite techniques - Space-derived information has significant potential for landslide hazard assessment and improved understanding of landslide processes [7]. The landslide is monitored with the help of satellites, which take repeated images for analysis. Similar sensors and methods are important for seismic hazards and the management of earthquake disasters. The pitfall of this technique is that deploying special satellites for landslide detection is very costly, and modern satellite cameras are in demand; a blurry image leads to inaccurate interpretation of the data.
 Photogrammetric techniques - These can be an effective tool for monitoring actively moving landslides and for analysing the velocity or strain-rate fields. They allow the determination of ground displacements over long periods by comparing corresponding sets of aerial photographs [8]. The accuracy of photogrammetric positioning with special cameras depends mainly on the accuracy of the determination of the image coordinates and on the scale of the photographs. This is not satellite monitoring: special 3D cameras are installed at a height, and the land terrain is observed from above.
 Ground-based geodetic techniques - These make use of many instruments and methods of measurement for absolute displacement computation [9]. They are usually employed according to an episodic monitoring program. In some cases the geodetic sensors are fixed on control points and take measurements at every repetition; in other cases a sensor is permanently placed on an observation point and measures targets on control points according to a computer-controlled program.
 Satellite-based geodetic techniques - These [10] make use of satellite positioning systems, such as the Global Positioning System (GPS). Several methodologies exist that can guarantee highly accurate, continuous and reliable results.
 Geotechnical techniques - These [11] make use of sensors permanently working on or in the structure or region under consideration. They can operate around the clock and measure changes in the geometrical and/or physical characteristics of the deforming item (relative deformation). They can also use a telemetric system for real-time transmission of measurement data to a control center.


METHOD AND MATERIALS

Initially, the areas vulnerable to landslides are chosen using data provided by the Kerala State Disaster Management Authority. In each such area, a cluster of sensor poles is placed, keeping a specific known distance between sensors.
3.1. BLOCK DIAGRAM AND CIRCUIT DIAGRAM

Figure 2 shows the block diagram of our proposed landslide monitoring and early warning system. A cluster of sensor poles is used to monitor soil behaviour at different locations. Each sensor pole consists of three sensors: a strain gauge, a gyroscope and a moisture sensor, measuring soil displacement, change in inclination and the water content of the soil respectively. The sensor poles are requested to send their sensor data in a particular sequence, one at a time, so that the location of each sensor pole can be traced easily. The sensor data of a pole are collected at the computing station using a wireless technology such as Bluetooth or WiFi. The received data are then fed to the neural network, which is trained with these feature vectors, and after analysis the probability of occurrence of a landslide is obtained. Finally, the warning system is triggered if the estimated probability is noticed to be high. The deliberations of the circuit diagram and its components are discussed in this section. The circuit diagram, with wired connectivity from a sensor pole to the strain gauge, gyroscope and accelerometer modules and connectivity to the computing station over Bluetooth or WiFi, is shown in figure 3.

Fig. 2. Block Diagram of landslide monitoring and early warning system.

Fig 3. Circuit Diagram of One Sensor pole wirelessly (Bluetooth) connected to Computing Station
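On the computing-station side, the polling of the poles described above could look like the sketch below; the port names, baud rate and comma-separated message format are assumptions, not specifications from the paper:

import serial   # pyserial

POLES = ["/dev/rfcomm0", "/dev/rfcomm1"]    # one HC-05 serial link per pole (assumed)

def read_pole(port, baud=9600):
    # Read one "strain,tilt,moisture" line from a sensor pole (format assumed).
    with serial.Serial(port, baud, timeout=2) as link:
        line = link.readline().decode(errors="ignore").strip()
    strain, tilt, moisture = (float(v) for v in line.split(","))
    return strain, tilt, moisture

# Poll the poles one at a time, as described above, to build feature vectors.
readings = [read_pole(p) for p in POLES]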
3.3. HARDWARE IMPLEMENTATION

The components used and their specifications are discussed in this section and shown in figures 4 to 8.


Table 1. Components and their specifications, with diagrams

NAME                        SPECIFICATION   DIAGRAM
Soil Moisture Sensor [12]                   Fig. 4
Gyroscope [13]              MPU-6050        Fig. 5
Strain Gauge Module [14]    BF350-3AA       Fig. 6
Bluetooth Module [15]       HC-05           Fig. 7
Arduino Board [16]          Nano            Fig. 8

3.4. PROCEDURAL IMPLEMENTATION

The neural network is trained with the necessary feed of data: the more data used for training, the more accurate the network's results. To obtain the data values, many experiments were conducted using different types of soil, changing the inclination of the soil, and adding other factors that trigger or cause the soil movement that results in a landslide. For this research, however, a particular type of soil called laterite soil, which is very common in landslide-prone areas, was used, and it was inclined at a fixed slope. The soil was collected in an LxB box with a thickness of 25 cm. The box was then inclined at a slope of 30°, and one side of the box was cut so that the soil could fall off. In this block of soil, a sensor pole containing the three modules was placed. The experimental setup is shown in figure 9. Water was sprayed from one side of the soil at particular intervals and the sensor data were collected. Based on the observed soil displacement and the movement of the sensor pole, a probability of collapse of the soil block was assigned to each reading. The sensor data along with these probabilities can be used as training data for the neural network.


Fig.9 Experimental Setup

RESULTS AND DISCUSSIONS

4.1. TRAINING THE NEURAL NETWORK

While conducting the soil experiment, the sensor data were recorded along with the outcome, i.e., whether the soil block collapsed or remained stable, remarked as a Boolean value of 1 or 0 respectively. With the sensor data (input data) and the corresponding outcomes (target data) available, the training of the neural network proceeded iteratively. Figure 10 shows the interface in which the neural network is developed in MATLAB.
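The labelled readings can be assembled and used as in the following sketch; the numeric values are invented placeholders, and scikit-learn stands in for the MATLAB toolbox actually used:

import numpy as np
from sklearn.neural_network import MLPClassifier

# [strain, inclination, moisture] per reading; values are invented placeholders.
X = np.array([[0.12, 30.1, 0.18],
              [0.47, 33.8, 0.36],
              [0.81, 41.2, 0.52],
              [0.95, 44.0, 0.61]])
y = np.array([0, 0, 1, 1])     # Boolean outcome: did the soil block collapse?

net = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000,
                    random_state=0).fit(X, y)
prob = net.predict_proba([[0.60, 38.0, 0.45]])[0, 1]   # probability of collapse
if prob > 0.8:                 # warning threshold (assumed)
    print(f"Landslide warning: probability {prob:.2f}")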


Fig. 10 a, b) Input and target data obtained from the soil experiment. c) Assigning the input and target data for developing the Neural Network.

4.2. SIMULATING THE TRAINED NEURAL NETWORK

Using the same setup as the experiment, the sensor data from the sensor pole were recorded for simulating the trained neural network. Figure 11 shows the new data obtained for the simulation and the interface in which the trained neural network is simulated with the new input (sensor) data.

Fig.11 a) Sensor data for simulating the trained Neural Network. b) Assigning the new data for simulation. c) Output
obtained by simulating the neural network.


Fig.12 a) Designing the Sensor Pole. b) Soil Experiment.


The design and electrical soldering of the sensor pole with its three modules, and the soil experiment conducted by the research team, are shown in figure 12. Figure 13a displays the strain gauge sensor data, recorded over 30 days. The dielectric moisture sensor data are shown in figure 13b; this graph pictures the volumetric water content of the soil. Once the data are collected from the sensors, the network is trained with them, and the possibility of a landslide can be predicted.

Fig.13 a) Strain gauge data b) Dielectric moisture sensor data [17]

CONCLUSION

A landslide is defined as the movement of a mass of rock, debris or earth down a slope. The damage caused by landslides can be limited if a system that can forecast and warn about the probability of occurrence as early as possible is in place. The two key factors of this implementation were planning and then design. Beyond the existing technologies, this research aimed at effective hardware to predict landslides in real time and monitor them continuously. By exposing the slope to continuous observation for trigger purposes, this Landslide Monitoring and Early Warning System Using a Cluster of Sensors and a Neural Network performs strongly on the true detection of landslides. Each sensor pole in the cluster consists of a moisture sensor, a gyroscope and a strain gauge for collecting data through experiments. From the simulation results, the extracted data were fed to the neural network, which ultimately provides the probability of occurrence of a landslide from quantitative surface deformation. The real-time landslide prediction of this system lacked nothing in terms of accuracy and was found to be highly promising in serving the societal cause in a real-time environment.
REFERENCES
1. https://www.who.int/health-topics/landslides#tab=tab_1
2. https://india.mongabay.com/2018/09/kerala-landslides-gsi-advocates-land-use-and-zoning-regulations/
3. Kuriakose, S., Sankar, G. & Muraleedharan, C. (2010). Landslide fatalities in the Western Ghats of Kerala, India. p. 8645.
4. Kuriakose, S., Sankar, G. & Muraleedharan, C. (2008). History of landslide susceptibility and a chorology of landslide-prone areas in the Western Ghats of Kerala, India. Environmental Geology. 57. 1553-1568. 10.1007/s00254-008-1431-9.
5. https://www.ncess.gov.in/
6. Savvaidis, P, “Existing Landslide Monitoring Systems and Techniques.” (2003). School of Rural and Surveying
Engineering, The Aristotle University of Thessaloniki, From Stars to Earth and Culture In honor of the memory of
Professor Alexandros Tsioumis pp. 242-258.
7. Marco Scaioni, Laura Longoni, Valentina Melillo and Monica Papini, (2016) “Remote Sensing for Landslide
Investigations: An Overview of Recent Achievements and Perspectives”, Remote Sensing, MDPI
https://www.mdpi.com/2072-4292/6/10/9600/htm
8. Devrim Akca, (2013) “Photogrammetric Monitoring of an Artificially Generated Shallow Landslide”, The
Photogrammetric Record. https://doi.org/10.1111/phor.12016
9. Marco Scaioni, Maria Marsella, Michele Crosetto, Vincenza Tornatore and Jin Wang, (2018), Geodetic and
Remote-Sensing Sensors for Dam Deformation Monitoring, Sensors, MDPI https://www.mdpi.com/1424-
8220/18/11/3682
10. William Emery, Adriano Camps, (2017), “Remote Sensing Using Global Navigation Satellite System Signals of
Opportunity”, Introduction to Satellite Remote Sensing. https://www.sciencedirect.com/topics/earth-and-
planetary-sciences/gnss
11. David A. Saftner, Roman D. Hryciw, Russell A. Green, Jerome P. Lynch, and Radoslaw L. Michalowski, (2008),
“The Use of Wireless Sensors in Geotechnical Field Applications”, Proceedings of the 15th Annual Great
Lakes Geotechnical/Geoenvironmental Conference, Indianapolis, IN.
https://www.researchgate.net/publication/228549738_The_Use_of_Wireless_Sensors_in_Geotechnical_Field_Applications
12. https://components101.com/modules/soil-moisture-sensor-module
13. https://components101.com/sensors/mpu6050-module
14. https://medium.com/@encardio/strain-gauge-principle-types-features-and-applications-357f6fed86a5
15. https://components101.com/wireless/hc-05-bluetooth-module
16. https://www.arduino.cc/en/pmwiki.php?n=Main/ArduinoBoardNano
17. Ramesh, Maneesha V., and Nirmala Vasudevan. "The deployment of deep-earth sensor probes for landslide
detection." Landslides 9.4 (2012): 457-474.


FACE EMOTION DETECTION USING CNN AND HAAR CASCADE CLASSIFIER


SANJAY SANTHOSH
COLLEGE OF ENGINEERING PERUMON

Abstract—
Facial emotion recognition is one of the promptly developing branches within the machine learning domain. In this
paper, we present our model based on Convolutional Neural Networks, which is trained on the FER2013 dataset.
Keywords—Emotion Recognition, Convolutional Neural Networks, Facial Expressions

INTRODUCTION

Facial emotion recognition is the process of detecting human emotions from facial expressions. The human brain recognizes emotions automatically, and software has now been developed that can recognize emotions as well. This technology is becoming more accurate all the time, and will eventually be able to read emotions as well as our brains do. Facial expression recognition, as an important means of intelligent human-computer interaction, has a broad application background. It has been applied in the fields of assistive medicine, distance education, interactive games and public security. Facial expression recognition extracts the information representing the facial expression features from the original input facial expression images through computer image processing technology, and classifies the facial expression features according to human emotional expression, such as happiness, surprise, aversion and neutrality. In recent years, the development of facial expression recognition technologies has been rapid and many scholars have contributed to it. Although the CNN algorithm has made some progress in the field of facial expression recognition, it still has some shortcomings, such as long training times and low recognition rates against complex backgrounds. To avoid the complex process of explicit feature extraction in traditional facial expression recognition, a recognition method based on CNN and image edge detection has been proposed. Firstly, the facial expression image is normalized, and the edge of each layer of the image is extracted in the convolution process. The extracted edge information is superimposed on each feature image to preserve the edge structure information of the texture image. Then, the dimensionality of the extracted implicit features is reduced by max pooling. Finally, the expression of the test sample image is classified and recognized using a Softmax classifier. In this paper, we propose a different CNN model that recognizes 7 facial emotions (sad, happy, angry, surprise, disgust, fear and neutral) in real time. The proposed model is trained on the FER2013 dataset, connecting different parts of a face to facial action units.
METHODOLOGY

The facial expression recognition system is implemented using a Convolutional Neural Network (CNN). To solve the facial emotion recognition task, we used the FER2013 dataset to train the model. FER2013 contains black-and-white pictures labelled with 7 emotions (angry, happy, sad, fear, disgust, surprise, neutral). The system extracts the information representing the facial emotion features from the original input images, and classifies those features according to human emotional expression, such as happiness, sadness, aversion and neutrality. To avoid the complex process of explicit feature extraction in traditional facial expression recognition, a method based on a convolutional neural network (CNN) and Haar cascade classifiers is used: the Haar cascade locates the face region, and the CNN classifies its expression, as sketched below.
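As a hedged illustration of the detection stage (a minimal OpenCV sketch using the library's bundled pretrained frontal-face cascade; the input file name is an assumption, and this is not the authors' code):

```python
# Minimal sketch: locate faces with OpenCV's pretrained Haar cascade and
# crop/resize them to the 48x48 grayscale patches FER2013-style models expect.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("person.jpg")            # assumed input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# detectMultiScale returns one (x, y, w, h) box per detected face
for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1,
                                             minNeighbors=5):
    face = cv2.resize(gray[y:y + h, x:x + w], (48, 48))
    # 'face' is now ready to be normalized and passed to the CNN classifier
```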

IMPLEMENTATION

We decided to train experimental models on the FER2013 dataset. Each convolutional layer is followed by a pooling layer, and as a final step the feature maps are flattened and fed to the dense layers. In all hidden layers we used the ReLU activation function, since deep convolutional neural networks train well with ReLU. After building our model we started to train it; a sketch of such a model follows.
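A minimal sketch of a model in this spirit (the filter counts and exact arrangement of the 5 convolutional layers are assumptions consistent with the conclusion, not the published architecture):

```python
# Sketch: a 5-convolutional-layer CNN for 48x48 grayscale FER2013 images,
# with pooling and dropout layers. Hyperparameters are illustrative.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(48, 48, 1)),
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.Conv2D(64, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Dropout(0.25),
    layers.Conv2D(128, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(256, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Dropout(0.25),
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(7, activation="softmax"),    # the 7 emotion classes
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```

In practice, the 48x48 inputs would be the Haar-cascade face crops produced in the methodology stage.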


[Figure: input image]

[Figure: the structure of the CNN model]


The facial expression recognition system is implemented using a convolutional neural network. The block diagram of the system is shown in the following figures:

[Figure: block diagram, testing phase]

RESULT


Model    Training accuracy (%)    Testing accuracy (%)
CNN      92.12                    90

CONCLUSION

In this project, a convolutional neural network is implemented to classify human facial expressions, i.e. happy, sad, surprise, fear, anger, disgust, and neutral. The proposed model consists of 5 convolutional layers with the addition of pooling and dropout layers. The model gave good results on the dataset.



A RULE BASED MALAYALAM PARTS OF SPEECH TAGGER


Anjali M K1, Babu Anto P2
1Research Scholar, Department of Information Technology, Kannur University, Kerala, India
2Professor, Department of Information Technology, Kannur University, Kerala, India

Abstract -
Parts-of-speech tagging is the process of assigning one of the parts of speech (POS) to a given word in a sentence. This paper presents a rule-based method for parts-of-speech tagging in the Malayalam language with a suffix-stripping approach. The system uses a dictionary containing root words, uses the identified suffixes to find out the category, and applies a set of orthographic rules to revert sandhi changes. POS tagging serves as a very important preprocessing task in language processing activities, so POS-tagged corpora are an essential tool in Natural Language Processing.

Keywords: Malayalam Parts of speech tagging, tagset, rule based, suffix stripping.

INTRODUCTION

Natural Language Processing (NLP) is concerned with the development of computational models of aspects of human language processing. Natural languages are ambiguous; part-of-speech tagging is one of the disambiguation techniques. Ambiguity can occur at the word level with respect to a word's syntactic class.
E.g. book, silver, study.
This ambiguity can be resolved by part-of-speech tagging. Parts of speech provide a large amount of information about a word and its neighbors, which contributes to their significance in the field of language processing. POS tagging is considered one of the basic necessary tools in many Natural Language Processing applications such as parsing, word sense disambiguation, information retrieval, question answering, information processing, and machine translation.

MATERIALS AND METHODS


II. PARTS-OF-SPEECH TAGGING METHODS
Part-of-speech tagging methods fall under three general categories: rule-based, stochastic and hybrid [2].
Rule-based tagger
Most rule-based taggers use hand-coded rules to assign tags to words. These rules use a lexicon to obtain a list of candidate tags and then use rules to discard incorrect tags. For example, consider the noun-verb ambiguity in the following sentence:
The show must go on.
The potential tags for the word show in this sentence are {VB, NN}. One can resolve the ambiguity by using the following rule:
IF preceding word is determiner THEN eliminate VB tag.
This rule simply disallows verbs after a determiner; using it, the word show in the given sentence can only be a noun. A toy sketch of this rule in action follows.
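As a toy illustration (the tiny lexicon and the single rule below are assumptions for demonstration, not from the paper):

```python
# Toy rule-based disambiguation: a lexicon proposes candidate tags and a
# hand-coded rule discards VB after a determiner, as in the example above.
LEXICON = {"the": {"DT"}, "show": {"VB", "NN"}, "must": {"MD"},
           "go": {"VB"}, "on": {"IN"}}

def tag(words):
    tagged = []
    for i, w in enumerate(words):
        candidates = set(LEXICON.get(w.lower(), {"NN"}))
        # Rule: IF preceding word is a determiner THEN eliminate the VB tag
        if i > 0 and "DT" in LEXICON.get(words[i - 1].lower(), set()):
            candidates.discard("VB")
        tagged.append((w, sorted(candidates)[0]))
    return tagged

print(tag("The show must go on".split()))
# [('The', 'DT'), ('show', 'NN'), ('must', 'MD'), ('go', 'VB'), ('on', 'IN')]
```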
Stochastic Tagger
A stochastic tagger uses a corpus to compute the probability of a given word having a given tag in a given context.
Hybrid Tagger
Hybrid taggers share the features of both the rule-based and stochastic tagging approaches. Like rule-based taggers they use rules to determine the tags for words; like stochastic taggers they use machine learning, and the rules are automatically induced from the data. The transformation-based (Brill) tagger is an example of a hybrid tagger.
III. POS TAGGING FOR MALAYALAM
POS Taggers proposed for Malayalam are:

i) A parts-of-speech tagger based on a stochastic Hidden Markov Model was proposed by Manju K., Soumya S. and Sumam Mary Idicula [3].

ii) Rajeev R R, Jisha P Jayan and Elizabeth Sherly proposed a Malayalam POS tagger based on TnT with the Hidden Markov Model following the Viterbi algorithm [4]. The tagset used was developed by IIIT. The tagger gives about 90.5% accuracy.

iii) Antony P J, Santhanu P Mohan and Dr. Soman K P of Amrita University, Coimbatore, proposed a tagger based on the Support Vector Machine (SVM) algorithm [5], which achieves an accuracy of 94%.

iv) CIIL Mysore developed a POS tagger using a hybrid approach [6], attaining a precision of 86.2%.

RESULTS AND DISCUSSIONS

IV. RULE BASED POS TAGGER FOR MALAYALAM WITH SUFFIX STRIPPING APPROACH
Malayalam has an agglutinative grammar. Even though the word order is subject–object–verb, other orders can also be used. Nouns are inflected for gender, case and number, whilst tense, aspect and mood information is bound to verbs. Vachakam (declinable words) and dyothakam (indeclinable words) are the main divisions of Malayalam words. Vachakam is the meaningful word; the connectives which have no specific meaning by themselves but connect vachakas together are called dyothakam. Nipatham and avyayam are the classifications of dyothakam, while vachakam is divided into noun, verb and bedhakam. Malayalam is thus a morphologically rich and highly agglutinative language.

Since Malayalam is a highly inflected language, this property can be used to identify the lexical category of a word.

The system uses a dictionary containing root words, uses the identified suffixes to find out the category, and applies a set of orthographic rules to revert sandhi changes.

Outline for POS tagging:

- Tokenize the given sentence by locating word boundaries.
- Check whether the token is in the dictionary and apply the tag.
- If not, identify the suffixes in the word and match with the suffix list; if matched, apply the rules to categorize the tag.
- If ambiguity persists, identify the stem by stripping the suffixes and applying the reverse morphophonemic rules.
- Do a dictionary look-up and apply the tag.


The ending point of a word and the beginning of the next word is called a word boundary, and tokenization is performed by locating word boundaries: if a character is equal to a delimiter, such as the punctuation marks ' . ', ' , ', ' " ', '?', etc., the text is tokenized there. If the given token is in the dictionary, the corresponding tag is applied; otherwise the token is matched against the identified suffix list, which recognizes most verbs, nouns and adjectives by applying the appropriate rules. If ambiguity remains, we strip the suffixes and apply the appropriate morphophonemic rules to revert the sandhi changes, obtain the correct stem, and then proceed with a dictionary look-up to get the correct tag. This step is required when certain suffixes appear. A sketch of this pipeline is given below.
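A compact sketch of the outlined pipeline (a toy illustration using Latin-script stand-ins, since the paper's Malayalam suffix examples did not survive extraction; the dictionary, suffix list and sandhi stub are assumptions):

```python
# Toy suffix-stripping tagger following the outline above: dictionary
# look-up first, then suffix matching, then stripping plus a (stub)
# sandhi-reversal rule and a second dictionary look-up.
DICTIONARY = {"puthakam": "NN", "avan": "PRN"}   # root word -> tag
SUFFIX_TAGS = [("kal", "NN"), ("unnu", "VBF")]   # suffix -> category

def revert_sandhi(stem):
    return stem  # stub for the orthographic rules that undo sandhi changes

def tag_token(token):
    if token in DICTIONARY:                       # dictionary look-up
        return DICTIONARY[token]
    for suffix, tag in SUFFIX_TAGS:               # suffix matching
        if token.endswith(suffix):
            stem = revert_sandhi(token[: -len(suffix)])
            return DICTIONARY.get(stem, tag)      # final look-up, else rule tag
    return "NN"                                   # default for unknown tokens

for tok in ["puthakamkal", "avan", "parayunnu"]:
    print(tok, tag_token(tok))                    # NN, PRN, VBF
```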


For nouns, we consider the inflections caused by number and case markers: for number, the plural markers, and for case, the accusative, sociative, dative, instrumental, genitive and locative cases and their combinations.

For verbs, we consider the tense, aspect and mood suffixes, e.g. those marking a finite verb (VBF); for adjectives and adverbs, their characteristic suffixes. A dictionary look-up is used for the non-inflected words.
The identified tagset is:

Category           Tag      Category         Tag
Noun               NN       Pronoun          PRN
Verb Finite        VBF      Postposition     PSP
Verb Nonfinite     VNF      Demonstrative    DEM
Verb Infinite      VINF     Conjunction      CON
Verbal Auxiliary   VAUX     Quantifiers      QTF
Verbal Noun        VBG      Ordinals         ORD
Adverb             ADV      Interjection     INJ
Adjective          ADJ      Intensifier      INTF

Example Sentence:
/ADJ /NN /NN /VBF

CONCLUSION

POS tagging plays a significant role in the field of NLP, and it is a preprocessing task for many language processing activities. The accuracy of many NLP applications depends on the accuracy of the POS tagger, and large annotated corpora are essential in present-day NLP; annotated corpora serve as an important tool for investigators of natural language processing. Here we have built a rule-based part-of-speech tagger for Malayalam with a suffix-stripping approach based on a limited tagset, and a tagset is proposed. The tagset can be elaborated with more tags and subtags.

REFERENCES

[1] Jurafsky, D., Martin, J. H., 'Speech and Language Processing - An Introduction to Natural Language Processing, Computational Linguistics and Speech Recognition', Prentice Hall, Upper Saddle River, New Jersey, 2000.
[2] Tanveer Siddiqui , U S Tiwary, ‘Natural Language Processing and Information Retrieval’, Oxford University Press.
[3] Manju K., Soumya S., Sumam Mary Idicula, 'Development of a POS Tagger for Malayalam - An Experience', ARTCom 2009: International Conference on Advances in Recent Technologies in Communication and Computing, pp. 709-713, 2009.
[4] Rajeev R R, Jisha P Jayan, Elizabeth Sherly, 'Parts of Speech Tagger for Malayalam', IJCSIT International Journal of Computer Science and Information Technology, vol. 2, no. 2, December 2009.
[5] Antony P J, Santhanu P Mohan and Soman K P (2010), ‘SVM Based Parts Speech Tagger for Malayalam’, International
Conference on-Recent Trends in Information, Telecommunication and Computing (ITC 2010).
[6] http://www.ldcil.org/workInProgress.aspx
[7] Eric Brill, 'A Simple Rule-Based Part of Speech Tagger', Proceedings of ANLC '92, the Third Conference on Applied Natural Language Processing, pp. 152-155.
[8] A R Rajarajavarma, 'Keralapanineeyam', National Book Stall.


A survey on Alzheimer’s disease diagnosis and its impacts on EEG

1Prinza L, 2Beena Mol M, 3Ignisha Rajathi G and 4Jerald Prasath G

1TKR College of Engineering and Technology, Hyderabad
2LBS College of Engineering, Kasargod
3Sri Krishna College of Engineering and Technology, Coimbatore
Abstract – Alzheimer's disease (AD), an irreversible, neurodegenerative disease, reduces brain cell growth and leads to brain death. Worldwide, nearly 35.6 million people are believed to be living with Alzheimer's disease or other dementias. Analyzing the electrical brain signal at an early stage would reduce the death rate. Computer-aided diagnosis (CAD), which acts as a second opinion to the radiologist, plays a very crucial role in early detection. In this paper, we discuss the diagnosis of Alzheimer's disease and its effect on the EEG signal. We also compare two detection techniques based on the Complex Wavelet Transform (CWT) and Empirical Mode Decomposition (EMD), both of which provide detailed time and frequency analyses of the complex EEG signal. Early diagnosis of AD with an accuracy of about 95% is achieved by CWT, a promising result for predicting AD in its pre-clinical stage.

Index Terms – Alzheimer's disease, EEG, CAD, CWT, EMD.

INTRODUCTION
Dementia is a neurological disorder that affects the reasoning and thinking capability of brain activity. Several types of dementia have been categorized, but AD is the foremost and riskiest one. Dementia was first described by Dr. Alois Alzheimer while studying the brain of Auguste Deter (1901). Neuritic plaques and neurofibrillary tangles are the two types of abnormalities in neurons: neuritic plaques are present outside the brain's nerve cells, whereas neurofibrillary tangles lie within the nerve cell. These plaques and tangles, the signs of AD, appear to interfere with communication among neurons in the cerebrum, consequently disrupting mental activity. There is no sure way to completely eliminate AD, but plenty of treatments and medical care exist to postpone its symptoms. AD usually occurs in people aged 75 and above. The volume of the hippocampus is reduced by 10 to 12% in the primary stage, by 20 to 30% in the middle stage, and by 30 to 40% in the later stage [1]. The hippocampal area shrinks by about 0.24 to 1.73% per year in the normal human brain, but in Alzheimer's-affected people the shrinking rate is 2.2 to 5.9% per year. Nearly 44 million people worldwide have Alzheimer's or a related dementia. Typically, people about 65 years old may be affected by AD, and 26.6 million cases were reported in 2006. Only 1 in 4 people with AD have been diagnosed. It is expected that one in every eight men and one in every four women will be affected by AD, and it is projected that one in every eighty-five people may be affected on or before 2050. Once AD is diagnosed, the survival period is about 7 years [2]. 30% of people with Alzheimer's also have heart disease, and 29% also have diabetes. Diagnosis of AD at the preclinical or Mild Cognitive Impairment (MCI) stage supports patients in planning their future and may reduce or fully avoid the potential hazards they face. In most developing countries, the occurrence of AD grows with increasing age, while clinical methods to detect AD at the very earliest stage remain insufficient.

A. The Economic Impact of Alzheimer's Disease

The cost of AD amounts to $172 billion annually. AD patients utilize a large amount of healthcare, hospice, and Long Term Care (LTC) facilities. The global cost of Alzheimer's and dementia is estimated at $605 billion, equivalent to 1% of the entire world's gross domestic product [3]. Medicare and Medicaid were expected to pay $154 billion in 2015 for health care, long-term care and hospice for people with Alzheimer's and other dementias.
1) Impact of Alzheimer’s disease in Worldwide:
AD is the 6th leading cause of death in America. 5.5 million Americans are living with AD [4]. More than
220 US dollar is being spent on caring the Alzheimer patients in every year. In Australia, approximately about 332000
civilians are surviving with dementia and this is the third death causing disease in women in Australia.1700 Alzheimer cases
have been detected per week and the rate is one person in each 6 minutes. The studies prove that this rate may increase up to
7400 case per week by 2050. In UK, totally 815827 persons were affected with dementia in 2013[5].Out of this, 773502
persons are above 65 years old which includes 1 in every 79 of entire UK and one in every fourteen over 65 years old
age. As many as five lakhs dementia and with Alzheimer cases are calculated in Canada and on these, seventy thousand
cases are below 65 and fifty thousand cases are below 60. One in every eleven Canadian people above 65 years old are
affected by Alzheimer and the rate is high among Canadian women
2) Impact of Alzheimer’s disease in India:
Alzheimer is fourth death causing disease in Asia compared to cancer and heart disorders. Around 3.7
million Alzheimer cases are detected in 2010, including 1.5 million men and 2.1 million women [6]. In reports, it is
clearly mentioned that the rate will increase up to 6.35 million in 2025 and up to 7.61 million by 2030, if there is no
44
NATCON 21 College of Engineering Trikaripur
Conference Proceedings
13/12/21 & 14/12/21
proper concentration on mental and medical care on elderly citizens in India are taken. Around 18 to 19 million people
are affected by dementia all over the globe and one by fourth of them are from India alone. But in India, there is not
much awareness on AD and also the appropriate facilities are not much available in India than the western part of the
world. The mortality rate of AD in India is shown in Fig. 1. Medical practitioners have founded that the identification of
symptoms of AD used to take long period because these symptoms are usually developed slowly. As a result, AD will
become a severe interference with routine activities .The mortality rate of India is shown in table 1.1.

Fig. 1 Statistics in India

Table 1.1 Mortality rate in India

Disease                   Rank    Deaths
Coronary heart disease    1       1,249,587
Road traffic accidents    10      197,135
HIV/AIDS                  13      184,935
Alzheimer's/Dementia      47      18,439
Liver cancer              48      18,043

DIAGNOSIS OF AD USING COMPUTER AIDED TECHNIQUE

Many proven clinical screening methods are available, such as Computed Tomography (CT), Magnetic Resonance Imaging (MRI), Functional Magnetic Resonance Imaging (fMRI), Magnetoencephalography (MEG), Positron Emission Tomography (PET), nuclear magnetic resonance spectroscopy, electrocorticography, Single-Photon Emission Computed Tomography (SPECT), Near-Infrared Spectroscopy (NIRS), Event-Related Optical Signal (EROS) and Electroencephalography (EEG). These technologies help radiologists to screen for early brain degeneration, and Computer Aided Diagnosis (CAD) acts as a second opinion to the radiologists. Neuroradiologists call on CAD owing to the lack of skilled observers, their tight schedules and high inter-observer variability. CAD has boosted accuracy and consistency by reducing image reading time, thereby acting as a significant standpoint to improve the performance of radiologists [7,8]. Different CAD-based methods have been proposed, investigating grey matter (GM) concentration contrasts [9], decay of subcortical limbic structures [10], and general cortical decay [11]. A few authors [12,13] have separated AD patients older than 80 from controls by means of ANNs. In [12], Neurofibrillary Tangle (NFT) and Neuritic Plaque (NP) counts in the neocortex and hippocampus are fed to the network. NFTs correlate with cognitive capacity not only in AD, but also in typical aging and MCI.

A. Role of Computer Aided Diagnosis in Reducing the Mortality Rate

Computer-aided diagnosis can separate AD from controls with 87-95% accuracy, AD from frontotemporal lobar degeneration with 89% accuracy, MCI from healthy controls with 90% accuracy, and MCI subjects converting to AD from non-converters with 81% accuracy [8]. In Lehmann's work [14], 89% sensitivity and 88% specificity were accomplished for moderate AD versus controls using several classifiers, for example SVM, random forest and neural networks. The results acquired with Linear Discriminant Analysis (LDA) are good, with a mean precision of 92.30% [12]. Ref. [15] achieved around 80% sensitivity by using a wavelet-based ANN classifier for picking out AD from EEG. Ref. [16] shows 93% sensitivity and 85% specificity (HC/AD), 67%/69% (stable MCI (S-MCI)/P-MCI) and 86%/82% (HC/P-MCI). Ref. [17] proposed the NMF-SVM technique, which yields up to 91% classification accuracy with high sensitivity and specificity rates (above 90%). Thus CAD helps in diminishing the death rate by detecting AD at an earlier stage; in this way CAD determines "probable Alzheimer's disease" or "possible Alzheimer's disease" very accurately. CAD helps in determining atrophic image features and in identification of AD; further classification has been done by several researchers [18,19]. The preprocessing module incorporates several major biosignal processing functions in CAD tools for segmenting signal data for artifact detection, artifact reduction and quality control, and the signal processing module acts as an interface to standard signal processing functions and wrapper functions for more complex analysis.

CAD INFLUENCE ON EEG

Even though MRI, as an anatomical approach, and PET, as a metabolic approach, are being investigated nowadays, they are the costliest ones, whereas EEG, a non-invasive neurophysiological biomarker, is meaningful both cost-wise and observation-wise and is available in common clinics too. The Event Related Potentials (ERPs) gained from the EEG therefore act as a suitable and potential biomarker in AD diagnosis. Ref. [16] has shown classification with EEG of HC versus stable MCI (S-MCI) and of MCI-to-AD progressors (P-MCI) by means of LDA- and SVM-based classifiers. Data from EEG, for example relative power [20], distribution of spectral power over the scalp, and measures of synchronization [14] including mutual information, phase synchrony, the coherence function, Granger causality and the correlation coefficient [21], are extracted to enhance the ability of EEG to recognize the difference between AD and typical aging. Studies attest that AD has 3 main effects on EEG: reduction in the complexity of EEG signals, slowing of the EEG, and perturbations of EEG synchrony. The normal and AD EEG plots are shown in Fig. 2. Alpha, beta (8-30 Hz) and gamma (30-100 Hz) rhythms lie in the higher frequency range, and these higher frequencies are associated with low power in AD. Delta and theta rhythms lie at low frequencies, between 0.5 Hz and 8 Hz, and are associated with high power in AD. Severe dementia patients show a reduction in alpha and a rise in delta activity, whereas mild dementia shows a decline in beta and an increase in theta activity. Irregularity of EEG in AD directly reflects the anatomical and functional deficit of the cerebral cortex damaged by the disease. Fourier and time-frequency maps have been drawn to measure the changes in spectral power; decline in coherence and shifts of spectral power to lower frequencies are the features of EEG abnormalities in AD [22,23]. Moreover, statistical dependencies of spontaneous EEG signals recorded from many channels are usually lower in MCI and AD cases than in age-matched control subjects, and the correlation dimension (D2), a measure of dimensional complexity, is lower in AD patients than in control subjects.

Fig. 2 Normal and AD EEG

Accuracy of 70-85% has been attained by using EEG to distinguish AD from patients with senile depression [22]. Visual markers of EEGs are identified by well-qualified neurologists and epileptologists, yet EEGs can include markers that are invisible to the neurologists' eyes. However, EEG records electrical signals from the brain along with many unwanted surplus signals, including interference from electronic equipment, EMG signals induced by muscular activity, and ocular artifacts caused by eye movements such as blinking. Those unwanted signals affect the analysis of EEG and can lead to wrong inferences [23]. As a result, a main problem in biomedical and clinical applications of EEG is artifact removal. These artifacts occasionally imitate EEG signals and overlay them, resulting in distortion that makes analysis impossible; indeed, EEG is one of the noisiest biosignals in clinical practice [24]. If regions with artifacts are simply discarded, significant information is lost, resulting in misdiagnosis. Artifacts ought therefore to be removed or attenuated to ensure precise analysis and prediction. However, these effects are not always obvious: there tends to be large variability among AD patients. More studies and recent analyses have examined how to improve the sensitivity of EEG for identifying AD. In sum, AD affects the nonlinearity of the EEG signal, making it more regular and predictable [22].

COMPARATIVE ANALYSIS: CWT VS. EMD ON AD EEG

A. Complex Wavelet Transform on AD EEG

The EEG is separated into different frequency scales by the multi-resolution characteristics of the wavelet, so diagnosis of AD at lower frequency scales becomes achievable. Applying the Wavelet Transform (WT) is likewise a compelling technique for representing different components of non-stationary signals, for example discontinuities, drifts and repeated patterns. In the literature, the Discrete WT (DWT) has been used as an instrument to inspect the presence of AD in EEG [15,25-28]. Ref. [25] used a translation-invariant WT strategy to denoise the EEG. Because of the pitfalls of the DWT, such as shift sensitivity, poor directionality and absence of phase information, the Complex Wavelet Transform (CWT), a more extensive method, can also be used to identify changes in the brain, and many researchers have contributed work on biomedical signal processing related to it. For early detection of AD by means of EEG recordings, Ref. [29] proposed a new methodology in which significant spatio-temporal components are extracted by a BSS algorithm and then transformed with the help of the CWT. Many more authors have applied these transforms to several types of signals [30-33]. In [23], CWT with Tsallis entropy (TE) based thresholding is used to detect AD from normal EEG; a small sketch of wavelet sub-band scoring with Tsallis entropy is given below.
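As a rough illustration of wavelet sub-band scoring (a minimal sketch assuming the PyWavelets package, with a plain DWT standing in for the complex wavelet transform; the toy signal, wavelet choice and entropic index q are assumptions):

```python
# Minimal sketch: decompose a toy EEG epoch with a discrete wavelet
# transform and score each sub-band with Tsallis entropy,
# S_q = (1 - sum(p_i^q)) / (q - 1), the quantity used for thresholding.
import numpy as np
import pywt  # assumes the PyWavelets package is installed

def tsallis_entropy(coeffs, q=2.0):
    p = np.abs(coeffs) ** 2
    p = p / p.sum()                     # normalize energies to a distribution
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

fs = 128                                # assumed sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)  # toy epoch

# A 4-level DWT roughly separates delta/theta/alpha/beta bands at fs = 128 Hz
for name, c in zip(["A4", "D4", "D3", "D2", "D1"],
                   pywt.wavedec(eeg, "db4", level=4)):
    print(name, round(tsallis_entropy(c), 4))
```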

B. Empirical Mode Decomposition on AD EEG

The intrinsic limitations of the wavelet approach are overcome by EMD. EMD decomposes a signal into a finite number of IMFs with well-defined instantaneous frequency (IF), which paves the way for analyzing nonlinear and non-stationary signals such as EEG [34,35]. Rather than the convolution used in wavelets, EMD applies differentiation for frequency derivation. EEG is a non-stationary signal, so EMD, with its suitability for non-stationary data, can be used to analyze it for diagnosing AD. The cortical activations in AD have been quantified with the help of EMD [36], and the connectivity alterations caused by AD and MCI have been studied in the MEG [37], from which the brain rhythms are also extracted. Consequently, several more works diagnosing AD have been carried out with EEG, and further work has been done using Empirical Mode Decomposition (EMD) [38-40]. An EMD-with-TE-based method was used to detect AD from normal EEG signals [40]. Thus, by using these CAD tools, probable cases of AD can be detected easily. A minimal decomposition sketch follows.
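A minimal decomposition sketch (assuming the third-party PyEMD package; the two-tone toy signal is an assumption, not data from the paper):

```python
# Sketch: decompose a toy non-stationary signal into intrinsic mode
# functions (IMFs). In the TE-EMD approach, such IMFs would then be
# thresholded using Tsallis entropy.
import numpy as np
from PyEMD import EMD  # assumes the PyEMD package is installed

fs = 128
t = np.arange(0, 4, 1 / fs)
# Toy "EEG": a theta-like tone plus an alpha-like tone and noise
signal = (np.sin(2 * np.pi * 6 * t)
          + 0.6 * np.sin(2 * np.pi * 11 * t)
          + 0.2 * np.random.randn(t.size))

imfs = EMD().emd(signal)        # each row is one IMF, from fast to slow
print("number of IMFs:", imfs.shape[0])
```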
The work on CWT with TE [23] and EMD with TE [40] for detecting AD is compared and analyzed here. The respective plots of the TE-CWT and TE-EMD approaches for AD and normal cases are shown in Fig. 3 and Fig. 4, and they show that TE-CWT obtains better results than TE-EMD. A PSNR value of about 24.41 and an RMSE value of about 0.71 are obtained for AD with the TE-EMD approach; similarly, PSNR and RMSE values of about 16.54 and 0.72 are obtained for normal cases with TE-EMD. A confusion matrix is used to bring out the performance metrics of both methods and is shown in Fig. 5. The TE threshold method on CWT gives the higher classification rate, with 10 normal cases and 9 AD cases correctly classified, i.e. 19 of the 20 subjects, an accuracy of about 95%. For EMD with TE, 6 normal cases are correctly classified and 4 are misclassified as AD, while 10 AD cases are correctly classified, giving 16 of 20 correct and an accuracy rate of 80%.

Fig. 3 PSNR & RMSE plots of AD EEG for CWT and EMD

Fig. 4 PSNR & RMSE plots of Normal EEG for CWT and EMD


Fig. 5 Confusion matrix of CWT and EMD with TE

CONCLUSION
The effect of AD on the human brain and its consequences for EEG signals have been discussed, and it has been shown that detecting AD at the pre-clinical stage would reduce the mortality rate. EEG signals are noisy and non-stationary, so the information required for diagnosis is offered by analysis in both time and frequency. A comparative analysis of the most significant detection techniques, CWT and EMD, reveals that analyzing the nonlinear EEG signal in the time and frequency domains provides accuracy rates of 95% and 80%, respectively. Thus CAD helps radiologists in diagnosing and treating this dreadful disease in a prominent way.

REFERENCES
[1] Neuron system health 2011. http://www.livestrong.com/article/146378-a-shrinking-hippocampus-alzheimers/#ixzz24WubhZEW
[2] Alzheimer's Association. Available from: <http://www.alz.org/braintour>
[3] Bethune, K 2010, 'Thesis: Diagnosis and Treatment of Alzheimer's Disease: Current Challenges'.
[4] Alzheimer's Association 2017, 'Alzheimer's Disease Facts and Figures', Alzheimers Dement 2017, vol. 13, pp. 325-373. Available from <http://www.alz.org/documents_custom/2017-facts-and-figures.pdf>
[5] Alzheimer's society, 2014. Available from <https://www.alzheimers.org.uk/info/20025/policy_and_influencing/251/dementia_uk>
[6] The dementia India report 2010: <https://www.mhinnovation.net/sites/default/files/downloads/innovation/reports/Dementia-India-Report.pdf>
[7] Arimura, H, Magome, T, Yamashita, Y & Yamamoto, D 2009, ‘Computer-aided diagnosis systems for brain diseases in magnetic resonance
images’, Algorithms, vol. 2, no. 3, pp. 925-952.
[8] Ferreira, LK & Busatto, GF 2011, ‘Neuroimaging in Alzheimer's disease: current role in clinical practice and potential future applications’, Clinics, vol. 66, pp.
19-24.
[9] Frisoni, GB, Testa, C, Zorzan, A, Sabattoli, F, Beltramello, A, Soininen, H & Laakso, MP 2002, ‘Detection of grey matter loss in mild Alzheimer's disease with
voxel based morphometry’, Journal of Neurology, Neurosurgery & Psychiatry, vol. 73, no. 6, pp. 657-664.
[10] Thompson, PM, Hayashi, KM, De Zubicaray, GI, Janke, AL, Rose, SE, Semple, J, Hong, MS, Herman, DH, Gravano, D, Doddrell, DM & Toga, AW 2004,
‘Mapping hippocampal and ventricular change in Alzheimer disease’, Neuroimage, vol. 22, no. 4, pp. 1754-1766.
[11] Thompson, PM, Hayashi, KM, De Zubicaray, G, Janke, AL, Rose, SE, Semple, J, Herman, D, Hong, MS, Dittmer, SS, Doddrell, DM & Toga, AW 2003,
‘Dynamics of gray matter loss in Alzheimer's disease’, Journal of neuroscience, vol. 23, no. 3, pp. 994-1005.
[12] Grossi, E, Buscema, MP, Snowdon, D & Antuono, P 2007, ‘Neuropathological findings processed by artificial neural networks (ANNs) can perfectly distinguish
Alzheimer's patients from controls in the Nun Study’, BMC neurology, vol. 7, no. 1, p. 15.
[13] Manion, RV 2004, ‘Recognition of Alzheimer's disease using quantitative electroencephalography’ (Doctoral dissertation, Texas Tech University).
[14] Lehmann, C, Koenig, T, Jelic, V, Prichep, L, John, RE, Wahlund, L, Dodge, Y & Dierks, T 2007, ‘Application and comparison of classification algorithms for
recognition of Alzheimer's disease in electrical brain activity (EEG)’, Journal of neuroscience methods, vol. 161, no. 2, pp. 342-350.
[15] Yagneswaran, S, Baker, M & Petrosian, A 2002, ‘Power frequency and wavelet characteristics in differentiating between normal and Alzheimer EEG’,
In Engineering in Medicine and Biology, 2002. 24th Annual Conference and the Annual Fall Meeting of the Biomedical Engineering Society EMBS/BMES
Conference’, Proceedings of the Second Joint, IEEE, vol. 1, pp. 46-47.
[16] Wolz, R, Julkunen, V, Koikkalainen, J, Niskanen, E, Zhang, DP, Rueckert, D, Soininen, H & Lötjönen, J, for the Alzheimer's Disease Neuroimaging Initiative, 2011, 'Multi-method analysis of MRI images in early diagnostics of Alzheimer's disease', PloS one, vol. 6, no. 10, p. e25446.
[17] Padilla, P., López, M., Górriz, J. M., Ramirez, J., Salas-Gonzalez, D., & Alvarez, I. (2011). NMF-SVM based CAD tool applied to functional brain images for
the diagnosis of Alzheimer's disease. IEEE Transactions on medical imaging, 31(2), 207-216.
[18] Subasi, A & Ercelebi, E 2005, ‘Classification of EEG signals using neural network and logistic regression’, Computer methods and programs in
biomedicine, vol. 78, no. 2, pp. 87-99.
[19] Petrosian, A, Prokhorov, D, Homan, R, Dasheiff, R & Wunsch, D 2000, ‘Recurrent neural network based prediction of epileptic seizures in intra-and
extracranial EEG’, Neurocomputing, vol. 30, no. 1, pp. 201-218.
[20] Elgendi, M, Vialatte, F, Cichocki, A, Latchoumane, C, Jeong, J & Dauwels, J 2011, ‘Optimization of EEG frequency bands for improved diagnosis of
Alzheimer disease’, In Engineering in Medicine and Biology Society, EMBC, 2011 Annual International Conference of the IEEE, pp. 6087-6091.
[21] Dauwels, J, Vialatte, F, Musha, T & Cichocki, A 2010, ‘A comparative study of synchrony measures for the early diagnosis of Alzheimer's disease based on
EEG’, NeuroImage, vol. 49, no. 1, pp. 668-693.
[22] Jeong, J 2004, ‘EEG dynamics in patients with Alzheimer's disease’, Clinical neurophysiology, vol. 115, no. 7, pp. 1490-1505.
[23]Torrents‐Barrena, J., Lazar, P., Jayapathy, R., Rathnam, M. R., Mohandhas, B., & Puig, D. (2015). Complex wavelet algorithm for computer‐aided diagnosis of
Alzheimer's disease. Electronics Letters, 51(20), 1566-1568.
[24] Celka, P, Le, KN & Cutmore, TR 2008, ‘Noise reduction in rhythmic and multitrial biosignals with applications to event-related potentials’, IEEE Transactions
on Biomedical Engineering, vol. 55, no. 7, pp. 1809-1821.
[25] Walters Williams, J & Li, Y 2011, ‘Using invariant translation to denoise electroencephalogram signals’, American Journal of Applied Sciences, vol. 8, no. 11,
pp. 1122-1130.
[26] Han, M, Liu, Y, Xi, J & Guo, W 2007, ‘Noise smoothing for nonlinear time series using wavelet soft threshold’, IEEE signal processing letters, vol. 14, no. 1, pp.
62-65.
[27] Prinza, L., et al. "Denoising performance of complex wavelet transform with Shannon entropy and its impact on Alzheimer disease EEG classification using
neural network." Journal of Medical Imaging and Health Informatics 4.2 (2014): 186-196.
[28]Salwani, MD & Jasmy, Y 2005, ‘November. Comparison of few wavelets to filter ocular artifacts in EEG using lifting wavelet transfor’, In TENCON IEEE
Region, vol. 10, pp. 1-6.
[29] Vialatte, F, Cichocki, A, Dreyfus, G, Musha, T, Rutkowski, TM & Gervais, R 2005, ‘Blind source separation and sparse bump modelling of time frequency
representation of eeg signals: New tools for early detection of alzheimer’s disease’, 2005 IEEE Workshop on Machine Learning for Signal Processing, IEEE, pp. 27-
32.
[30] Rioul, O & Vetterli, M 1991, ‘Wavelets and signal processing’, IEEE signal processing magazine, vol. 8, no. 4, pp. 14-38.
[31] Kingsbury, N 2001, ‘Complex wavelets for shift invariant analysis and filtering of signals’, Applied and computational harmonic analysis,
vol. 10, no. 3, pp. 234-253.
[32] Sardy, S 2000, ‘Minimax threshold for denoising complex signals with waveshrink’, IEEE Transactions on Signal Processing, vol. 48, no. 4, pp. 1023-1028.
[33] Khare, A, Tiwary, US, Pedrycz, W & Jeon, M 2010, ‘Multilevel adaptive thresholding and shrinkage technique for denoising using Daubechies complex
wavelet transform’, The Imaging Science Journal, vol. 58, no. 6, pp. 340-358.
[34] Li, S, Zhou, W, Yuan, Q, Geng, S & Cai, D 2013, ‘Feature extraction and recognition of ictal EEG using EMD and SVM’, Computers in biology and
medicine, vol. 43, no. 7, pp. 807-816.
[35] Bajaj, V & Pachori, RB 2012, ‘Classification of seizure and nonseizure EEG signals using empirical mode decomposition’, IEEE Transactions on Information Technology
in Biomedicine, vol. 16, no. 6, pp. 1135-1142.
[36] Tsai, PH, Lin, C, Tsao, J, Lin, PF, Wang, PC, Huang, NE & Lo, MT 2012, 'Empirical mode decomposition based detrended sample entropy in electroencephalography for Alzheimer's disease', Journal of neuroscience methods, vol. 210, no. 2, pp. 230-237.
[37] Escudero, J, Sanei, S, Jarchi, D, Abásolo, D & Hornero, R 2011, ‘Regional coherence evaluation in mild cognitive impairment and Alzheimer's disease based on
adaptively extracted magnetoencephalogram rhythms’, Physiological measurement, vol. 32, no. 8, p. 1163.
[38] Boudraa, AO & Cexus, JC 2007, ‘EMD-based signal filtering’, IEEE Transactions on Instrumentation and Measurement, vol. 56, no. 6,
pp. 2196-2202.
[39] Kopsinis, Y & McLaughlin, S 2009, ‘Development of EMD-based denoising methods inspired by wavelet thresholding’, IEEE Transactions on signal
Processing, vol. 57, no. 4, pp. 1351-1362.
[40] Lazar, P., Jayapathy, R., Torrents-Barrena, J., Mary Linda, M., Mol, B., Mohanalin, J., & Puig, D. (2018). Improving the performance of empirical mode
decomposition via Tsallis entropy: Application to Alzheimer EEG analysis. Bio-medical materials and engineering, 29(5), 551-566.


JOINT ZERO-FORCING SUBOPTIMAL POWER ALLOCATION FOR COOPERATIVE WIRELESS NETWORKS
1Priya L R, 2Ignisha Rajathi G and 3Allwyn Kingsly Gladston J
1Francis Xavier Engineering College
2Sri Krishna College of Engineering and Technology, Coimbatore
3SCAD College of Engineering and Technology

Abstract: We consider a cellular system scenario in which the QoS of multiple data streams originating from a base station (BS) and targeted to multiple mobile stations (MSs) is guaranteed by using pre-installed relay stations (RSs) with MIMO antennas. The multiple users conduct bidirectional transmission through cooperative relay networks. We consider a two-hop relay link in which orthogonal frequency division multiplexing (OFDM) is used on both hops. Our objective is to guarantee quality of service (QoS), in terms of a predefined signal-to-noise ratio at the users, within the transmit power budgets at the BS and RSs while minimizing the total transmit power. We propose a suboptimal solution for this case because it saves the second time slot (or frequency band) by avoiding relay transmission, thereby contributing to increased throughput while guaranteeing the QoS.
Keywords: Quality of Service, Power allocation, OFDM, Relay

INTRODUCTION
Cooperative communication may be used to enhance capacity, improve reliability, or increase the coverage of a wireless network. It may be used in the uplink or the downlink. In the communication between a base station and a mobile, the cooperating entity may be another base station, another mobile, or a dedicated (often stationary) wireless relay node. The cooperating entity may have various amounts of information about the source data and channel state information, and cooperation may happen in the physical layer, data link layer, network layer, transport layer, or even higher layers.
The systematic study of relaying and cooperation in the context of digital communication goes back to the work of Van der Meulen [1] and Cover and El Gamal [2]. The basic relay channel of [1,2] consists of a source, a destination, and a relay node. The system models in [1,2] are either discrete memoryless channels (DMC) or continuous-valued channels characterized by constant (nonrandom) links and additive white Gaussian noise.
Due to the increase of users in wireless cellular networks, the demand for resources has also increased, so cellular networks in particular have to be designed and deployed under unavoidable constraints on limited radio resources such as bandwidth and transmit power [3],[4].
Two major classes of dynamic resource allocation schemes have been reported in the literature [5]: margin adaptive (MA) schemes [5-6] and rate adaptive (RA) schemes [7]. The optimization problem in MA allocation schemes is formulated with the objective of minimizing the total transmit power while providing each user with its required QoS in terms of data rate and BER. The objective of the RA scheme is to maximize the total data rate of the system under a constraint on the total transmit power.

LITERATURE REVIEW

There has been extensive work on resource allocation for cellular MIMO systems [8]-[13], most of it concentrating on maximizing the channel capacity. The authors of [12] consider an OFDM-based network and maximize the sum rate by including a subcarrier pairing technique. In [13] a pre-coder is designed to allocate power between the two hops of the relay network. There have been a few recent works considering the effect of the direct link in MIMO relay based systems [14]-[16]. The authors in [14] and [15] proposed minimum-mean-square-error (MMSE) based joint pre-coder designs for such systems, and in [16] a singular value decomposition (SVD) based iterative joint pre-coder design for capacity maximization was proposed.
In [8]-[16], amplify-and-forward (non-regenerative) cooperative communication is performed between source and relay, in which the relay nodes apply linear processing before retransmission; this is becoming of practical interest due to its low complexity. In this work we propose an optimal power allocation algorithm for allocating power between source and relay with guaranteed QoS. We consider the direct link between the source and the destination to analyze the power requirements, and then allocate power to the relay and the sources under per-node power constraints. In [13] a novel pre-coder was proposed to improve network performance.
In our work we consider OFDM-based two-hop relay networks. The amplify-and-forward method is used because of its low complexity, and we formulate the optimization problem of power allocation at the relay and source for the two-hop wireless network.
The remaining part of this paper is organized as follows. Section III presents the system model and describes relay-based transmission in a MIMO cellular system. Section IV describes the joint zero-forcing suboptimal power allocation algorithm. Section V presents the simulation setup and the results of the proposed algorithm, and finally Section VI gives the conclusion.

SYSTEM MODEL
The system consists of MB base station antennas and MR relay antennas. We assume that an even number of relays is present in the network, and we consider the downlink transmission with N data subcarriers. The relay-to-destination and source-to-relay channels experience Rayleigh fading and are affected by zero-mean complex Gaussian white noise. The relaying scenario works as follows. The base station (BS) first transmits the information symbols to the relay station (RS) and to the destination; the transmission from BS to destination is called the direct link (or direct hop) transmission. In the second phase, the relay station forwards the received symbols to the destination users. We assume that K users are available in the system. Upon receiving the information symbols from the BS and RS, optimal combining can be performed at each user; in this case maximum ratio combining is used. The transmitted symbol vector, concatenating the symbols of the K users, can be written as

Xs = [ S(1)(0) ... S(1)(K-1), ..., S(K)(0) ... S(K)(K-1) ] ------------------------(1)

where the first block belongs to the 1st user and the last block to the Kth user.

Fig 1: System model (base station, relay and user connected via the channels hSD, hSR and hRD, with feedback links from the user and relay)


Cooperation requires two time slots to send a message to the destination D. In the first time slot, the base station transmits its message XS to both R and D. The received signals at R and D can be expressed as

YR = HSR XS + n1 -------------------------(2)
YD = HSD XS + n2 -------------------------(3)

respectively, where HSR and HSD are the complex channel coefficients of the base-station-to-relay and source-to-destination links, and n1 and n2 denote independent and identically distributed circularly symmetric AWGN with zero mean and variance N0.
The transmitted signal vector at the base station (BS) is represented as

xB = WB s --------------------------------- (4)

where s is the vector of modulated symbols for the K mobile stations and WB is the linear precoding matrix of the base station, an MB x K matrix with complex elements. The transmit power of the base station is then

PB = E[||xB||^2] -------------------------------(5)

Similarly, the received signal vector at the relay station (RS) is obtained from (2), and the relay transmit power is

PR = E[||xR||^2] --------------------------------(6)

Eqs. (5) and (6) hold when the power is allotted equally between the source and the relay station. Using the power scaling matrix WB, eqs. (2) and (3) become

YBR = HSR WB xs + n1 -----------------------(7)
YRD = HSR WB xs + hRD WR XS + n' ---------(8)

When HSR WB ≠ 0, the received vector YR carries the information intended for the destination (mobile station); in order to extract the information at the destination we apply another combining matrix WR. The signal retransmitted by the relay station is

xR = WR yBR ------------------------------- (9)

Here WR ∈ C^(MR x MR) is the relay station precoding matrix. The channel between the RS and the kth MS can be represented by a vector HRM,k ∈ C^(1 x MR); therefore the received signal at the kth MS due to the RS transmission is

yRM,k = HRM,k xR + n2 ------------------(10)

From [13],

PB = E[||xB||^2] = tr(WB WB^H) ---------(11)
PR = tr(WR (HBR WB WB^H HBR^H + I) WR^H) -------(12)

JOINT OPTIMAL ZERO FORCING POWER ALLOCATION ALGORITHM


In this method the channel matrices and the linear precoding matrices need to have non-zero diagonal elements; this is the zero-forcing criterion. WB, WR, HSD, HBR and HRD should have non-zero elements on the diagonal. We assume that MB = MR = K, which is a reasonable assumption when RSs are pre-installed at fixed locations by the operator. Generally, each RS is responsible for serving a group of MSs based on the implemented relay selection strategy.
Assuming knowledge of the channel matrices HSD, HBR and HRD to be available at the BS and RSs, and the matrices to be nonsingular, joint ZF can be achieved by choosing the linear precoding matrices WB and WR as

WB = HSD^-1 Λ ----------------(13)
WR = HRS^-1 Q^H -----------------(14)

where Λ and Q are diagonal matrices with

Λ Λ^H = diag{λ1, λ2, λ3, ..., λK} ------------(15)
Q Q^H = diag{q1, q2, q3, ..., qK} ------------(16)

Using Λ, Q and the channel matrices, the linear precoding matrices are calculated via the SVD. The SNR is then calculated using the maximal ratio combining (MRC) technique: MRC finds the high-SNR channel among all the channels by calculating the SNR of each branch and weighting it with a normalizing weight ai. The resulting SNR of user k is

γk = λk + qk λk / (qk ||hkT||^2 + 1) ------------------(17)

Next, the QoS-guaranteed SNR targets are collected as

γmin = [γ1,min, γ2,min, ..., γK,min]^T ---------(18)

The algorithm minimizes the total power consumption at the BS and RS while guaranteeing QoS for each MS and respecting the power budgets of both the BS and RS, say PS,max and PR,max respectively. The total power is therefore

PT = PS + PR ------------------------(19)
subject to PS ≤ PS,max --------------------(20)
and PR ≤ PR,max --------------------(21)

Here PT is the total transmit power. PS and PR follow from eqs. (15) and (16); from [13], they can be rewritten as

PS = Σk λk ||gSD,k||^2 ----------------------(22)
PR = Σk λk ||gRD,k||^2 + Σ (aij + qij) -----------(23)

where gRD,k is the kth column of HRD^-1. From the above equations, the optimization depends only on the λ and q parameters. If PS goes beyond PS,max, i.e. PS ≥ PS,max, then new values λk,new and qk,new are calculated from

λk ≥ γk,min ||hkT||^2 / (||hkT||^2 + 1) --------------------(24)
qk = (γk,min - λk) / (λk (||hkT||^2 + 1) - γk,min ||hkT||^2) -------------(25)

From eqs. (23)-(25), the power optimization depends on the linear precoding matrices and on λ and q; minimizing all of the above improves the power allocation:

minimize over (λ, q):  Σi (ai λi^2 + qi^2) -----------(26)

Algorithm

1: Design the system model with γmin = 0 and PB,max.
2: Solve eq. (17) to get γ.
3: Calculate PT using (19).
4: If step 3 is feasible, go to step 1.
5: Otherwise, calculate gSD,k and gRD,k.
6: Evaluate eqs. (24) and (25), then
7: minimize λ and q using eqs. (25) and (26).
8: Substitute the result of step 7 into eqs. (22) and (23) to get the optimal power.

A rough numerical sketch of the precoding and loading steps follows.
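The following sketch illustrates the precoder construction of eqs. (13)-(17) and (24)-(25); the channel realizations, SNR targets and the lambda initialization are invented assumptions, and this is not the authors' simulation:

```python
# Sketch of joint ZF precoding with QoS-driven loading: pick lambda to
# satisfy the bound in eq. (24), derive q from eq. (25), build the
# precoders of eqs. (13)-(14), and check the SNR of eq. (17).
import numpy as np

K = 4
rng = np.random.default_rng(1)
def channel():
    return (rng.standard_normal((K, K))
            + 1j * rng.standard_normal((K, K))) / np.sqrt(2)
H_SD, H_RS = channel(), channel()     # direct-hop and relay-hop channels

gamma_min = np.full(K, 2.0)           # per-user QoS SNR targets (assumed)
h = np.ones(K)                        # stand-in for the ||h_k^T||^2 terms
lam = gamma_min * h / (h + 1) + 0.1   # satisfies the bound in eq. (24)
q = (gamma_min - lam) / (lam * (h + 1) - gamma_min * h)   # eq. (25)

W_B = np.linalg.inv(H_SD) @ np.diag(np.sqrt(lam))         # eq. (13)
W_R = np.linalg.inv(H_RS) @ np.diag(np.sqrt(q))           # eq. (14)

snr = lam + q * lam / (q * h + 1)                         # eq. (17)
P_S = np.trace(W_B @ W_B.conj().T).real                   # eq. (11)
print("per-user SNR:", np.round(snr, 3), "| BS power:", round(P_S, 3))
```

With this choice of lambda and q, the computed per-user SNR lands exactly on the gamma_min targets, illustrating how eq. (25) enforces the QoS constraint.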

RESULTS AND DISCUSSION

Fig 2: Performance comparison of transmit power (Tx power in dB versus distance, K = 4; proposed algorithm vs. joint ZF power allocation)
The performance comparison of transmit power is given in Fig. 2, relating transmission power to distance. The joint ZF power allocation traces a slightly lower curve than the proposed algorithmic implementation; overall, the proposed work outperforms the comparison scheme, as shown in the figure. The comparison of SNR values is shown in Fig. 3 and favours the proposed work over the joint ZF power allocation; the figure plots SNR against distance for K = 4.

Fig 3: Comparison of SNR (SNR x10^-3 versus distance, K = 4; proposed algorithm vs. joint ZF power allocation)

CONCLUSION

The two-hop relay link, where orthogonal frequency division multiplexing is employed on both hops, has yielded better results in the cellular system. Guaranteed QoS is obtained in terms of a predefined signal-to-noise ratio within the transmit power budgets at the BS and RSs while minimizing the total transmit power. The proposed solution saves the second time slot (or frequency band) by avoiding relay transmission, thereby contributing to increased throughput while guaranteeing the QoS. This enhancement of cooperative communication can be used to increase capacity, improve reliability, or extend the coverage of a wireless network.

References:
[1] E. C. Van der Meulen, "Three-terminal communication channels," Adv. Appl. Probab., vol. 3, 1971, pp. 120-154.
[2] T. Cover and A. El Gamal, "Capacity theorems for the relay channel," IEEE Trans. Inform. Theory, vol. 25, 1979, pp. 572-584.
[3] J. N. Laneman, Cooperative diversity in wireless networks: algorithms and architectures, Ph.D. Thesis, Massachusetts Institute
of Technology, Cambridge,MA, 2002.
[4]S. Mallick, P. Kaligineedi, M. M. Rashid, and V. K. Bhargava, “Radio resource optimization in cooperative wireless
communication networks,”
Chapter 8, in Cooperative Cellular Wireless Networks. Cambridge University Press, 2011.
[5] S. Sadr, A. Anpalagan, and K. Raahemifar, “Radio resource allocation algorithms for the downlink of multiuser OFDM
communication systems,”IEEE Commun. Surveys and Tutorials, vol. 11(3), pp. 92–106, 3rd Quarter 2009.

[6] L. Xiaowen and Z. Jinkang, “An adaptive subcarrier allocation algorithm for multiuser OFDM system,” in Proc. of IEEE
Vehicular Technology Conference(VTC’03), vol. 3, pp. 1502–1506, Oct. 2003. IEEE, 2003.
[7] Z. Shen, J. G. Andrews, and B. L. Evans, “Adaptive resource allocation in multiuser OFDM systems with proportional rate
constraints,” IEEE Trans.Wireless Commun., vol. 4, pp. 2726–2737, Nov. 2005.
[8] S. Feng, M. Wang, and L. Tingting, "Leakage-based precoding with power allocation for multicellular multiuser MIMO downlink," Electronics Letters, vol. 46, no. 24, pp. 1629-1630, 2010.
[9] E. Lo, P. Chan, V. Lau, R. Cheng, K. Letaief, R. Murch, and W. Mow, “Adaptive resource allocation and capacity comparison
of downlink multiuser MIMO-MC-CDMA and MIMO-OFDMA,” IEEE Trans. Wireless Commun., vol. 6, pp. 1083–93, 2007.
[10] U. Phuyal, A. Punchihewa, V. K. Bhargava, and C. Despins, “Power loading for multicarrier cognitive radio with MIMO
antennas,” in Proc. IEEE WCNC’09, Apr. 2009, pp. 1–5.
[11] E. Calvo, J. Vidal, and J. Fonollosa, “Optimal resource allocation in relay-assisted cellular networks with partial CSI,” IEEE
Trans. Signal Process., vol. 57, no. 7, pp. 2809–2823, Jul. 2009.
[12]Yuan Liu,Meixia Tao,” Optimal channel and relay assignment in OFDM-Based Multi-relay Multi-pair two way communication
networks” IEEE Trans. Connunications,vol.60, No2,pp. 317-321,Feb 2012.
[13] Umesh Phuyal Satish C. Jha, Vijay K. Bhargava,” Joint Zero-Forcing Based Precoder Design for QoS-Aware Power Allocation
in MIMO Cooperative Cellular Network”
IEEE Journal On Selected Areas In Communications, Vol. 30, No. 2,pp.350-358, February 2012.
[14] F.-S. Tseng, W.-R. Wu, and J.-Y. Wu, “Joint source/relay precoder design in nonregenerative cooperative systems using an
MMSE criterion,”IEEE Trans. Wireless Commun., vol. 8, no. 10, pp. 4928–4933, Oct.2009.
[15] F.-S. Tseng and W.-R. Wu, “Linear MMSE transceiver design inamplify-and-forward MIMO relay systems,” IEEE Trans. Veh.
Technol.,vol. 59, no. 2, pp. 754–765, Feb. 2010.
[16] R. Mo and Y. Chew, “Precoder design for non-regenerative MIMO relaysystems,” IEEE Trans. Wireless Commun., vol. 8, no.
10, pp. 5041–5049,Oct. 2009


SMART HOME SECURITY USING LabVIEW

M. Lavanya1, M. Arivalagan2, P. Hosanna Princye3, S. Sivasubramanian4

1Assistant Professor, Electrical and Electronics Engineering, Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences, Saveetha University, Chennai, India. laviraju88@gmail.com
2Assistant Professor, Electronics and Instrumentation Engineering, Saveetha Engineering College, Chennai, India. arivu.mit@gmail.com
3Associate Professor, Electronics and Communication Engineering, SEA College of Engineering and Technology, Bangalore, India. hprincye@gmail.com
4Professor, Electrical and Electronics Engineering, Karpagam College of Engineering, Coimbatore, India. siva.ace@gmail.com

ABSTRACT
A smart home uses intelligent information technology to monitor the house environment and thereby achieve a more comfortable and safer life; smart home security is such a system, developed to run automatically. In this paper we present a home security system based on LabVIEW. The system is built entirely in the LabVIEW software and acts as the security protection of the home. It can monitor different kinds of hazards such as temperature, gas, smoke, and fire, and it also represents a review of home security. This paper presents the software implementation of a control system for home security using LabVIEW.
Keywords: Monitor, LabVIEW, home security, modernization, automatic.

1. INTRODUCTION
Smart home security is a home technology that gathers information about the home and communicates it; this technology can be used to monitor the condition of the home and to warn about it. Smart home expertise is one realization of home-automation ideas using a specific set of technologies, and the smart home automation can be interfaced through a computer interface. This paper presents home security using different methods; the smart home technology has both a computer interface and a manual interface.
Figure 1 shows the computer, running the LabVIEW software, that is the main controller unit for all systems in the house. It collects the information from the house sensors, processes the data for the different subsystems, transfers control signals to the house systems, and switches the output devices. In a hardware implementation, remote-control interfacing also makes it possible to control the smart house; here, the software implementation of home security is presented.


Fig 1 Block diagram of smart home security


1.1 LabVIEW
NI LabVIEW software is used across a wide variety of applications and industries. LabVIEW is a graphical development environment for building custom applications that interact with real-world data or signals in fields such as science and engineering. The net consequence of using a tool such as LabVIEW is that sophisticated, high-quality projects can be completed in less time and with fewer people.
Efficiency is thus the key benefit, but that is a broad and general account: LabVIEW is unique because it makes this wide variety of tools available in a single environment, ensuring that compatibility is as simple as drawing wires between functions. LabVIEW itself is a software development environment that contains numerous such tools.
The same initialize-configure-read/write-close pattern is reused across a wide variety of hardware devices, and data is always returned in a format compatible with the analysis and reporting functions. In the rare case that a LabVIEW driver does not already exist, tools are provided to create our own.

2. DESIGN OF SMART HOME


The design of a smart home can be classified into many methods; here the better method of home security, using the LabVIEW software, is presented.

2.1 ALARM SYSTEM


The alarm system covers the security of the home everywhere, at the windows and doors, where sensors are fixed to sense the signals. It will send a message to the users when any harm occurs. Fig 2 shows the block diagram of the alarm system.


Fig 2 Block diagram of alarm system


Figure 3 shows the front panel of the alarm system with its indicators at various levels; from here the alert message is sent to the user. When the switch is on, the alarm is armed, and the indicators remain off as long as there is no disturbance to security.

Fig 3 Front panel of alarm system
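To make the front-panel behaviour concrete, the sketch below is a minimal, hypothetical Python rendering of the same logic (the actual system is a LabVIEW block diagram, and the zone names are illustrative): the alarm and the per-zone indicators come on only when the system is armed and a door or window sensor reports a disturbance.

```python
# Hypothetical Python rendering of the alarm front-panel logic; the real
# system is a LabVIEW block diagram and the zone names are illustrative.

def alarm_state(armed: bool, sensors: dict) -> dict:
    """Map zone sensor readings to indicator states and an overall alarm flag.

    `sensors` maps a zone name (e.g. 'front_door') to True when that
    door/window sensor detects a disturbance.
    """
    indicators = {zone: armed and hit for zone, hit in sensors.items()}
    return {"indicators": indicators, "alarm": any(indicators.values())}

# Armed system, only the front door disturbed: its indicator and the alarm turn on.
print(alarm_state(True, {"front_door": True, "back_door": False, "window": False}))
```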


Fig 4 Front panel when front door indicates


Figure 4 shows that when the front door sensor raises any signal or message, the alarm turns ON.

Fig 5 Front panel when all indicators are ON


Figure 5 shows that when all the sensors sense signals, every indicator is on and the alarm is ON.

2.2 TEMPERATURE SYSTEM


The basic element in the temperature system is reading the temperature from temperature sensors. When an input value is applied, the output from the sensor is shown with an LED. An LM35 or a wireless temperature indicator is used in the hardware implementation. The main advantage of the LM35 temperature sensor is that it is the easiest of all temperature sensors to use: it is an integrated circuit whose output voltage is proportional to the temperature in degrees Celsius, and the sensor itself takes care of non-linear effects. In hardware it is directly coupled to the DAQ, and LabVIEW receives the signal from the LM35 sensor as a variable analog value. A PWM system is used to control the heating and cooling of the system. Fig 6 shows the block diagram of the temperature control system, which consists of an input dial connected to the temperature system.

Fig 6 Block diagram of the temperature system
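Because the LM35 produces an output of 10 mV per degree Celsius, converting the DAQ's analog sample to a temperature is a single scaling step. The sketch below is a minimal illustration; `read_daq_voltage()` is a hypothetical stand-in for the actual DAQ driver call, not part of the system described above.

```python
# Minimal sketch: convert an LM35 sample (volts, from the DAQ) to Celsius.
# read_daq_voltage() is a hypothetical stand-in for the actual DAQ driver call.

LM35_VOLTS_PER_DEG_C = 0.010   # the LM35 outputs 10 mV per degree Celsius

def lm35_to_celsius(voltage: float) -> float:
    return voltage / LM35_VOLTS_PER_DEG_C

def read_daq_voltage() -> float:
    return 0.285               # placeholder sample: 285 mV corresponds to 28.5 deg C

print(lm35_to_celsius(read_daq_voltage()))  # 28.5
```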

Fig 7 Front panel of the temperature system, showing the output of the system when the temperature is at the medium level

Figure 7 shows the front panel of the temperature control system displayed at different temperature levels; at the medium level shown in the figure, the cooler and the heater are both in the off position.
Figure 8 represents the temperature at the low level, from 0 to 20 degrees Celsius. When the response is below 20 degrees Celsius, the LED lights up and the heater is turned on automatically by the LabVIEW software.


Fig 8 Temperature at low level


Here the blue colour indicates that the heater is on and the fan speed is low.
Figure 9 shows the LED indicating that the temperature is above 50 degrees Celsius, so the cooler turns on automatically. The red colour indicates that the cooler is ON, the green colour indicates that the heater is OFF, and the fan speed increases automatically.

Fig 9 Temperature at high level
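The threshold behaviour shown in Figures 7-9 can be summarized in a few lines of Python. The 20 °C and 50 °C thresholds are the ones stated in the text; the exact fan duty-cycle mapping is an assumption made for illustration.

```python
# Sketch of the threshold logic of Figures 7-9. The 20/50 deg C thresholds are
# from the text; the fan duty-cycle mapping is an illustrative assumption.

def climate_control(temp_c: float) -> dict:
    heater_on = temp_c < 20.0              # Figure 8: heater on at low temperature
    cooler_on = temp_c > 50.0              # Figure 9: cooler on at high temperature
    if heater_on:
        fan_duty = 0.2                     # low fan speed while heating
    elif cooler_on:
        # fan speed increases with temperature above the cooling threshold
        fan_duty = min(1.0, 0.5 + (temp_c - 50.0) / 50.0)
    else:
        fan_duty = 0.0                     # medium band (Figure 7): both off
    return {"heater": heater_on, "cooler": cooler_on, "fan_duty": fan_duty}

for t in (10.0, 35.0, 60.0):
    print(t, climate_control(t))
```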


2.3 TEMPERATURE, SMOKE AND LPG GAS INDICATION SYSTEM
Figure 10 shows the block diagram of the high-temperature, smoke, and LPG gas system. The block diagram consists of smoke, gas, and temperature sensor indicators with an alarm. With the help of this block diagram, fire accidents in the home can be overcome.


Fig 10 Block diagram of smoke, gas and temperature system

2.4 TEMPERATURE INDICATION SYSTEM

Fig 11 Temperature sensor


Figure 11 shows that when the temperature increases above the normal level, the temperature sensor senses it, the LED glows, and the alarm is activated.

2.5 SMOKE INDICATION SYSTEM


When a short circuit or any failure in the electrical appliances of the home occurs, a slow fire will start with smoke. In the smoke indication system of Figure 12, if such a case arises, the smoke sensor senses the smoke raised by the failure of the electrical system; the LED increases its luminosity and the alarm is activated. A photoelectric smoke detector or fire alarm detector can be used in this system.

Fig 12 Smoke Indicator

2.6 LPG GAS INDICATION SYSTEM

Fig 13 Gas indicator


For domestic purposes LPG is customarily used. In houses people often forget to turn off the gas regulator, and this leads to leakage of gas, from which fire hazards can occur. As shown in fig 13, the sensor senses the leaking LPG and indicates it with the LED and alarm. In the hardware implementation the MQ-6 LPG gas sensor is used for sensing the gas; it is suitable for detecting LPG, which is composed mostly of propane.

3. CONCLUSION
This paper has presented the importance of home security and has shown several methods for it, such as the alarm system, the temperature system, and the gas indication system. The work presented here is implemented in software only.


SENSORS FOR STRUCTURAL HEALTH MONITORING - AN EXPERIMENTAL EVALUATION

Pradeep Kumar P1, M. Beenamol2
1Department of Civil Engineering, Noorul Islam University, Thukkulai, Kanyakumari, Tamil Nadu, India.
2LBS College of Engineering, Kasaragod, India.

ABSTRACT

The performance of a civil engineering structure is greatly influenced by service conditions, ageing, the type of material used, and the structure's layout. Apart from performance, the important aspects of any structure are serviceability, safety, and reliability. It is therefore essential to employ a credible technology for monitoring the structure through complete examination and analysis. One globally accepted technology is Structural Health Monitoring (SHM), which is utilized in numerous applications. Progress in SHM helps raise the service life of the structure through a damage detection and analysis approach. Sensors are a crucial aspect of any SHM system. Structures generally fail owing to specific geometric characteristics and material damage that adversely affect their performance. SHM's primary objective is to notify the system in the early phase of damage initiation and prevent further spread of disaster through constant monitoring with structurally embedded sensors. SHM monitors the structure by means of displacement, strain estimation, impact, load, pH rate, crack appearance, vibration signatures, humidity, and crack size. This paper presents an experimental evaluation of two types of sensors widely used in most applications: fiber optic-based sensors and piezoceramic-based sensors. The future metrics and challenges in sensor advancement and SHM technology are also emphasized.

KEYWORDS: Structural Health Monitoring, Wireless Sensor Networks, Fiber Optic-Based Sensors, Piezoceramic-Based Sensors, Damage Detection, Future Trends.

INTRODUCTION

As a novel technology, SHM aims to assess structural efficiency and detect early-phase damage through testing and sensing of the structural response (Rainieri et al., 2011). A centralized framework is considered in the SHM system, beginning with damage identification, followed by location calibration, damage evaluation, and life estimation. Many researchers have reviewed the various techniques for SHM. SHM applications have effectively penetrated specific engineering industries and have already received significant recognition globally, particularly with the coming fourth industrial age. Utilizing sensors to monitor the health of structures is vital, since serious problems can occur if monitoring is hindered. To distinguish abnormal behavior from the normal behavior of sensors, durable sensors of low cost and low power consumption are required. Based on the implementation, various types of sensors are selected; an SHM system's performance relies critically on a reliable sensor-failure detection mechanism. The placement of sensors is a crucial consideration for improving precision while reducing the number of sensors deployed. Indeed, the testing of sensors has recently become a particular area of research in the SHM sector. The data rate of different types of sensors varies from minimal to extreme. Wireless sensor networks (WSNs) have been evolving in recent years, with an increasing focus on SHM deployments; to achieve the goals of WSNs in SHM, the system must be capable of communicating at a high data rate across each sensor. Figure 1 illustrates the process of achieving SHM using a wireless network.

In the fourth industrial age, designers and engineers are shifting further toward innovating smart habitats in which systems are digitized, learn, and make decisions. An SHM system includes sensors, platforms for data acquisition and transmission, and a database for active data management and health diagnosis. An effective SHM system should be capable of differentiating variations owing to natural weather fluctuations, such as changes in temperature, from changes caused by damage. Temperature is typically observed as the common primary determinant impacting structural response, since it influences a structure's stiffness and may also alter a structure's initial parameters (Peeters & De Roeck, 2001). High cost was the major disadvantage of conventional wire-based SHM systems, mainly due to sensors and cables. Relative to conventional wired SHM systems, wireless connectivity eliminates the necessity for wires and often means a huge decrease in price and greater ease of implementation. A WSN-based SHM system can obtain better monitoring content, dramatically raising the system's reliability and performance. A WSN is virtually a computer network made up of several small, intercommunicating computers carrying one or more sensors. Larger bandwidth often makes the system more precise. Table 1 shows the WSN specification for a certain SHM system. A specific objective for the modern sensor is to develop a sensor that can respond to external variables including temperature, chloride, and humidity; it must also react to various stress circumstances. Furthermore, SHM's applications at the level of damage detection are yet to be fully explored.

Table 1: Specifications of Wireless Sensor Networks Employed for SHM (Alves et al., 2017)

Data rate: 250 kbps
Radio: CC2420
Frequency band: 2.4 GHz
A/D converter resolution: 8-bit

Figure 1: Wireless SHM System Architecture

LITERATURE REVIEW

SHM technology is employed not merely for detecting structural failures but also for giving advance indication of physical damage. The early alerts given by an SHM program can be exploited to identify remedial strategies before the structural damage leads to failure. It looks very promising and has been accepted by the aviation industry as worthy of tracking the structural condition over the service lifetime (Ihn and Chang, 2004). SHM offers insight into performance and removes faults that contribute to unexpected situations, helping sustain a high degree of trustworthiness over the service life. The major benefits identified for the application of SHM in systems are the ability to avoid malfunction, to guarantee the structure's protection, and to avoid countless human casualties owing to incidents. Previously, nondestructive evaluation (NDE) was used for screening structures. The drawbacks of NDE are that it is not ideal for larger structures and demands sophisticated tools and expert labor. Experts established the SHM methodology to enhance the monitoring system by incorporating sensors within structures. In addition, the SHM application decreases downtime for monitoring relative to NDE; after a disaster, traditional NDE methods need considerable time to assess the health of the systems in the disaster area (Chang et al., 2004). Incorporating Wireless Sensor Network (WSN) software into SHM applications offers several advantages with respect to price, interoperability, fast implementation, and durability (Aygun and Gungor, 2011).

Nonetheless, health monitoring of massive-scale infrastructure systems remains a significant concern. For large-scale structures, standard examination using X-ray methods is costly and often inefficient. Alternatively, interest in detecting material failure using various low-power microelectronic sensors, miniaturization, and wireless virtualization has expanded (Otto et al., 2006). Yan et al. (2017) suggested a smart wireless sensor network based on smart aggregates as a realistic means of identifying large-scale cracks in concrete. While SHM systems furthermore provide technological benefits such as lowered maintenance expenses and prolonged service lifespan, present technologies may remain inadequate in characterizing faults, because intense circumstances may accelerate degradation and data on the failure features of special composite materials are inadequate. When a structure experiences multiple patterns of failure prior to ultimate fracture, such as polymer deformation, delamination, grain shifting, and crack formation and propagation, multiple sensors need to be deployed during loading so that particular failure patterns can be investigated individually. In addition, a remotely regulated device is helpful, as supervisors cannot always monitor the materials and structures in person (Yuan et al., 2008). A remotely operated SHM platform with different sensors is therefore suggested for determining the failure features of composite materials, specifically static and dynamic strains as well as the development of cracks.
1. CURRENT TRENDS IN SHM'S NEW SENSOR TECHNOLOGY

The application of fiber optic-based sensors (FOS) is an emerging technological advancement in SHM. Broadly, FOS devices operate on the intensity of light travelling along the optical fiber, which is modulated by changes in the core refractive index or by induced distortion signals; the Fiber Bragg Grating (FBG) is the most widely used variant. One of FOS's most significant benefits over other sensing systems is that they can be employed concurrently for both sensing and transmission. The FBG sensor is integrated into a fiber-reinforced polymer (FRP) composite to protect the fragile optical fiber against damage (Sun et al., 2008). Although deflection, velocity, acceleration, corrosion, and displacement implementations have also been investigated, FOS are used most extensively for strain assessment. FOS have often been employed to build distributed sensors for detecting strain and cracks. It must be noted that while most FOS are not influenced by environmental factors, FBG requires temperature compensation in most scenarios.

Figure 2: PZT and FBG sensors

Piezoceramic-based sensors are an advanced version of conventional piezoelectric materials such as quartz. They are designed with one of two components: Barium Titanate or Lead Zirconate Titanate. These substances do not show the piezoelectric effect in their natural state; the effect, which is mostly greater than the natural one, is developed by applying a strong electric field during poling. A damage identification strategy has been established using PZT patches (Zhu, 2009). The PZT patch is employed as an acoustic emission (AE) sensor that collects the stress-wave signals created by damage arising in a structure. The PZT patches are connected through fiber-reinforced polymer (FRP) cables to capture the AE signal during fatigue screening. Depending on the AE signals received by the PZT patches, a damage index may be determined. Implementations of this form of sensor have been confirmed for crack detection, bolt examination, and vibration and force calibration, among others. Because they can be integrated in the material, a type of "self-sensing" structure can be created. The applications described display impressive results in experimental frameworks; more tests on actual structures in variable real-world environments are therefore required to evaluate their effectiveness. The FBG and PZT sensors are shown in figure 2.
2. HARDWARE SYSTEM

The present framework is designed to detect cracks by installing fiber Bragg grating sensors and piezoceramic sensors mounted on the structure, tracking the physical quantities under moving load and no load and evaluating strain. If the sensors find any abnormal details, they notify the user by sending signals. Since the sensor signal contains the initial structural information, the damage in the structure can be identified by comparing the sensor signals before and after damage.
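One simple way to realize such a before/after comparison, assuming equally sampled baseline and current signal vectors, is a normalized root-mean-square deviation. This RMSD form is a common damage index in the SHM literature and is shown here only as a sketch, not necessarily the metric used by the authors.

```python
# Sketch of a baseline-comparison damage index over two equally sampled signal
# vectors (e.g. strain or impedance signatures). The RMSD form is a common
# choice in the SHM literature, not necessarily the authors' metric.
import numpy as np

def rmsd_damage_index(baseline: np.ndarray, current: np.ndarray) -> float:
    # deviation energy of the current signature, normalised by baseline energy
    return float(np.sqrt(np.sum((current - baseline) ** 2) / np.sum(baseline ** 2)))

baseline = np.sin(np.linspace(0.0, 10.0, 500))
damaged = baseline + 0.05 * np.random.default_rng(0).standard_normal(500)
print(rmsd_damage_index(baseline, damaged))  # grows as the signature deviates
```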

Figure 3: Schematic of specimen with sensors and measuring equipment

The FBG employs an optical sensing interrogator that contains a light source, while an impedance analyzer is used to obtain the PZT signal. Standard experiments are conducted on an aluminum beam, with the PZT and FBG sensors installed on the beam's top surface. The PZT used here is a disk of 6.35 mm diameter and 0.25 mm thickness. The typical properties of the PZT and FBG are cited in Table 2. Figure 3 shows the schematic of the test specimen.

Table 2: Material Characteristics of Sensors

Characteristics of PZT:
  Density (ρ): 7700 kg/m³
  Diameter × thickness: 6.35 mm × 0.25 mm
  Poisson's ratio: 0.31
  Young's modulus: 6.3 × 10¹⁰ N/m²
  Piezoelectric constant (d): 1750 V
  Damping coefficient: 0.34

Characteristics of FBG:
  Wavelength-strain sensitivity factor (α): 0.67 pm/με
  Bragg wavelength (λB): 870 nm
  Grating size: 1 cm
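Reading the Table 2 sensitivity factor as 0.67 pm of Bragg-peak shift per microstrain, strain recovery from an FBG measurement reduces to dividing the measured wavelength shift by that factor. The snippet below is a minimal sketch under that assumption.

```python
# Sketch of strain recovery from an FBG wavelength shift, assuming the Table 2
# sensitivity factor is 0.67 pm per microstrain (Bragg wavelength 870 nm).

ALPHA_PM_PER_MICROSTRAIN = 0.67    # wavelength-strain sensitivity factor
BRAGG_WAVELENGTH_NM = 870.0        # nominal Bragg wavelength from Table 2

def strain_from_shift(delta_lambda_pm: float) -> float:
    """Return strain in microstrain for a measured Bragg-peak shift in picometres."""
    return delta_lambda_pm / ALPHA_PM_PER_MICROSTRAIN

print(strain_from_shift(67.0))  # a 67 pm shift corresponds to ~100 microstrain
```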

EXPERIMENTAL EVALUATION

The objective of the evaluation is to identify the most appropriate and effective sensor for structural health monitoring applications by designing test samples to confirm and validate the performance of the chosen sensors. Sensor evaluation is performed to determine the performance parameters of a sensor embedded in concrete structures, involving measurement distance, size, precision, energy consumption, and so on. The technique consists of positioning the sensors on the sample surface and applying test loads in order to assess the outputs. Initially both sensors, the fiber Bragg grating sensor and the piezoceramic sensor, are mounted on the sample along the same axis, keeping the same distance from each other and from the edges, and the preliminary measurements are taken. Specific loads are then applied along the surface and the strain values are measured; the calculated strain is then compared with the theoretical strain. The sample used is 80 cm long and supported at both ends, and the load was applied at the core of the specimen beam. Figure 4 shows the schematic of the experimental setup with the crack locations, while Figure 5 shows the actual experimental setup. Additional cracks were made one by one at positions C1-C5 and the readings were collected.

Figure 4: Experimental setup schematic with crack locations

Figure 5: Laboratory setup
For each crack scenario, the strains measured by the FBG are shown in figure 6. From the initial (no-crack) measurement through the C1, C2, C4, and C5 cracks, the load-versus-strain lines are almost overlapping, as these cracks are at positions too far away for the FBG to identify. Nevertheless, when a crack is made at position C3 (the central location), within the FBG detection range, the slope of the load-versus-strain line increases significantly. The FBG therefore measures strain in the central region of the beam and provides crack information only for the central part of the beam.

Figure 6: FBG-measured strain for various crack scenarios

Figure 7: Gross strain for various crack scenarios measured by PZT

For the PZT, the voltage measured by the unit is recorded for each crack with increasing load. It varies as the load progresses, showing that the PZT patch measures the total strain produced along the whole length of the specimen beam. Therefore, PZT can be used to measure the maximum strain over a certain length. In figure 7, the maximum strain is plotted against rising load for the various crack scenarios. The load-versus-strain lines for no crack, C1, and C5 almost overlap. The slope of the load-versus-strain line increases significantly as additional cracks are created in the sampled section at crack positions C2, C3, and C4. Therefore, for the sample considered, the PZT measures strain over its entire length, resulting in global strain monitoring; a PZT patch tracks the changes in mechanical impedance.

CHALLENGES AND FUTURE TRENDS IN THE DEVELOPMENT OF SENSORS

While innovative sensing technologies and sensors have been sufficiently established and implemented effectively in practice, the following obstacles are believed to be significant for future research and production. Nanotechnology-based sensors comprising nano-materials, including carbon black particles, carbon nano-fibers, and carbon nano-tubes, have been suggested for blending with cement or epoxy resin, establishing modern self-sensing materials and sensors (Li et al., 2008). Such nano-film substances are ideally suited for shielding as well as sensing purposes in civil structures and are therefore accepted as embedded sensors in concrete laminates for either the production phase or structural health monitoring. Nano-engineering presents a fresh chance to build innovative sensors that can be deployed to establish decentralized sensor networks. In the creation of new sensing techniques and sensors, learning from nature would also be beneficial: to obtain the various signals generated by a structure, series of sensors can be devised that are comparable to biological senses; for instance, some specimens are sensitive to noise, some to illumination, and others to electromagnetic signals. Bio-inspired sensing technologies and sensors are creating a whole new opportunity in the field of SHM.

CONCLUSIONS

The SHM system may also be employed to monitor the long-term reliability of structures and their performance in the event of natural disasters. In recent decades, major progress has been made in SHM technology, including installing embedded sensors during production and quantitative evaluation of the structure to assess its current health. From the test results it is evident that the FBG sensor provides structural information only for a limited area, whereas the PZT patches provide data for global monitoring of the structure. Significant innovations in electronics, sensors, and non-conventional approaches contribute to SHM advancement. SHM can be employed with several kinds of sensors; in order to achieve multidimensional structural measurements, it is sometimes essential to assimilate various kinds of sensors into an integrated SHM system, since each individual sensor has its own advantages and limitations. Several issues and challenges still demand to be addressed, such as how to create a broad sensor network and incorporate a variety of sensors together, how to develop the communication channel between the sensor networks and the processing facility, and techniques to interpret all the data from a variety of sensors. Wireless Sensor Networks have recently emerged as a successful low-cost framework for linking large sensor networks. SHM technologies will evolve in the foreseeable future from single-variable, local-region monitoring by a couple of sensors to multi-parameter monitoring of massive structures with large sensor networks.
REFERENCES

1. Rainieri, C., Fabbrocino, G., Song, Y. & Shanov, V. (2011). CNT composites for SHM: a literature review.
2. Peeters, B. & De Roeck, G. (2001). One-year monitoring of the Z24-bridge: environmental effects versus damage events. Earthquake Engineering & Structural Dynamics, 30, 149-171. (doi: 10.1002/1096-9845(200102)30:2<149::AID-EQE1>3.0.CO;2-Z)
3. Alves, M. M., Pirmez, L., Rossetto, S., Delicato, F. C., de Farias, C. M., Pires, P. F., dos Santos, I. L. & Zomaya, A. Y. (2017). Damage prediction for wind turbines using wireless sensor and actuator networks. Journal of Network and Computer Applications, 80, 123-140.
4. Ihn, J. B. & Chang, F. K. (2004). Detection and monitoring of hidden fatigue crack growth using a built-in piezoelectric sensor/actuator network: I. Diagnostics. Smart Materials and Structures, 13, 609.
5. Chang, P. C., Flatau, A. & Liu, S. C. (2003). Review paper: health monitoring of civil infrastructure. Structural Health Monitoring: An International Journal, 2(3), 257-267.
6. Aygun, B. & Gungor, V. C. (2011). Wireless sensor networks for structure health monitoring: recent advances and future research directions. Sensor Review, 31(3), 261-276.
7. Otto, C., Milenković, A., Sanders, C. & Jovanov, E. (2006). System architecture of a wireless body area sensor network for ubiquitous health monitoring. Journal of Mobile Multimedia, 1, 307-326.
8. Yan, S., Ma, H., Li, P., Song, G. & Wu, J. (2017). Development and application of a structural health monitoring system based on wireless smart aggregates. Sensors, 17, 1-16.
9. Yuan, S., Liang, D., Shi, L., Zhao, X., Wu, J., Li, G. & Qiu, L. (2008). Recent progress on distributed structural health monitoring research at NUAA. Journal of Intelligent Material Systems and Structures, 19, 373-386.
10. Sun, R. J., Sun, L. M. & Sun, Z. (2008). Application of FBG sensing technologies to large bridge structural health monitoring. Journal of Tongji University (Natural Science), 36(2), 149-154.
11. Zhu, H. P. (2009). Smart Method of Structural Damage Detection. Beijing: China Communications Press (in Chinese).
12. Li, H., Xiao, H. G. & Ou, J. P. (2008). Electrical property of cement-based composites filled with carbon black under long-term wet and loading condition. Composites Science and Technology, 68(9), 2114-2119.


A Detailed Study on CAD-based Mammogram Techniques

Jerald Prasath, L. Prinza
TKR College of Engineering and Technology

Abstract-
The aim of this paper is to study mammograms, which help to detect breast disorders. Early detection of breast abnormalities can improve the survival rate, and this can be done through mammogram screening. Various mammogram techniques and various mammogram abnormalities are discussed in this survey. The paper also discusses various aspects of existing enhancement and segmentation algorithms, listing the importance and scope of each. This detailed survey will also serve as a motivational tool for researchers involved in mammogram study.

Keywords – Mammogram, CAD, enhancement, segmentation

Introduction
Cancer is a term used for diseases in which abnormal cells divide without control and are able to invade other tissues. Cancer cells can spread to other parts of the body through the blood and lymph systems. Cancer is not one disease but many; there are more than 100 different types of cancer [1]. All cancers begin in cells, the body's basic unit of life. To understand cancer, it is helpful to know what happens when normal cells become cancer cells. The body is made up of many types of cells, which grow and divide in a controlled way to produce more cells as they are needed to keep the body healthy. When cells become old or damaged, they die and are replaced with new cells. Sometimes, however, this orderly process goes wrong: the genetic material (DNA) of a cell can become damaged or changed, producing mutations that affect normal cell growth and division. When this happens, cells do not die when they should and new cells form when the body does not need them. The extra cells may form a mass of tissue called a tumor. Not all tumors are cancerous; tumors can be benign or malignant. Benign tumors are not cancerous: they can often be removed, in most cases they do not come back, and their cells do not spread to other parts of the body. Malignant tumors are cancerous: cells in these tumors can invade nearby tissues and spread to other parts of the body. The spread of cancer from one part of the body to another is called metastasis. Breast cancer represents the second leading cause of cancer death in women, exceeded only by lung cancer. Breast cancer is a malignant tumor that has developed from cells of the breast [2]: a group of cancer cells that may grow into (invade) surrounding tissues or spread (metastasize) to distant areas of the body. The disease occurs almost entirely in women, but men can get it too. There are many types of breast cancer, and sometimes a single tumor can have a combination of many types.

STATISTICAL REPORTS ON BREAST CANCER

Breast cancer is the best-known disease in women in both developed and developing countries, and it is the most frequently diagnosed malignancy among women in 140 of 184 countries around the world. It is estimated that, worldwide, more than 508,000 women died in 2011 because of breast malignancy (Global Health Estimates, WHO 2013). Almost 1.7 million new breast cancer cases were diagnosed in 2012, amounting to around 12 percent of all new malignancy cases and 25 percent of all cancers in women. In 2018, almost two million women were expected to be diagnosed with the disease. Globally, breast cancer currently accounts for one in four of all tumors in women [3].
A. Impact of breast cancer in Europe
It is estimated that in Europe there were 130.9 cancer deaths per 100,000 men and 82.9 per 100,000 women in 2019; the anticipated rate for breast cancer was 13.4. Trends in breast malignancy mortality were favourable in all six nations considered, apart from Poland. Nearly 16.4% of the women who suffered from breast cancer were in the age group 50-69; rates were also observed at ages 20-49 (13.8%), while being more modest at ages 70-79 (6.1%). Compared with the peak rate in 1988, more than 5 million deaths due to cancer have been averted in the EU over the 1989-2019 period [3].

B. Impact of breast cancer in US


As of January 2019, there were more than 3.1 million women with a history of breast disease in the U.S. In 2019, an estimated 268,600 new cases of invasive breast cancer were expected to be diagnosed in women in the U.S., alongside 62,930 new cases of non-invasive breast malignancy. Around 2,670 new cases of invasive breast malignancy were expected to be diagnosed in men in 2019. A man's lifetime risk of breast cancer is around 1 in 883 [3].
C. Impact of breast cancer in India
The statistical reports of breast cancer in India are presented in Figure 1.1. The graph is drawn between age groups and cancer occurrence: the blue shading indicates the occurrences of 25 years ago, and the maroon shading indicates the situation today. At present, people in the age group of 20-40 years are greatly affected by breast cancer compared with previous years. Nowadays the cancer occurrence rate is reduced owing to self-awareness and health development training programs. One particular explanation for the high number of young patients is the population pyramid, which is expanded at the base and centre and narrow at the top, indicating an enormous population at young ages and a much smaller one at older ages.

Figure 1.1 Breast cancer in India


D. Screening Methodology for Breast Cancer
Screening is used to look for breast cancer before the individual has any symptoms or signs. The objectives of malignancy screening are to reduce the number of individuals who develop the disease and to lower the number of individuals who die from it. There are various screening methods to diagnose breast ailments.
i) Breast self-exam
Self-examination of the breasts is one of the basic screening methods. Women should observe the changes in their own breasts frequently; if they notice any unusual or anonymous change, they must consult a clinical physician.
ii) Breast exam by a health care provider
Screening of the breast can also be done by healthcare providers, normally during periodic health check-ups. Well-trained health care providers can identify exactly the unusual changes in the breast caused by various health issues. If one asks how to detect breast abnormalities, the answer is mammography.

Mammography
Many methods exist to screen the breast, such as ultrasound imaging, magnetic resonance imaging, Positron Emission Mammography (PEM), and breast-specific gamma imaging. Even with this many screening methods, mammography is the best tool for screening for breast cancer. The image taken by the mammography technique is called a digital mammogram; it is an image that displays the abnormalities of the breast. A mammogram is like an X-ray image but is designed specifically to examine the breast, and it exhibits minimal radiation exposure. Moreover, unlike other screening methods, digital mammography is able to scan the breast in both horizontal and vertical directions. Digital images taken by this technology can be stored and shared electronically, which enables remote clinical consulting with experts in different locations. Mammography is suitable for younger women as well as aged women, with high precision. The procedure of digital mammography is given in Figure 1.5. Screening mammography is a type of mammogram that checks an individual who has no symptoms; it can help lessen the number of deaths from breast malignancy among women aged 40 to 70. The method has some limitations: mammograms can find something that looks strange but is not malignant, which prompts further testing and can cause anxiety, and unfortunately mammograms can also miss a malignant growth that is there.
E. Mammogram
There are two reasons to take a mammogram: the first is to screen and the second is to diagnose. Screening is done in women who have no symptoms of any breast defects, while diagnostic mammograms are done for women who have symptoms of breast cancer. Diagnostic mammograms consume more time than screening because more images are taken. Mammography involves minimal radiation exposure [4].


Figure 3.1 Mammogram

G. Mammogram Technologies
Various types of mammograms are available to detect breast disease, including conventional mammography, 3D mammography, film-screen mammography, and digital mammography [5].
1. Conventional mammography
Conventional mammography is an extraordinary instrument for identifying breast malignancy. This low-dose X-ray system is used to make a diagnostic image without causing any inconvenience to the patient. During the procedure, a specially qualified radiologic technologist positions the breast in the mammography unit: the breast is set on a special platform and compressed with a clear plastic paddle while the technologist checks it. X-ray images of each breast are then taken from two distinct angles, and these images are saved either on film or in digital form.
2. 3D mammography
The 3D mammography technique is like conventional mammography but allows the breast to be seen as a progression of layers, enabling radiologists to view the breast in 1-millimeter slices instead of the full thickness from the top and from the side. It may be especially valuable for women with dense breast tissue or those at high risk of developing breast malignancy. The benefit of 3D mammography is that it essentially diminishes false-positive call-backs and is progressively more exact in distinguishing breast cancers early. During the procedure, the woman is positioned before a 3D mammography machine and her breasts are held in place by two compression plates. The pressure put on the breasts by the plates can cause discomfort but lasts only a few moments. When ready, the radiologic technologist starts the 3D mammography machine and an automated arm moves in a circular arc over the woman's breast as multiple X-ray images are taken. The dose is similar to film mammography and is only somewhat higher than in standard 2D digital mammography. The sweep itself takes under a few seconds per view, and the whole procedure takes around 10 to 20 minutes.
3. Film-screen mammography
Film-screen mammography is a breast radiographic strategy that uses a low-dose X-ray system to make an analytic picture without inconvenience to breast cancer patients. It comprises special single-emulsion film and high-detail intensifying screens, which are used to obtain high-contrast images. This system gives a fine image at radiation exposure levels of under 1 rad, in contrast with older techniques that produce radiation levels of as much as 16 rad. The strategy is very reliable, and the films are prepared and reviewed by the specialist who reads the X-rays.
4. Digital mammography
Digital mammograms (DM) deliver around three-fourths of the radiation that film-screen mammograms do. First, an expert positions the breast between two plates, flattens and compresses it, then takes images of the breast. The method is a little uncomfortable for patients, and the whole procedure takes around 20 minutes. The images are recorded directly on a PC, where they can be viewed on screen and specific zones can be enlarged or highlighted. The images taken by the machine can additionally be transmitted electronically from one place to another [5].
5. Computer Aided Detection (CAD)
Computer Aided Detection (CAD) is an advanced technology that performs a multistep process incorporating the recognition and classification of variations of breast tissue in mammograms [6]. A CAD system digitizes mammographic images and searches for irregular territories of density or calcification that may indicate the presence of malignancy. In its early years CAD had inadequate processing power, a limitation that has been resolved by the much greater computing capacity of present machines. Artificial intelligence can thus help in identifying the small fraction of breast malignancies that are missed by radiologists, by zooming into areas of visual deficiency and seeing what cannot be seen by the unaided eye.
6. Breast tomosynthesis
Breast tomosynthesis is a cutting-edge innovation regularly used for identifying breast disease: it captures numerous low-dose images from various angles around the chest, which are then reconstructed into a 3-D image. Breast tomosynthesis can also be performed on an advanced mammography instrument and is then called Digital Breast Tomosynthesis (DBT) [7]. Rather than using digital mammography alone, the use of the combined (3D + digital mammography) test increases diagnostic exactness for every participating radiologist and raises the rate of diseased tissue distinguished in the breast, while diminishing false-positive recall rates.
7. Three-compartment breast (3CB) imaging
Another procedure, three-compartment breast (3CB) imaging, determines the biological tissue composition of a tumor using mammography [7]. It may assist the patient by reducing superfluous breast operations and costs. In this strategy, 3CB images are obtained from dual-energy mammograms and investigated with mammography radiomics; the images are examined with the assistance of Artificial Intelligence (AI) algorithms. One primary advantage of this strategy is that it can easily be added to mammography without requiring broad changes to existing equipment, and it requires only a 10 percent extra dose of radiation.
8. Contrast-based MRI
The contrast-based methodologies comprise MRI and contrast-enhanced digital mammography. These methods can uncover small malignant growths, even in a dense breast, and are effectively incorporated into the clinic. Breast MRI is not new and is already widespread; an upgraded, abbreviated form of MRI has therefore been developed, which can be performed in under 10 minutes and saves cost. The fundamental advantage of this technique is that it yields progressively precise analysis while sparing time.

MAMMOGRAM ABNORMALITIES
Mammography is used to diagnose and screen breast abnormalities, including asymmetry between breasts, architectural distortion, microcalcifications, and masses [4]. These abnormalities are discussed in detail below.
H. Mass
Cysts containing clusters of non-cancerous fluid may be recognized as a mass in a mammogram. Compared with microcalcification, the detection of a mass is a challenging task owing to the similarity of mass intensity with healthy tissue and the morphology of the healthy textures in the breast. The size, margins, density, shape, and location of the mass are essential considerations in the detection of cancer, helping radiologists assess the presence of cancer in the breast anatomy. The majority of benign masses are compact, circumscribed, and circular or elliptical in shape [8]. Malignant lesions are commonly irregular in appearance, with blurred boundaries, and are sometimes bounded by a radiating pattern of linear spicules [8]; a few benign lesions may also have a spiculated look or a blurred edge. The appearance is given below in Figure 4.1.

Figure 4.1 Mass in mammogram


I. Microcalcification
Calcifications are tiny deposits of calcium in the breast and are visible as high-intensity regions in the mammogram [8]. Calcifications are normally classified into two types: microcalcification and macrocalcification. Breast regions spotted with scattered, coarse calcium deposits are identified as macrocalcifications; these are related to benign conditions and call for a biopsy only in rare cases. Microcalcifications, on the other hand, may be spotted in clusters, entrenched in a mass, or as isolated deposits, as given in Figure 4.2.

Figure 4.2 Microcalcification in mammogram

The size of a microcalcification typically ranges from 0.1-1.0 mm, with a diameter of around 0.5 mm on average. Any region spotted with at least three microcalcifications within around 1-2 cm is identified as a cluster. A cluster is an important cue in mammography when the reading is suspicious: approximately 30-50% of suspected cancers are detected by the presence of microcalcification clusters. Likewise, calcification clusters are present in most Ductal Carcinoma In Situ (DCIS) cancers.
J. Architectural Distortion
Architectural Distortion (AD) is used by radiologists to identify breast malignancy when the mammogram shows a region where the breast's typical appearance resembles an irregular state. Most of the time AD raises suspicion of malignancy before a genuine mass has been confirmed; indeed, AD is a common finding in retrospective review and may indicate the appearance of breast cancer [8].
Q. Bilateral Asymmetry
Bilateral asymmetry is one of the signs that helps radiologists identify the existence of breast cancer.

Figure 4.3 Bilateral asymmetry


When investigating the relevant mammograms, the right and left breasts vary from each other in their look. The BI-RADS definition of asymmetry is the existence of a greater volume or density of breast tissue, without a distinct mass or more prominent duct, in one breast as matched against the corresponding area in the other breast [5].

MAMMOGRAM ENHANCEMENT TECHNIQUES BY HISTOGRAM MODIFICATION

IV. Histogram-based Mammogram Enhancement
Even though many histogram-based algorithms are available on the imaging research platform, histogram-based contrast enhancement is still a rising field of research, and contrast enhancement is widely used in medical imaging [9]. In histogram-based contrast enhancement, the original histogram is modified by a transformation in order to obtain the desired histogram [10]. Although HE is a powerful contrasting technique, it does not fit certain cases because of distortions such as noise, excessive brightness, and sometimes loss of information [10]. To fix the pitfalls of basic HE, researchers have introduced many advanced techniques. The brightness-preserving bi-histogram equalization (BBHE) [11] technique divides the image histogram into two sub-histograms about the mean brightness; the usual HE is then applied to these two sub-histograms separately. Dualistic sub-image histogram equalization (DSIHE) was introduced in [12]; it splits the histogram into two portions by means of the median value. Many algorithms have been suggested as extensions of the BBHE method in the recent past. The authors of [13] suggested Minimum Mean Brightness Error bi-HE (MMBEBHE) and also presented Recursive Mean-Separate HE (RMSHE). Ibrahim and Kong [14] proposed brightness-preserving dynamic HE (BPDHE), and [15] offered a brightness-preserving weight-clustering algorithm for enhancement. These techniques [10] improve only the visual quality of the image but do not reduce noise. The authors of [16] introduced an image contrast enhancement method for preserving mean brightness (ICEPMB); it preserves not only the mean brightness but also the image features, through automatic histogram separation and intensity transformation. It fixes a few pitfalls of the HE approach but does not adequately overcome drawbacks such as histogram spikes and pits. These reasons have pulled researchers toward histogram-modification-based methods.

Researchers in [17] introduced Weighted Thresholded Histogram Equalization (WTHE), which modifies the histogram by weighting and thresholding prior to the HE process; it offers controllability over the contrast enhancement but generates noise on a few images. An Adaptive Gamma Correction with Weighting Distribution (AGCWD) was introduced in [18] to achieve contrast enhancement on images and videos. The authors of [19] suggested a new histogram modification method in which only the pixels that have enough contrast with their neighborhoods are counted, so that extreme contrast is avoided; they then applied the Weighted Histogram Approximation method (WAHE) to improve the final histogram and finally applied gray-level adjustments. This technique, however, needs several adjustable parameters. Automatic Robust Image Contrast Enhancement (RICE) [20] and the General Histogram Modification Framework (GHMF) [21] are improved schemes derived from WAHE: the RICE approach uses a sigmoid transfer function, whereas GHMF utilizes an S-shaped transfer mapping.

Researchers have also discussed unconventional contrast enhancement techniques. Gray Level Grouping (GLG) [22] is an unconventional approach that divides the histogram into subgroups, which are then redistributed over the gray values. It deals effectively with histogram spikes and adjusts the level of contrast to achieve good enhancement, but it has high computational complexity and fits only still images. Gaussian Mixture Modelling (GMM) was introduced for enhancement in [23]; results have shown that this method preserves the mean brightness of the image and also protects the naturalness of images, but it is weak in computational complexity. In [24], a two-dimensional histogram equalization algorithm was established by means of co-occurring neighborhood pixel pairs. Lately, [25] outperformed this method with the 2DHEHVS scheme, which obtained good perceptual similarity compared with 2DHE.
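As a concrete illustration of the bi-histogram idea described above, the sketch below implements BBHE in the form usually stated in the literature: split the histogram at the mean grey level and equalize the two halves independently, each within its own intensity range. It is a minimal NumPy rendering, not the implementation of [11].

```python
# Minimal NumPy sketch of brightness-preserving bi-histogram equalization
# (BBHE): split the histogram at the mean grey level and equalize each half
# within its own intensity range.
import numpy as np

def bbhe(image: np.ndarray) -> np.ndarray:
    img = image.astype(np.uint8)
    mean = int(img.mean())
    out = np.empty_like(img)

    def equalize(sub: np.ndarray, lo: int, hi: int) -> np.ndarray:
        if sub.size == 0:
            return sub
        hist = np.bincount(sub.ravel(), minlength=256)[lo:hi + 1]
        cdf = hist.cumsum()
        # map the sub-histogram's CDF onto the grey-level range [lo, hi]
        lut = (lo + np.round((hi - lo) * cdf / cdf[-1])).astype(np.uint8)
        return lut[sub - lo]

    lower = img <= mean
    out[lower] = equalize(img[lower], 0, mean)
    out[~lower] = equalize(img[~lower], mean + 1, 255)
    return out

mammo = (np.random.default_rng(1).random((64, 64)) * 180).astype(np.uint8)
print(abs(float(mammo.mean()) - float(bbhe(mammo).mean())))  # brightness roughly preserved
```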

EXISTING WORKS ON SEGMENTATION TECHNIQUES OF MAMMOGRAM

V. Mammogram Segmentation Techniques
The Double Thresholding (DT) partitioning technique is a basic and fundamental route for segmenting malignant growth cells in images [26]. The DT technique has been applied in [27] and [28] for mammogram image segmentation, with refinement carried out by a few morphological operations after DT. The fringes are post-processed in the final segmented image and used as a model in the mammogram image, thereby supporting doctors in analysing breast malignant growth in mammograms more readily. In [29], the authors introduced a modified FCM based on the Hausdorff distance and an adaptive choice of the neighborhood of every pixel for distance estimation and centroid updating. The drawbacks of FCM are an extended computational time and a moderately increased sensitivity to the initial estimates. Force Field Segmentation (FFS) has been applied to partition the image into incoherent regions representing the capture range of the external force. A Two-step Evidential Fusion Method has been advanced to increase the segmentation accuracy of mammograms; this method fuses information from the sources to improve the efficiency of skin-air interface detection [30]. A Deep Classifier Learning Convolution Fully Complex-valued Relaxation Neural Network approach has been proposed and reported to achieve 99% segmentation accuracy; it detects the boundary of the injured breast region by segmenting the input mammogram images, as demonstrated by [31]. The Filter Response Patches technique was proposed to classify the anomalies of mammogram images; using this technique, the mammogram anomalies are divided into two categories, malignant and benign [32]. An end-to-end adversarial FCN-CRF network has been designed to segment mammogram images and exhibits the performance of FCN-CRF networks [33]. The Multiple Instance Learning algorithm discovers the condition of the mammogram images, i.e., whether the area of interest is cancerous or not; this method was used to distinguish the cancerous region from the benign region with the development of CAD systems [34]. Automated analysis of unregistered multi-view mammograms with a deep learning approach was introduced to find the danger behind breast malignancy; these deep learning algorithms support the classification of multi-view and multi-modal input mammograms and segmentation maps [35]. A novel method based on Textons and the Local Configuration Pattern was proposed to find the malignancy of the breast region; in this method, there is no need for any segmentation step to differentiate the malignant and benign regions. 3-D breast ultrasound using adaptive region growing was introduced for the early detection of breast cancer; segmentation in this method is carried out in three steps, namely de-speckling, free segmentation and fine segmentation [36]. A DLPE-based level set method was evolved to distinguish the affected area using suspicious-region (SR) segmentation and to locate the exact region of interest [37].
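As an illustration of the double-thresholding idea described above, the following is a minimal Python sketch that combines two gray-level cut-offs with morphological post-processing. The function name, the connectivity rule and the 3 x 3 structuring element are illustrative assumptions; the cited works choose thresholds and operators per image.

    import numpy as np
    from scipy import ndimage

    def double_threshold_segmentation(img, low, high):
        """Sketch of double-thresholding (DT) segmentation with morphological
        post-processing. `low` and `high` are assumed per-image cut-offs."""
        strong = img >= high
        weak = (img >= low) & (img < high)
        # Keep weak pixels only if they connect to a strong region.
        labels, _ = ndimage.label(weak | strong)
        keep = np.unique(labels[strong])
        mask = np.isin(labels, keep[keep > 0])
        # Morphological opening/closing to clean fringes in the final mask.
        mask = ndimage.binary_opening(mask, structure=np.ones((3, 3)))
        mask = ndimage.binary_closing(mask, structure=np.ones((3, 3)))
        return mask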

Conclusion

This study gives a detailed review of various mammogram screening tools and their techniques. Moreover, this work describes the importance of mammogram enhancement and segmentation algorithms, and narrates how a CAD system distinguishes the cancerous region from the benign region. Even though mammography is used to screen for masses present in the breast, in some cases radiologists miss the mass lesion, so computer-aided detection helps reduce the mortality rate. Through this study, it has been found that the efficiency of segmentation depends completely on the effectiveness of the enhancement algorithm.

References
1. Feig, SA 2002, 'Clinical evaluation of computer-aided detection in breast cancer screening', Seminars in Breast Disease, vol. 5, no. 4, pp. 223-230.
2. Altekruse, SF, Kosary, CL, Krapcho, M et al. (eds) 2010, SEER cancer statistics review, 1975-2007: fast stats, National Cancer Institute, Bethesda, MD. http://seer.cancer.gov/csr/1975-2007/index.html
3. Jerald Prasath, G 2020, 'Development of mammogram segmentation algorithm to detect microcalcification and mass: a hyper-elastic model based approach', thesis submitted to Anna University, Chennai.
4. Feig, SA 2002, 'Clinical evaluation of computer-aided detection in breast cancer screening', Seminars in Breast Disease, vol. 5, no. 4, pp. 223-230.
5. Jinshan Tang & Rangaraj M Rangayyan 2009, 'Computer-aided detection and diagnosis of breast cancer with mammography: recent advances', IEEE Transactions on Information Technology in Biomedicine, vol. 13, no. 2, March 2009.
6. Like, Nanan YanKang 2010, 'Mass computer-aided diagnosis method in mammogram based on texture features', 3rd International Conference on Biomedical Engineering and Informatics, IEEE, 2010.
7. Wikipedia entries on cancer and breast cancer.
8. Jinshan Tang & Rangaraj M Rangayyan 2009, 'Computer-aided detection and diagnosis of breast cancer with mammography: recent advances', IEEE Transactions on Information Technology in Biomedicine, vol. 13, no. 2.
9. Sundaram, M, Ramar, K, Arumugam, N & Prabin, G 2011, 'Histogram modified local contrast enhancement for mammogram images', Applied Soft Computing, vol. 11(8), pp. 5809-5816.
10. Wei Wang & Chuanjiang He 2016,‘A fast and effective algorithm for a Poisson denoising model with total variation’, IEEE signal processing letters, vol.
24(3), pp.269-273
11. Kim, SW, Choi, BD, Park, WJ & Ko, SJ 2016,‘2D histogram equalisation based on the human visual system’, Electronics Letters, vol. 52(6), pp.443-445
12. Wang, Y, Chen, Q & Zhang, B 1999,‘Image enhancement based on equal area dualistic sub-image histogram equalization method’, IEEE Transactions
on Consumer Electronics, vol.45(1), pp.68-75
13. Chen, SD & Ramli, AR 2003,‘Contrast enhancement using recursive mean-separate histogram equalization for scalable brightness preservation’, IEEE
Transactions on consumer Electronics, vol. 49(4), pp.1301-1309
14. Ibrahim, H & Kong, NSP 2007,‘Brightness preserving dynamic histogram equalization for image contrast enhancement’, IEEE Transactions on
Consumer Electronics, vol. 53(4), pp.1752-1758
15. Sengee, N & Choi, HK 2008,‘Brightness preserving weight
clustering histogram equalization’, IEEE Transactions on Consumer Electronics, vol. 54(3), pp.1329-1337
16. Huang, SC & Chen, WC 2014,‘A new hardware-efficient algorithm and reconfigurable architecture for image contrast enhancement’, IEEE
Transactions on Image Processing, vol. 23(10), pp.4426-4437.
17. Wang, Q & Ward, RK 2007, 'Fast image/video contrast enhancement based on weighted thresholded histogram equalization', IEEE Transactions on Consumer Electronics, vol. 53(2), pp. 757-764.
18. Huang, SC & Yeh, CH 2013, ‘Image contrast enhancement for preserving mean brightness without losing image features’, Engineering Applications of Artificial
Intelligence, vol. 26(5-6), pp.1487-1492
19. Arici, T, Dikbas, S& Altunbasak, Y 2009,‘A histogram modification framework and its application for image contrast enhancement’, IEEE Transactions
on image processing, vol.18(9), pp.1921-1935.
20. Gu, K, Zhai, G, Yang, X, Zhang, W & Chen, CW 2014,‘Automatic contrast enhancement technology with saliency preservation’, IEEE Transactions on
Circuits and Systems for Video Technology, vol.25(9), pp.1480-1494
21. Gu, K, Zhai, G, Wang, S, Liu, M, Zhou, J & Lin, W 2015, 'A general histogram modification framework for efficient contrast enhancement', in 2015 IEEE International Symposium on Circuits and Systems (ISCAS), pp. 2816-2819.
22. Chen, Z, Abidi, BR, Page, DL & Abidi, MA 2006, ‘Gray-level grouping (GLG): an automatic method for optimized image contrast Enhancement-part I:
the basic method’, IEEE transactions on image processing, vol. 15(8), pp.2290-2302
23. Tjahjadi, T & Celik, T 2012, 'Automatic image equalization and contrast enhancement using Gaussian mixture modelling', IEEE Transactions on Image Processing, vol. 21(1), pp. 145-156.
24. Celik, T 2012, 'Two-dimensional histogram equalization and contrast enhancement', Pattern Recognition, vol. 45(10), pp. 3810-3824.
25. Kim, SW, Choi, BD, Park, WJ & Ko, SJ 2016,‘2D histogram equalisation based on the human visual system’, Electronics Letters, vol. 52(6), pp.443-445.
26. Xu, JW, Pham, TD & Zhou, X 2011, 'A double thresholding method for cancer stem cell detection', 7th International Symposium on Image and Signal Processing and Analysis (ISPA), Dubrovnik, Croatia, 4-6 Sept., pp. 695-699.
27. Mohamed Tarek GadAllah & Samir Badawy 2013, 'Diagnosis of fetal heart congenital anomalies by ultrasound echocardiography image segmentation after denoising in curvelet transform domain', Online Journal on Electronics and Electrical Engineering (OJEEE, ISSN 2090-0279), vol. 5, no. 2, pp. 554-560.
28. Mohamed Tarek GadAllah & Samir Badawy 2013, 'Diagnosis of fetal heart congenital anomalies by ultrasound echocardiography image segmentation after denoising in curvelet transform domain', Online Journal on Electronics and Electrical Engineering (OJEEE, ISSN 2090-0279), vol. 5, no. 2, pp. 554-560.
29. Feng, Y, Dong, F, Xia, X, Hu, CH, Fan, Q, Hu, Y, Gao, M & Mutic, S 2017, 'An adaptive fuzzy c-means method utilizing neighboring information for breast tumor segmentation in ultrasound images', Medical Physics, vol. 44, pp. 3752-3760.
30. Rihab Lajili, Karim Kalti, Asma Touil, Basel Solaiman & Najoua Essoukri Ben Amara 2018, 'Two-step evidential fusion approach for accurate breast region segmentation in mammograms', IET Image Processing, vol. 12, issue 11, pp. 1972-1982.
31. Saraswathi Duraisamy & Srinivasan Emperumal 2017, 'Computer-aided mammogram diagnosis system using deep learning convolutional fully complex-valued relaxation neural network classifier', IET Computer Vision, vol. 11, issue 8, pp. 656-662.
32. Zobia Suhail, Azam Hamidinekoo & Reyer Zwiggelaar 2018, 'Mammographic mass classification using filter response patches', IET Computer Vision, vol. 12, issue 8, pp. 1060-1066.
33. Wentao Zhu, Xiang Xiang, Trac D Tran, Gregory D Hager & Xiaohui Xie 2018, 'Adversarial deep structured nets for mass segmentation from mammograms', IEEE 15th International Symposium on Biomedical Imaging (ISBI), pp. 847-850.
34. Abdelali Elmoufidi, Khalid El Fahssi, Said Jai-andaloussi, Abderrahim Sekkaki & Mathieu Lamard 2018, 'Anomaly classification in digital mammography based on multiple-instance learning', IET Image Processing, vol. 12, issue 3, pp. 320-328.
35. Gustavo Carneiro, Jacinto Nascimento & Andrew P Bradley 2017, 'Automated analysis of unregistered multi-view mammograms with deep learning', IEEE Transactions on Medical Imaging, vol. 36, no. 11.
36. Kozegar, E, Soryani, M, Behnam, H, Salamati, M & Tan, T 2017, 'Mass segmentation in automated 3-D breast ultrasound using adaptive region growing and supervised edge-based deformable model', IEEE Transactions on Medical Imaging, vol. 37, no. 4, pp. 918-928.
37. Sourav Pramanik, Debapriya Banik & Debotosh Bhattacharjee 2015, 'Suspicious-region segmentation from breast thermogram using DLPE-based level set method', IEEE Transactions on Medical Imaging, vol. 38(2), pp. 572-584.


AN IMAGE SUPER RESOLUTION DEEP LEARNING MODEL LapSRN WITH TRANSFER LEARNING

Athirasree Das, Dr. K.S Angel Viji, Linda Sebastian
College of Engineering Kidangoor, Kerala, India

Abstract—

Super Resolution (SR) aims to convert low resolution images into high resolution images. SR methods can be categorized into two classes, namely Single Image Super Resolution (SISR) and Video Super Resolution (VSR). SISR creates a high resolution image from a single low resolution image, while VSR builds on image super resolution with the goal of restoring low resolution videos as high resolution videos. In deep learning, Convolutional Neural Networks (CNN) are a special type of deep neural network, and many deep learning methods exist for image and video super resolution. CNNs are used for high-quality image super-resolution reconstruction. The CNN based SR model deep Laplacian Pyramid Super-Resolution Network (LapSRN) is the existing method; it requires a large number of network parameters and a heavy computational load at run time to generate high-accuracy super resolution results, so LapSRN with transfer learning (LapSRN-TL) is proposed. We have analysed and compared the quantitative and qualitative results of LapSRN-TL with the LapSRN deep learning model.

Keywords— Super-resolution, Convolutional neural networks, Single image super resolution, Laplacian pyramid network, Transfer
learning.

Introduction

Super Resolution (SR) is the process of upgrading a lower resolution input image to a higher resolution image. Image super resolution achieves good results with deep learning techniques. Super resolution for images and video has two main settings, Single Image Super Resolution (SISR) [2] and Video Super Resolution (VSR) [7]. Single image super resolution aims to convert a low resolution image to a high resolution image. Based on the number of input LR images, SR methods can also be classified into single image super resolution (SISR) and multi-image super-resolution (MISR). SISR takes a single image as input and output, and SISR approaches are mainly divided into interpolation-based, reconstruction-based and learning-based methods. Video super resolution builds on SISR and aims to upsample low resolution videos to high resolution videos. Super resolution has many applications in the real world.
A single image super-resolution (SR) method built on deep convolutional neural networks (CNNs) is a deep learning super resolution model. A convolutional neural network (CNN) is a special type of deep neural network widely used in object recognition, segmentation, optical flow and super resolution. There are many deep learning methods for image and video super resolution. One of them is the Deep Laplacian Pyramid Super Resolution Network (LapSRN) [1], [4], used to reconstruct residual images and upscale to HR images. This model consists of a feature extraction branch and an image reconstruction branch. The feature extraction branch consists of convolutional layers and transposed convolutional layers: the convolutional layers extract nonlinear feature maps from the LR input images, the transposed layers upsample the feature maps to a finer level, and a further convolutional layer predicts the sub-band residual image. The image reconstruction branch upgrades the low resolution images, taking the sub-band residual image from the feature extraction branch as input at this stage to reconstruct the high resolution images. The model is trained with the robust Charbonnier loss function to handle outliers. We construct LapSRN-TL to improve the performance. CNN based methods have three aspects, namely accuracy, speed and progressive reconstruction. Accuracy is the percentage of correct predictions on test data, easily calculated by dividing the number of correct predictions by the total number of predictions. This model has a large potential to learn complicated mappings and effectively reduces undesirable artifacts. The speed of LapSRN combines fast processing with the high capacity of deep networks. Survey results demonstrate that the LapSRN method is faster than several CNN-based super-resolution models, such as the Single Image Convolutional Neural Network (SICNN), Deep Dual Attention Network (DDAN), Video Super Resolution network (VSRnet) [7] and Super-Resolution Convolutional Neural Network (SRCNN) [3]. The existing model achieves good results through parameter sharing, local skip connections and multi-scale training, and the proposed model through transfer learning. Feature extraction within the sub-network shares parameters across the pyramid levels in the parameter-sharing network architecture. Local skip connections are analysed systematically with three different methods of applying them to a specific model; by using them properly to minimize gradient vanishing and explosion problems, an 84-layer network can be trained to achieve model performance. In multi-scale training, rather than training three different models for handling 2x, 4x and 8x SR, a single model is trained to handle multiple upsampling scales; multi-scale learning improves the model through inter-scale interactions and gives better reconstruction accuracy than single-scale models.
Transfer learning [5], [8] is a useful approach in deep learning for super resolution, where a pre-trained model is taken as the starting point of a model on a second task. It is also used in predictive and classification modeling problems. Nowadays images are very helpful for transmitting information, so high resolution is very important. This model is evaluated with the metrics peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM), which are used to compare different SR networks.
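A minimal PyTorch sketch of the transfer-learning pattern described above follows: start from pre-trained weights, freeze the feature extractor, and fine-tune only the reconstruction head on the new task. The stand-in network, the checkpoint file name and the learning rate are illustrative assumptions, not the actual LapSRN-TL code.

    import torch
    import torch.nn as nn

    class TinySRNet(nn.Module):
        """Stand-in CNN; the real pre-trained LapSRN would be loaded instead."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, 64, 3, padding=1), nn.ReLU())
            self.head = nn.Conv2d(64, 1, 3, padding=1)

        def forward(self, x):
            return self.head(self.features(x))

    model = TinySRNet()
    # model.load_state_dict(torch.load("lapsrn_mono.pth"))  # hypothetical checkpoint
    # Freeze the feature extractor; only the head is fine-tuned on the new task.
    for p in model.features.parameters():
        p.requires_grad = False
    optimizer = torch.optim.Adam(
        (p for p in model.parameters() if p.requires_grad), lr=1e-4)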
The authors of [1] introduced the Laplacian Pyramid Super-Resolution Network (LapSRN), an innovative super-resolution model that upscales low-resolution images. This deep learning model is fast and achieves better performance. LapSRN has two modules, feature extraction and image reconstruction, and performs favorably against other deep learning models in terms of run-time and image quality for fast and accurate image super-resolution. The existing deep learning LapSRN model is trained with monochrome images; by applying transfer learning to LapSRN, we measure the accuracy of the model with color images.

LAPLACIAN PYRAMID NETWORK FOR SR

The Laplacian Pyramid Super-Resolution Network (LapSRN) is an innovative super-resolution model that upscales low-resolution images. It is a CNN based model, so it provides superior accuracy compared with other models and achieves fast speed with practical restoration quality. The advantages of CNN based models are simplicity and robustness.

A. Network Architecture

LapSRN-TL is a pyramid-like architecture. Fig. 1 shows that the model takes LR images as input and produces HR images as output. The model has two branches, feature extraction and image reconstruction. The feature extraction branch consists of two blocks, a convolution block and a residual block, while the image reconstruction branch contains an upsample block.

B. Feature extraction branch

The feature extraction branch consists of convolutional layers and transposed convolutional layers. The convolutional layers extract nonlinear feature maps from the LR input images, the transposed layers upsample the feature maps to a finer level, and a further convolutional layer predicts the sub-band residual image. The convolutional block contains the feature embedding sub-network. The LR input image enters the convolutional block, where an additional CNN layer (Conv_in) extracts features from the input image and another convolutional layer (Conv_res) predicts the sub-band residual image. The output of the feature embedding task is taken as the input of feature upsampling; feature embedding and upsampling belong to two different pyramid levels. The number of network parameters increases with the image scale of resolution.

C. Image reconstruction branch

The image reconstruction branch upgrades the low resolution image: it takes the sub-band residual image from the feature extraction branch as input at this stage to reconstruct the high resolution image. The upsampled image and the predicted residual image are combined to form the high resolution image; the residual block performs an element-wise addition of the residual image and the upsampled image.
The LapSRN-TL model reduces the number of network parameters by sharing parameters both across pyramid levels and within each pyramid level. The rectified linear unit (ReLU) activation function is used in the deep learning model to achieve better performance. The feature embedding subnetwork has recursive blocks, and each recursive block has distinct convolutional layers, which controls the number of parameters in the entire model.
The total depth becomes

depth = (D × R + 1) × L + 2 (1)
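For illustration, with assumed (not paper-specified) values D = 10 distinct convolutional layers per recursive block, R = 1 recursive block and L = 2 pyramid levels, equation (1) gives depth = (10 × 1 + 1) × 2 + 2 = 24 layers.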

Fig. 1. LapSRN Network architecture[1]

D. Loss function

A loss function (or error function) provides a scalar measure of model performance and is used for model evaluation and optimization. LapSRN-TL uses the Charbonnier loss function, a differentiable, outlier-robust variant of the L1 loss. Mean squared error, by contrast, is calculated as the average of the squared differences between the predicted and actual values.
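A minimal PyTorch sketch of the Charbonnier loss follows; the constant eps = 1e-3 is a common choice and is assumed here rather than taken from the paper.

    import torch

    def charbonnier_loss(pred, target, eps=1e-3):
        """Charbonnier loss: a smooth, outlier-robust variant of L1.
        eps is a small smoothing constant (assumed value)."""
        return torch.mean(torch.sqrt((pred - target) ** 2 + eps ** 2))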
IMPLEMENTATION

The LapSRN model takes a low resolution monochrome image as input and produces a high resolution monochrome image as output; a monochrome image carries intensity in a single band. The LapSRN model is trained with monochrome images, while the LapSRN-TL model maps a color low resolution input image to a high resolution output image. At 2x scale, the LapSRN-TL model takes an input image of size 128 x 128 and produces an output image of size 256 x 256.
The LapSRN model uses convolutional layers, so the model trains easily on monochrome images, which helps achieve good results. The LapSRN-TL model uses 64 filters in the convolutional, residual and upsampling layers, with a filter size of 3 x 3. The model is trained and tested on the DIV2K dataset, a collection of DIVerse 2K-resolution high quality images, which is divided into train and test data; LapSRN-TL takes 600 images for training and 200 images for testing. The model uses a batch size of 64 and HR patches of size 128 x 128. An epoch has 600 iterations of back propagation.
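The following is a minimal Python sketch of how a 2x training pair could be prepared from a DIV2K image by bicubic downsampling, matching the 128 x 128 to 256 x 256 sizes stated above; the function name, the grayscale conversion and the file-path handling are illustrative assumptions.

    from PIL import Image

    def make_lr_hr_pair(path, scale=2, hr_size=256):
        """Sketch of preparing one training pair: resize to the HR size,
        then bicubic-downsample to get the LR input (assumed pipeline)."""
        hr = Image.open(path).convert("L").resize((hr_size, hr_size), Image.BICUBIC)
        lr = hr.resize((hr_size // scale, hr_size // scale), Image.BICUBIC)
        return lr, hr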

Fig. 2. LapSRN implementation model

EXPERIMENT RESULTS
The LapSRN model is trained and tested with monochrome images, while LapSRN-TL is trained and tested on colour images.

A. Dataset Exploration

Fig. 3. DIV2K: Dataset sample images

Fig. 4. DIV2K: 128 × 128 size images

Fig. 5. Sample 2 scale input monochrome image of the LapSRN model. DIV2K dataset images are downsampled to form the low resolution input image.

Fig. 6. LapSRN model sample 2 scale output with monochrome images

Fig. 7. Low resolution and high resolution images of the LapSRN-TL model trained and tested on colour images.

Fig. 8. LapSRN model analysis.

Table 1: Comparison and Results

Model                                       Scale                Dataset   PSNR    SSIM
LapSRN                                      2 scale resolution   DIV2K     33.82   0.8014
LapSRN with Transfer Learning (LapSRN-TL)   2 scale resolution   DIV2K     37.52   0.9760

Quantitative evaluation uses the objective metric peak signal-to-noise ratio (PSNR) and the perceptual metric structural similarity (SSIM). PSNR is inversely related to the logarithm of the mean squared error (MSE) between the ground truth image and the generated image, while SSIM measures the structural similarity between two images.
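A minimal Python sketch of computing both metrics with scikit-image follows, assuming 8-bit grayscale numpy arrays; the helper name is illustrative.

    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    def evaluate_sr(ground_truth, generated):
        """Compute PSNR (higher is better, inversely related to log MSE)
        and SSIM (1.0 means identical) for 8-bit grayscale arrays."""
        psnr = peak_signal_noise_ratio(ground_truth, generated, data_range=255)
        ssim = structural_similarity(ground_truth, generated, data_range=255)
        return psnr, ssim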

CONCLUSION

This review shows that a large number of deep learning video and image super resolution models exist. Deep learning is useful for the classical computer vision problem of super-resolution and can achieve good quality and speed. In the proposed work, the CNN based LapSRN is trained and tested with monochrome images, and the LapSRN-TL model is trained and tested with colour images, achieving better performance through transfer learning. LapSRN-TL is a single model handling 2x resolution that can be applied to multiple upsampling scales. The model reconstructs sharper and more accurate images and performs favorably on the DIV2K dataset for 2x super resolution.

REFERENCES

[1] Lai, Wei-Sheng, Jia-Bin Huang, Narendra Ahuja, and Ming-Hsuan Yang. "Fast and accurate image super-resolution with deep Laplacian pyramid networks." IEEE Transactions on Pattern Analysis and Machine Intelligence 41, no. 11 (2018): 2599-2613.
[2] Yang, Wenming, Xuechen Zhang, Yapeng Tian, Wei Wang, Jing-Hao Xue, and Qingmin Liao. ”Deep learning for
single image super-resolution: A brief review.” IEEE Transactions on Multimedia 21, no. 12 (2019): 3106- 3121.
[3] Wang, Zhihao, Jian Chen, and Steven CH Hoi. ”Deep learning for image super-resolution: A survey.” IEEE
transactions on pattern analysis and machine intelligence (2020).
[4] Dong, Chao, Chen Change Loy, Kaiming He, and Xiaoou Tang. "Image super-resolution using deep convolutional networks." IEEE Transactions on Pattern Analysis and Machine Intelligence 38, no. 2 (2015): 295-307.
[5] Zhang, Yang, Ruohan Zong, Jun Han, Daniel Zhang, Tahmid Rashid, and Dong Wang. ”Transres: a deep transfer
learning approach to migratable image super-resolution in remote urban sensing.” In 2020 17th Annual IEEE International
Conference on Sensing, Communication, and Networking (SECON), pp. 1-9. IEEE, 2020.
[6] D. Glasner, S. Bagon and M. Irani, ”Super-resolution from a single image,” 2009 IEEE 12th International Conference on
Computer Vision, Kyoto, Japan, 2009, pp. 349-356, doi: 10.1109/ICCV.2009.5459271.
[7] A. Kappeler, S. Yoo, Q. Dai and A. K. Katsaggelos, "Video Super-Resolution With Convolutional Neural Networks," IEEE Transactions on Computational Imaging, vol. 2, no. 2, pp. 109-122, June 2016, doi: 10.1109/TCI.2016.2532323.
[8] Huang, Yongsong, Zetao Jiang, Rushi Lan, Shaoqin Zhang, and Kui Pi. ”Infrared Image Super-Resolution via
Transfer Learning and PSRGAN.” IEEE Signal Processing Letters 28 (2021): 982-986.


AMPLITUDE CHARACTERISTICS OF STRONG GROUND MOTION ACCELEROGRAMS - A REVIEW


M. Beena Mol¹, L. Prinza², Jerald³, Johny Elton⁴ and Ignisha Rajathi⁵
¹Assistant Professor, Civil Engineering Department, LBS College of Engineering (A Government of Kerala undertaking)

ABSTRACT

This literature review discusses the characteristics of strong ground motion accelerograms. The characteristics of accelerograms are essential for understanding the basic loading pattern of earthquake forces on a structure. This paper reviews the amplitude characteristics, namely the peak ground acceleration, peak ground velocity and peak ground displacement, and the duration parameters, namely the significant duration and the bracketed duration.

INTRODUCTION

The strong ground motion (SGM) during an earthquake (EQ) is recorded as a discrete time series of ground-shaking acceleration magnitudes called the accelerogram. Accelerograms are the only available source of the most important, detailed and critical information about the EQ random cyclic forces affecting structures. Accelerograms are very often influenced by the size and source mechanism, travel path, local site conditions, etc. (Tsai 2002). Every recorded accelerogram is made up of several wave packets corresponding to the different types of waves travelling through the earth's interior and propagating along the earth's surface. Every observed EQ accelerogram contains information about the nature of the ground shaking in the form of amplitude, frequency, energy content, duration and phase characteristics (Solomos et al 2008). The parameters used to characterise the SGM for the purpose of determining the seismic demand in structural engineering analysis and design are the peak ground acceleration (PGA), peak ground velocity (PGV), peak ground displacement (PGD), response spectrum (RS), frequency amplitude spectrum (FAS), etc. (Joyner & Boore 1988, Colunga et al 2009).

The effect of strong EQs on engineering structures is a universal problem. Efforts are always made to minimize these impacts through better planning, improved construction and early warning capabilities, but these efforts cannot succeed without a thorough understanding of the ground motion characteristics (Iwan 2012, Kawashima et al 1984). Seismic ground acceleration plays an important role in assessing the effects of earthquakes on the built environment, persons and the natural environment (Ocola 2008). It is the parameters of seismic wave motion recorded as accelerograms on which EQ resistant building design and construction are based. The level of damage is, among other factors, directly proportional to the severity of the ground acceleration, making it the most important information for disaster risk prevention and mitigation programs. Also, it is a well-known fact that among all possible options to define the seismic input for improved structural analysis and design, the natural EQ force acting at the base of the structure, recorded in the form of three-component ground surface accelerograms, has emerged as the most attractive option (Iervolino et al 2008). The characteristics that are directly observed from the time domain accelerogram by visual inspection are the PGA, the bracketed duration and the significant duration of the shaking history (Hudson 1979). The most complete description of EQ ground motion is given by the accelerogram, which expresses the full time history of the ground acceleration (Khemici & Chiang). The accelerogram characteristics used in this research work are shown in figure 1.
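Since the bracketed and significant durations are named above as directly observable characteristics, the following is a minimal Python sketch of their conventional definitions: the time from first to last exceedance of a threshold (conventionally 0.05 g) for the bracketed duration, and the 5-95% build-up of the Arias intensity integral for the significant duration. The function name and the uniform-sampling assumption are illustrative.

    import numpy as np
    from scipy.integrate import cumulative_trapezoid

    def duration_parameters(acc, dt, threshold=0.05 * 9.81):
        """Sketch of bracketed and significant durations for a uniformly
        sampled accelerogram `acc` in m/s^2 (threshold = 0.05 g)."""
        idx = np.flatnonzero(np.abs(acc) >= threshold)
        bracketed = (idx[-1] - idx[0]) * dt if idx.size else 0.0
        ia = cumulative_trapezoid(acc ** 2, dx=dt, initial=0.0)  # proportional to Arias intensity
        t5 = np.searchsorted(ia, 0.05 * ia[-1]) * dt
        t95 = np.searchsorted(ia, 0.95 * ia[-1]) * dt
        return bracketed, t95 - t5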

Amplitude characteristics

The most common amplitude parameters that characterise EQ accelerograms are the PGA, PGV and PGD (Alavi & Gandomi 2011, Douglas 2001, Boore 2008). The maximum amplitude values are the most important indices for any application in earthquake engineering (Shoji et al 2004, Bozorgnia et al 2010). The PGA is the maximum acceleration amplitude value, read directly from the recorded ground accelerogram time history. The PGV is the maximum velocity amplitude value, read from the velocity time history obtained by a single integration of the ground acceleration time series (Lam et al 2003). The PGD is the maximum displacement amplitude value, read from the displacement time history, which is the result of double integration of the ground acceleration time history.
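A minimal Python sketch of extracting the three amplitude parameters by trapezoidal integration, as described above, follows; it assumes a uniformly sampled, baseline-corrected accelerogram (real records need baseline correction and filtering before integration), and the function name is illustrative.

    import numpy as np
    from scipy.integrate import cumulative_trapezoid

    def amplitude_parameters(acc, dt):
        """Sketch of PGA, PGV and PGD from an accelerogram `acc` (m/s^2)
        with time step `dt` (s), via trapezoidal integration."""
        vel = cumulative_trapezoid(acc, dx=dt, initial=0.0)   # single integration
        disp = cumulative_trapezoid(vel, dx=dt, initial=0.0)  # double integration
        return np.abs(acc).max(), np.abs(vel).max(), np.abs(disp).max()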


Figure 1. Time domain acceleration characteristics: amplitude characteristics (peak ground acceleration, peak ground velocity, peak ground displacement) and duration characteristics (bracketed duration, significant duration, Arias intensity).

Peak ground acceleration

The importance of the PGA is revealed in the development of seismic zoning maps and in the construction of the design response spectrum used in earthquake resistant construction rules (Derras & Bekkouche 2011). Knowledge of the peak seismic ground acceleration and its distribution due to strong EQ shaking is important for territorial planning, urban development, risk management, implementation of disaster prevention measures, community emergency preparedness and other structural engineering applications (Ocola 2008). The most common intensity measures (IMs) used in ground motion prediction equations or attenuation relationships are PGA characteristics (Bozorgnia et al 2010, Gunaydin & Gunaydin 2008, Pailoplee 2012, Selentis & Danciu 2008, Gandomi et al 2011). The applications of the PGA in various EQ and seismological engineering problems are shown in figure 2. The PGA (usually as a fraction of the peak) is the EQ ground motion parameter used in the seismic coefficient method of seismic structural analysis (Khemici & Chiang 1984, Leshchinsky et al 2009, Molas & Yamazaki 1992). Effective peak ground accelerations are needed in the early stages of project development, since they can be used as a starting point for preliminary seismic designs and evaluations (Matheu et al 2005, Omine 2004).

For the seismic analysis of buildings and equipment, acceleration time histories are needed (Maurel 2008). Knowing the PGA characteristic of ground motion in a specified region is vital for the design of engineering structures (Gunaydin & Gunaydin 2008, Omine 2004), and the PGA is an important parameter for assessing EQ effects at a given location (Derras & Bekkouche 2011). The estimation of SGM parameters is vital for the purposes of seismic hazard analysis and seismic design of structures in seismic zones (Bagheri et al 2011, Alavi & Gandomi 2011, Vipin et al 2009, Galluzzo 2004, Makropoulos & Burton 1985, Molas & Yamazaki 1992).

Seismic hazard assessments depicting the intensity and the peak time domain ground motion characteristics are increasingly being taken into consideration by different agencies involved in the planning, design and construction of structures (Lisa 2004). For structural engineering applications, the most widely used SGM amplitude parameter is the PGA, as it is traditionally and immediately related to the induced seismic forces which form the basis of current structural seismic code design procedures (Solomos et al 2008). Ahmad (2005) used the PGA for seismic hazard analysis in a selected area of Margala Hills, Islamabad, and Shoji et al (2004) used the PGA to relate the local site effects of an EQ.


Figure 2. Applications of peak ground acceleration (PGA) in the field of earthquake and seismological engineering: local site effects, earthquake intensity, seismic structural analysis, seismic structural design, seismic hazard analysis, damage assessment, liquefaction effects and earthquake classification.

The maximum ground acceleration of an EQ serves as an important seismic parameter for quick and adequate assessment of potential damage during an earthquake (Segou, Molas et al 2004). The PGA is used as one of the design parameters for the input layer in evaluating liquefaction-induced lateral spreading during an EQ using artificial neural networks (Siyahi et al 2008). The PGA is used in regression analysis models to determine the intensity of an EQ (Linkimer 2008, Wald 1999, Wu et al 2003). The limit generally used to classify strong motion records is a peak accelerogram acceleration > 0.05g, and the acceleration damaging to weak construction, based on the PGA, is taken as 0.1g (Mohraz 1976). The PGA is useful for expressing the relative intensity of the ground motion (i.e., small, moderate, high and very high) (Freeman 2007) and is a good intensity measure for short period ground motions (Yamada 2009).

Peak ground velocity

The PGV is determined from the SGM velocity time series of an EQ, obtained from a single integration of the acceleration time series using the trapezoidal method (Khemici & Chiang, Chopra). The PGV plays an important role in scaling response spectrum (RS) velocities to produce a family of curves called the maximum relative velocity response spectrum (Hudson, Akkar & Ucuoglu 2003, Trombetti et al 2008). Absolute velocity is the measure most closely correlated with structural damage under EQ base excitation (Read et al 1990, Akkar & Bommer 2007, Murata 2004). If the velocity pulse is of very long duration, the corresponding acceleration demand can be significantly low, causing little or no structural damage (Mollaioli et al 2012). The PGV is also used in ground motion prediction equations (Gandomi et al 2011, Skarlatoudis et al 2003).

Figure 3. Applications of peak ground velocity (PGV) in the field of earthquake and seismological engineering: seismic hazard analysis, structural damage detection, ground motion prediction equations, velocity response spectra, structural damage assessment, seismic structural analysis, performance based design and seismic hazard maps.
The PGV is used as an important SGM amplitude characteristic to relate and estimate the modified Mercalli intensity of an EQ in statistical regression analysis (Wald 1999, Wu et al 2003). Ahmad (2005) used the PGV for seismic hazard and risk analysis in a selected area of Margala Hills, Islamabad. The PGV is one of the most important parameters for analysing large structures and buried pipelines subjected to cyclic seismic forces (Yu & Jin 2008), and it is used to predict and estimate structural damage during an EQ (Bommer & Alarcon 2006). The PGV has been widely used in risk analysis and performance based design in EQ prone and sensitive areas of seismic zones (Omine 2008, Omine 2004), and it is also used to develop seismic hazard maps (Romeo 2000). The applications of the PGV in various EQ and seismological engineering problems are shown in figure 3.


Figure 4. Applications of peak ground displacement (PGD) in the field of earthquake and seismological engineering: seismic hazard analysis, structural seismic analysis, ground motion prediction equations, displacement response spectra, seismic risk analysis, performance based design, damage prediction models, earthquake intensity and seismic hazard maps.
earthquake and seismological engineering.
Peak ground displacement
The PGD is determined from the SGM displacement time series of an EQ, obtained from double integration of the acceleration time series or a single integration of the velocity time series using the trapezoidal method (Khemici & Chiang, Chopra). The PGD is considered one of the important input parameters in many EQ engineering and seismological applications (Moustafa 2011, Esposito et al 2012). The intensity of ground shaking during an EQ can be measured with the application of the PGD (Broglio et al). The PGD has been widely used to formulate the ground motion prediction equation of a region (Gandomi et al 2011, Skarlatoudis et al 2003), and its use extends to developing seismic hazard maps of regions prone to earthquakes (Romeo et al 2000). The PGD is one of the factors governing the seismic demand in time history analysis for performance based seismic design (Ye et al) and in the estimation of seismic risk (Galluzzo et al 2004, Esposito et al 2012). The PGD is also very helpful in displacement hazard analysis (Amiri et al 2011). The applications of the PGD in various EQ and seismological engineering problems are shown in figure 4.

In displacement based seismic design, the PGD is used to characterize the structure by an SDOF system representation of performance at peak displacement response and in the construction of tripartite curves (Priestley 2000, Trombetti et al 2008). The PGD is used in the development of structural collapse prediction models (Song & Heaton 2012) and is one of the most important parameters to be considered for earthquake resistant design of structures (Corchete 2010). The spectral ordinates for all damping levels increase with period from zero to some maximum value and then descend to converge at the value of the PGD at long periods (Bommer & Elnashai 1999).

The behaviour of long period structures is governed by the PGD (Ye et al), which is therefore a good intensity measure for long period ground motions (Yamada 2009). The maximum elastic drift demand and the displacement demand of inelastically responding structures are also determined with respect to the PGD (Lam et al 2007).

REFERENCES

Alavi, AH & Gandomi, AH 2011, ‘Prediction of principal ground motion parameters using a hybrid method coupling artificial
neural networks and simulated annealing’, Computers and Structures, vol. 89, no.1, pp. 2176- 2194.
Boore, DM & Akkar, S 2003, 'Effect of causal and acausal filters on elastic and inelastic response spectra', Earthquake Engineering and Structural Dynamics, vol. 32, no. 4, pp. 1729-1748.
Bozorgnia, Y, Hachem, MM & Campbell, KW 2010, ‘Ground motion prediction equation (attenuation relationship) for inelastic
response spectra’, Earthquake Spectra, vol. 26, no. 1, pp. 1–23.
Colunga, TA, Hernandez, UM, Perez, LE, Aviles, J, Ordaz, M & Vilar,JI 2009, ‘Updated seismic design guidelines for model
building code of Mexico’, Earthquake Spectra, vol. 25, no. 4, pp. 869 – 898.
Douglas, J & Boore, DM 2011, 'High-frequency filtering of strong-motion records', Bulletin of Earthquake Engineering, vol. 9, no. 1, pp. 395-409.
Hudson DE 1979, ‘Reading and Interpreting strong motion accelerograms’, Earthquake Engineering Research Institute, vol. 112,
no.4, pp. 78-85.
Iervolino, I & Cornell, CA 2008, 'Record selection for nonlinear seismic analysis of structures', Earthquake Spectra, vol. 21, no. 3, pp. 685-713.
Iwan, WD 2012, 'COSMOS at the frontier of strong motion monitoring', COSMOS Newsletter, vol. 1, no. 19, pp. 1-15.
Joyner, WB & Boore, DM 1998, ‘Measurement, Characterisation and Prediction of strong ground motion’, Proceedings of
Earthquake Engineering and Soil Dynamics II, GT Div/ASCE, Utah, pp. 43 – 102.
Kawashima, K, Aizawa, K & Takahashi, K 1984, 'Attenuation of peak ground motion and absolute acceleration response spectra', Proceedings of the eighth world conference on earthquake engineering, vol. 2, pp. 257-264.
Khemici, O & Chiang, W 1984, 'Frequency domain corrections of earthquake accelerograms with experimental verifications', Proceedings of the eighth world conference on earthquake engineering, vol. 2, pp. 103-110.
Lam, NTK, Gaull, BA & Wilson, JL 2007, ‘Calculation of earthquake actions on building structures in Australia’, EJSE special
issue : loading on structures, vol.2, no.3, pp. 22-40
Ocola, L 2008, ‘Procedure to estimate maximum ground acceleration from macroseismic intensity rating: application to the Lima,
Peru data from the October 3-1974- 8.1 Mw earthquake’, Advances in Geosciences, vol. 14, no.1, pp.93-98.
Solomos, G, Pinto, A & Dimova, S 2008, ‘A review of the seismic hazard zonation in National Building Codes in the context of
Eurocode 8’, EUR 23563 EN-2008, JRC, European commission.
Shoji, Y, Tanii, K & Kamiyama, M 2004, ‘The duration and amplitude characteristics of earthquake ground motions with emphasis
on local site effects’, Proceedings of the thirteenth world conference on earthquake engineering, paper No: 436.
Tsai, CCP 2002, ‘Characteristics of Ground Motions’, International Training Programs for Seismic Design of Building Structures,
National Center for Research on Earthquake Engineering.


A REVIEW ON POWER GENERATION ENHANCEMENTS IN A PUMPED STORAGE POWERHOUSE BY USING APPROPRIATE GUIDE VANE SEALING MATERIAL

Dr. V. Sampathkumar¹, Dr. P. Sridharan², Dr. Parthiban KP³
¹Professor, ECE, Christ College of Engineering, Irinjalakuda, Thrissur, Kerala
²Associate Professor, VJEC, Kannur, Kerala-670632
³Assistant Professor, VSB Technical Campus, Coimbatore

Abstract

One of the essential needs for the growth of a country is generating and utilizing electricity. The increasing population has increased the demand for electricity for houses, industries, hospitals, educational institutions, offices and agriculture. Generally, electricity is generated by thermal power plants, hydropower plants, wind power plants, solar power plants, nuclear power plants and biodiesel. India has made a powerful contribution to electricity generation; it is a naturally wealthy country with mountains and water resources, and there are plans to build more hydroelectric power plants in the future. It is particularly important to establish pumped storage hydroelectric power plants. This research describes how water leaking past guide vane end sealing rubber damaged in service does not enter the turbine runner, reducing the water flow through the runner, so the required amount of electricity is not generated in a pumped storage powerhouse. This water leakage problem also affects pumping mode operations, which reduces plant efficiency. Hence the pumped storage powerhouse generates less than the targeted electricity, and more power is also required for pumping mode operations. This research is carried out with four new guide vane end sealing rubber materials, namely Hydrogenated Nitrile Butadiene Rubber, Ethylene Propylene Diene Monomer, Polyurethane and Filled Polytetrafluoroethylene, to assess their life and to find the best one, which improves power generation and reduces power consumption in pumping mode operation.

Keywords: Pumped storage power plants, Guide vane, Sealing, Optimization, Nitrile Butadiene Rubber
1.1 INTRODUCTION
India is the world's third-largest producer and third-largest consumer of electricity. The national electric grid in India had an installed capacity of 364.96 GW in 2019. Renewable power plants, which also include large hydroelectric plants, constitute 34.86% of India's total installed capacity. During the 2018-19 financial year, the gross electricity generated by utilities in India was 1,372 TWh, and the total electricity generation (utilities and non-utilities) in the country was 1,547 TWh. The gross electricity consumption in 2018-19 was 1,181 kWh per capita (Powermin.nic.in 2019).

Table 1.1 Total Sectors Installed Power Generation Capacity 2019

Sector Installed Power Generation (MW) Percentage %


State 103750.43 29
Central 91109.75 25
Private 168394.61 46
All India 363254.79 100

The Indian installed power generation capacity is mainly divided into three sectors: State, Central and Private. In India, the total installed power generation capacity is 363254.79 MW, of which the state sector contributes 103750.43 MW (29%), the central sector 91109.75 MW (25%) and the private sector 168394.61 MW (46%). The installed power generation capacity in India in 2019 is shown in table 1.1.
Electricity generation has been gradually increasing over the last five years along with population growth; the power generated is used for domestic, commercial, industrial, traction and agricultural consumption. Over the five years 2015-2019, the population increased and power consumption increased with it. In 2019, the population was 1345 million and power consumption was 1,196,309 GWh, distributed as domestic 24.76%, commercial 8.24%, industrial 41.16%, traction 1.52%, agriculture 17.69% and miscellaneous 6.63%. The growth of electricity consumption in India during the last five years is shown in table 1.2 (Indiastat.com 2019).

Table 1.2 Growth of Electricity Consumption in India

Year   Population (millions)   Power Consumption (GWh)   Domestic   Commercial   Industrial   Traction   Agriculture   Misc    Per-Capita Power (kWh)
2015   1,267                   938,823                   23.53%     8.77%        42.10%       1.79%      18.45%        5.37%   1010
2016   1,283                   1,001,191                 23.86%     8.59%        42.30%       1.66%      17.30%        6.29%   1075
2017   1,299                   1,066,268                 24.32%     9.22%        40.01%       1.61%      18.33%        6.50%   1122
2018   1,322                   1,130,244                 24.20%     8.51%        41.48%       1.27%      18.08%        6.47%   1149
2019   1,345                   1,196,309                 24.76%     8.24%        41.16%       1.52%      17.69%        6.63%   1181

1.2 PUMPED STORAGE SCHEME (PSS) IN INDIA

A Pumped Storage Scheme is a type of hydroelectric energy storage used by electric power systems for load balancing. The basic principle of PSS is to store energy by pumping water from a low-level reservoir downstream of the powerhouse (the lower reservoir) into a high-level storage reservoir (the upper reservoir) at times when the demand for power is low, and then to generate hydroelectric power from the stored water of the upper reservoir during peak load periods.

Reservoir-based hydropower plants generally utilize the water of the reservoir in a controlled manner to generate electricity, and the water discharged from the turbine passes to the tailrace, from where it joins the river. In a pumped storage scheme, the water from the tailrace is stored in the lower reservoir. During the off-peak period this water is pumped to the upper reservoir, and during peak load hours it is used again for power generation. Power for pumping is supplied either by an onsite conventional steam power plant or from a remote generating plant through the electric grid.
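As a back-of-envelope illustration of the scheme's energy balance, the short Python sketch below computes the energy stored and recovered for one fill of an upper reservoir. Every numerical value (head, volume, efficiencies) is an assumed, illustrative figure, not data from any Indian plant.

    # Illustrative round-trip energy balance for a pumped storage scheme.
    RHO, G = 1000.0, 9.81            # water density (kg/m^3), gravity (m/s^2)
    head = 300.0                     # assumed gross head between reservoirs (m)
    volume = 4.0e6                   # assumed usable upper-reservoir volume (m^3)
    eta_pump, eta_turbine = 0.90, 0.92  # assumed pumping/generating efficiencies

    e_potential = RHO * G * head * volume             # J stored in the upper reservoir
    e_generated = e_potential * eta_turbine / 3.6e12  # GWh delivered at peak hours
    e_pumping = e_potential / eta_pump / 3.6e12       # GWh consumed off-peak to fill it
    print(f"generated ~{e_generated:.2f} GWh for ~{e_pumping:.2f} GWh of pumping "
          f"(round-trip efficiency ~{eta_pump * eta_turbine:.0%})")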


Figure 1.2 Pumped Storage House Operations

A pumped storage power plant with 1000 MW generation capacity is under construction in the northern region and another with 80 MW in the western region, so in total 1080 MW is to be generated in the two pumped storage power plants under construction. The installed pumped storage power generation capacity in India is shown in table 1.3 (Power Technology in India 2018).

Table 1.3 Status of Pumped Storage Potential in India

Region          Probable Capacity (MW)   Installed Capacity Developed (MW)   Capacity Under Construction (MW)
Northern        13065 (7 Nos)            0                                   1000 (1 No)
Western         39684 (29 Nos)           1840 (9 Nos)                        80 (1 No)
Southern        17750 (10 Nos)           2005.6 (3 Nos)                      0
Eastern         9125 (7 Nos)             940 (2 Nos)                         0
North-Eastern   16900 (10 Nos)           0                                   0
Total           96524 (63 Nos)           4785.6 (9 Nos)                      1080 (2 Nos)

It is planned to develop 96524 MW of probable pumped storage capacity across 63 schemes. India currently has 4785.6 MW of generating capacity in 9 pumped storage powerhouses, and a further 1080 MW is under construction in 2 pumped storage powerhouses.

This chapter reviews several research articles on hydropower plants and pumped storage hydropower plants, the problems encountered and the problem-solving methods used to repair the plants. These articles explain the impact of leakage water on power generation at hydropower plants and how to solve it. Several research articles on sealing materials were also collected to analyse the nature of the seal material, and this helped in selecting the correct guide vane sealing material for a pumped storage powerhouse.

93
NATCON 21 College of Engineering Trikaripur
Conference Proceedings
13/12/21 & 14/12/21
LITERATURE REVIEW FOR HYDROPOWER PLANTS

Aditya Gupta et al. (2016), in their paper, discuss the problems arising due to water scarcity in a country. According to the World Water Development (UN) report, 50% of the world population is going to be under high water scarcity. Developing countries of Africa and Asia, such as Cambodia, Bangladesh, China and India, are more likely to face water scarcity. It is expected that by 2050, 70% of India's population will live in cities. With shrinking water reservoirs, low rainfall, etc., it is hard to feed and provide resources like water and electricity to such a large population. Using sensors and Information and Communication Technology (ICT), water resources can be managed and saved for future use. Sensors provide real-time monitoring of hydraulic data with automated control and alarming in the case of events such as water leakages. Analysis of this data helps in taking meaningful actions. Smart water systems reduce non-revenue water losses and reduce water consumption in agriculture. The paper presents the problems arising due to water scarcity in India and how technology can help in finding the solution, and also reviews smart water technology currently available that can be utilized by Indian citizens to save the nation from scarcity. Finally, the conclusion is drawn that there is a requirement for low-cost devices working on non-renewable energy.
Armando Carravetta et al. (2017), in their research article, state that management of the water distribution network (WDN) is performed by valve and pump control, to regulate both the pressure and the discharge between certain limits. The energy that is usually merely dissipated by valves can instead be converted and used to partially supply the pumping stations. Pumps used as turbines (PAT) can be used both to reduce pressure and to recover energy, with proven economic benefits. The direct coupling of the PAT shaft with the pump shaft in a PAT-pump turbocharger (P&P plant) allows the transfer of energy from the pressure control system to the pumping system without any electrical device. Based on experimental PAT and pump performance curves, P&P equations are given, and P&P working conditions are simulated for the operating conditions of a real water supply network. The annual energy saving demonstrates the economic relevance of the P&P plant.
Bakken T H et al. (2013), in their review paper, show that the water footprint of hydropower is used synonymously with water consumption, based on gross evaporation rates. Since the IPCC report on renewable energy (IPCC, 2012) was published, more studies on water consumption from hydropower have become available. The newly published studies do not, however, contribute to a more consistent picture of what the "true" water consumption of hydropower plants is. The dominant calculation method is the gross evaporation from the reservoirs divided by the annual power production, which appears to be an over-simplistic method that possibly produces a biased picture of the water consumption of hydropower plants. The paper documents and discusses several methodological problems in applying this simplified approach (gross evaporation divided by annual power production) to the estimation of water consumption from hydropower projects. It appears to be a paradox that a reservoir might be accorded a very high water consumption/footprint and still be the most feasible measure to improve the availability of water in a region. They argue that reservoirs are not always the problem; instead, they may contribute to the solution of the problems of water scarcity.
Booma J et al. (2018), in their article, present an overview of the availability of PSS in India and highlight the need to create PSS in different regions to meet the demand for power. The increase in electricity demand is a major challenge for the electrical industry of developing countries. In India, the rise in peak power demand necessitates energy saving plans above and beyond conventional power plants to ensure the stability of the power system. Among utility energy storage schemes, the pumped storage system (PSS) is gaining more attention, with unique operational flexibility compared to other energy storage systems, even in developed countries. PSS is the most reliable and commonly used option for large-scale energy storage worldwide. As the contribution of renewable energy to total energy rapidly increases, and considering its intermittent nature, it is all the more appropriate to build PSS plants to store energy that could otherwise disrupt the stability and safety of the electrical system.
Cuthbert Z et al. (2012) reviewed the world energy scenario and how hydropower fits in as a solution to the global sustainable energy challenge. Issues of hydropower resource availability, technology, environment and climate change are also discussed. Hydropower is sensitive to the state of the environment and to climate change: although the global potential is stated to increase slightly under climate change, some countries will experience a decrease in potential with increased risks. Adaptation measures, also discussed in the paper, are required to generate hydropower sustainably.
Gianfredi Mazzolani et al. (2017) note that estimating the current real losses in water distribution networks (WDN) is crucial to plan investments for rehabilitation, assess the rise of leakage over time, and possibly drive procedures for failure identification and repair. However, many WDNs worldwide do not yet have flow/pressure monitoring within the system, and the inlet water volume or flow rate is the only recorded data. Developing reliable procedures to estimate real losses in such circumstances is essential to assess the leakage phenomenon and eventually drive the upgrade of existing monitoring systems. This work proposes a simple bottom-up methodology to estimate leakage using WDN inflow data series while exploiting only the seasonal fluctuation of water consumption. It resorts to a data-assimilation strategy whose formulation is consistent with the physical behaviour of WDNs and requires the estimation of only a few numerical parameters. Additionally, the methodology allows the estimation of users' daily and night water consumption, thus being useful to verify or complement other leakage estimation methods. The methodology is discussed and demonstrated on both synthetic and real WDNs.
Gideon Johannes Bonthuys et al. (2019) tried to bridge the gap between the awareness of the potential for energy
recovery within Municipal Water Distribution Systems and the lack of knowledge of the extent and location of such potential to
increase the sustainability and resilience of South African cities. This is done by leveraging asset management data, contained
within municipal infrastructure asset registers and asset management plans, to identify energy recovery and leakage reduction
potential. Data from asset registers and customer profiling within the municipal asset management plans have been used to
develop a hydraulic model for a municipal water distribution system. The customer service charter within the asset management
plans describes the level of service, which is used in evaluating minimum operating pressures within the system. Comparing this
to a pressure profile from the hydraulic analysis of the model identifies excess pressure areas, exploitable for energy recovery.
The novelty of this research is the exploitation of asset management data from Infrastructure Asset Management Plans and Asset
Registers for the development of a hydraulic model to analyse energy recovery and leakage potential within a municipal water
distribution system. Asset management data are used to identify an average annual preliminary energy recovery potential of
2.3 GWh within the Polokwane Central District Metered Area, resulting in an average annual leakage reduction potential between
3.3% and 4.2% of potable water, adding to the asset management value chain.
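The hydraulic power recoverable from excess pressure follows the standard relation P = ρgQH. A minimal sketch (the flow, excess head and turbine efficiency below are illustrative values, not figures from the study):

    # Recoverable hydraulic power from excess pressure: P = rho*g*Q*H*eta.
    # Flow, excess head and efficiency are illustrative assumptions.
    def recoverable_power_kw(flow_m3s, excess_head_m, efficiency=0.65):
        rho, g = 1000.0, 9.81  # water density (kg/m3), gravity (m/s2)
        return rho * g * flow_m3s * excess_head_m * efficiency / 1000.0

    p_kw = recoverable_power_kw(flow_m3s=0.05, excess_head_m=40.0)
    print(round(p_kw, 1), "kW ->", round(p_kw * 8760 / 1000.0, 1),
          "MWh per year if run continuously")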
Giovanna Cavazzini et al. (2017) in their article describe some current trends and future challenges related to
pumped-hydro energy storage (PHES) with particular emphasis on the mechanical aspects of hydraulic machinery, power
electronics devices used for variable speed operation, and utilities’ operation strategies. After a brief introduction and historical
background, the new generation of PHES is presented with particular focus on those equipped with variable speed technology.
Typical configurations of pumped-storage hydropower plants (PSHPs) are also briefly described. The next section focuses on
reversible pump-turbines, discussing their operating limits and presenting the state-of-the-art of the research on their unstable
behaviour. The operating principle and some fundamental aspects of the electrical machines most widely used in PSHPs are
described next. Power electronics devices typically used in PSHPs for variable speed operation are described in detail, along with
some recent developments in variable speed drives. Finally, utilities’ operation strategies are reviewed in detail, and some future
challenges to make the best possible use of PHES assets according to their new role are identified.
Gizem Okyay (2010) suggested a method for parameter optimization by combining Matlab codes and commercial
computational fluid dynamics (CFD) codes in the design process. Francis type turbines are commonly used in hydroelectricity
generation. The main components of the turbine are the spiral case, stay vanes, guide vanes, turbine runner and draft pipe. The
dimensions of these parts mainly depend on the design discharge of water, head and rotor speed of the generators. The design
process begins by selecting the initial dimensions from the empirical curves, reworking them to improve the overall hydraulic
performance and obtaining a detailed description of the final geometry of the product with complete visualization of the
calculated flow field. A Francis turbine designed through this process is manufactured and installed for energy production.
Hans Kristian Hofsrud (2017) developed alternative sealing materials to prevent water leakage in hydro turbines.
The various loads acting on the seal material are analysed by finite element analysis (FEA) in ABAQUS 6.14 software, and the
seal materials' performance characteristics are collected experimentally. The results are evaluated, and recommendations for the
best material to avoid water leakage in a hydropower plant are given.
Hao Zhang et al. (2018) analyzed the dynamic response of a pumped-storage hydropower plant in generating mode.
Considering the elastic water-column effects in the penstock, a linearized reduced-order dynamic model of the pumped-storage
hydropower plant is used in this paper. As the power load is always random, a set of random generator electric power outputs is
introduced to study the dynamic behaviour of the plant. The influence of the PI gains on the dynamic characteristics of the
plant under random power load is then analysed. In addition, the effects of the initial power load and the PI parameters on the
stability of the plant are studied in depth. These results provide theoretical guidance for the study and analysis of
pumped-storage hydropower plants.
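The influence of the PI gains can be illustrated with a crude discrete-time simulation; the first-order plant below is a stand-in assumption, not the paper's linearized reduced-order model with elastic water-column effects:

    # Toy illustration of PI-gain influence on a unit's power response.
    # The first-order lag plant (time constant tau) is an assumption.
    def simulate_pi(kp, ki, setpoint=1.0, tau=2.0, dt=0.01, steps=1000):
        y, integral = 0.0, 0.0
        for _ in range(steps):
            error = setpoint - y
            integral += error * dt
            u = kp * error + ki * integral  # PI control law
            y += dt * (u - y) / tau         # first-order plant response
        return y

    for kp, ki in [(0.5, 0.1), (2.0, 0.5)]:
        print(kp, ki, round(simulate_pi(kp, ki), 3))  # gain comparison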
Hari Prasad et al. (2011) state that sediment erosion not only reduces the efficiency and durability of water turbines
but also causes problems in operation and maintenance. Many factors contribute to this erosion of hydro-turbine components.
The article presents some recommended methods to reduce sediment erosion in turbine components, and an alternative design of a
Francis turbine for sediment-laden water is also briefly discussed.
Ingunn Norang (2012), in his paper, described the potential changes in the European power system and the critical
grid-stability challenges posed by increased wind and solar generation, and evaluated different technological alternatives for
low-cost PSP and pumped-storage designs to balance solar and wind generation in terms of both power and energy. He identified
development cost and income estimates, the environmental issues related to pumped-storage hydroelectricity, and the
challenges of providing balancing power and ancillary services from the pumping plant.
J G P Andrade et al. (2014) investigated the operation and scheduling of pumped storage plants (PSP), which store
water in the headrace (upper reservoir); this stored water is then used for power generation during peak energy demand. During
off-peak and low-demand hours, the excess power generated from wind, solar photovoltaic, thermal and nuclear plants is used to
pump water back from the tailrace (lower reservoir) to the headrace.
Javier Menéndez et al. (2020), in their paper, presented a novel method to determine the round-trip energy efficiency
of pumped storage hydropower plants with an underground lower reservoir. Large-scale energy storage systems, such as
underground pumped-storage hydropower (UPSH) plants, are required in the current transition to variable renewable
energy sources to balance the supply and demand of electricity. Two Francis pump-turbines with power outputs of 124.9 and
214.7 MW (turbine) and power inputs of 114.8 and 199.7 MW (pump), respectively, were selected to investigate the overall
operation of UPSH plants. Analytical models and two-phase 3D CFD numerical simulations were carried out to evaluate the
energy generated and consumed, considering a typical water mass of 450,000 t and a maximum gross pressure of 4.41 MPa. The
results of both the analytical and numerical models show that, unlike in conventional pumped-storage hydropower plants, the
round-trip energy efficiency depends on the pressure inside the underground reservoir. The round-trip energy efficiency could be
reduced from 77.3% to 73.8% when the reservoir pressure reaches -100 kPa. In terms of energy balance, the energy generation
decreases to 3,639 MWh year−1, and the energy consumption increases up to 4,606 MWh year−1, compared to optimal conditions.
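As a back-of-envelope check on the quoted figures (treating the annual generation and consumption values as one aggregate cycle, which is our simplifying assumption):

    # Round-trip efficiency: energy generated over energy consumed per
    # pumping-generating cycle, using the annual figures quoted above.
    def round_trip_efficiency(generated_mwh, consumed_mwh):
        return generated_mwh / consumed_mwh

    print(round(round_trip_efficiency(3639.0, 4606.0), 3))  # ~0.79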
Jianru Yan et al. (2019) studied the variation of leakage and efficiency for a flat-ring seal and a labyrinth seal on a
pump-turbine with clearance widths of 0.2 mm and 0.5 mm. The results show that the effect of leakage flow cannot be neglected.
For the same sealing structure and clearance width, pump-turbine performance is affected by leakage more in turbine mode than
in pump mode. Each component's proportion of the total pressure loss hardly varies with flow rate in pump mode, which is the
opposite of turbine mode. Leakage does not change proportionally with the system flow rate. When the clearance width decreases
to 0.2 mm, the leakage is reduced markedly because the maximum entropy production occurs in the front pump chamber. The
mixing of leakage flow and mainstream at the impeller inlet in pump mode increases the total pressure and decreases the flow
angle and relative flow angle, finally reducing the impeller's work capacity.
Jonathan G. Sanchez (2012) investigated ways to minimize water leakage in Francis hydro turbines. Proper
seals and materials are placed on guide vane seal rings and wear plates to prevent water leakage. Leaking water does not flow
through the turbine runner, so stopping leakage conserves a massive amount of water that would otherwise be wasted without
generating power. Replacing seal rings, wear plates and wicket gates in the Francis turbine with better ones controls the water
leakage and improves the efficiency and capacity of hydro turbines.
Nand Kishor et al. (2007), in their review paper, broadly categorize the research work done so far on hydro plant
model development and controller design under different sections. The recent increase in blackouts in power systems has been
mostly due to growing competition and deregulation in the power industry. Power systems are complex nonlinear systems and
often exhibit low-frequency electromechanical oscillations due to insufficient damping caused by severe operating conditions.
This calls for advanced modelling and control techniques for effective control of power plants. In the case of a hydroelectric
plant, the hydro turbine is a nonlinear, non-stationary multivariable system whose characteristics vary significantly with the
unpredictable load on it, which makes it difficult to design an efficient and reliable controller; a conservatively designed
controller fails to perform as expected. Hydro plant control is therefore an application area with a new set of problems for
control engineers, mainly focused on the regulation of the turbine under considerable load variation in the power system. These
problems have not been adequately solved and continue to pose challenges to the control community. A substantial number of
relevant research papers can be found on plant modelling, design aspects of control methodologies and their performance.
Nicola Frau (2017) studied water leakage through the clearance gap between the guide vane (GV) and the covering
plates caused by erosion, which mainly affects the power generation and efficiency of Francis turbines. The guide vane is the
central component that controls and directs the water flow into the turbine runner; leakage water reduces turbine efficiency,
and day by day the leakage rate from the GV and covering plates increases as the clearance gap grows. The guide vane and
covering plate design is modified to improve turbine performance, and the results are analysed against experimental and
computational data.
Ravi Koirala, Baoshan Zhu et al. (2016) studied the guide vane clearance gap between the two parallel facing
plates (top and bottom surfaces), where water leakage occurs. These leakage flows interrupt the main flow path, causing energy
losses in hydropower plants and affecting Francis turbine performance and efficiency; the results and performance can be
analysed through computational methods.
Samuel Ayanrohunmu Olusegun Ilupeju (2015), in his review, looks at how pumped storage
hydropower plants generate electricity for all sectors according to the power requirements. Whenever there is a need for
electrical power, a large quantity of electricity is generated at the pumped storage hydropower plant. The demand for electricity
varies greatly from minute to minute, and the guide vane is used to control the flow of water through the turbine runner to
generate the necessary power. The more electricity is produced, the more expensive it becomes. These functions are monitored
by SCADA to generate electricity without interruption.
Sunil Murlider Panicker (2010) examined the seals most commonly used in turbo-machines such as turbines, pumps
and compressors. Seals are used to arrest the flow of the working fluid; leakage past the seal causes losses that affect the
efficiency of the turbo-machine, so it is essential to assess seal leakage under the given operating conditions. The behaviour of
the seal under different load and leakage conditions can be simulated with CFD tools, and the results are used to evaluate the
performance of different labyrinth seals and to select the best material to arrest the water leakage.
W Zhao et al. (2015) investigated the water leakage flow rate using computational fluid dynamics (CFD). These
leakage flows arise from the guide vane facing plates and are studied numerically, theoretically and experimentally. The leakage
flow rate can also be calculated from an empirical formula, and the results are then compared. Experimental data prove
beneficial for quickly predicting the amount of leakage water through the guide vane facing plates of a Francis turbine.
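A textbook orifice relation is one simple way to approximate leakage through a clearance gap (an illustration only, not necessarily the empirical formula used by the authors; the gap size, pressure difference and discharge coefficient are assumed):

    import math

    # Orifice-type approximation of leakage through a clearance gap:
    # Q = Cd * A * sqrt(2 * dP / rho), with illustrative parameter values.
    def gap_leakage_m3s(gap_height_m, gap_width_m, dp_pa,
                        cd=0.6, rho=1000.0):
        area = gap_height_m * gap_width_m       # clearance gap area (m2)
        return cd * area * math.sqrt(2.0 * dp_pa / rho)

    q = gap_leakage_m3s(gap_height_m=0.0005, gap_width_m=0.3, dp_pa=5.0e5)
    print(round(q * 1000.0, 2), "L/s through a 0.5 mm gap")  # ~2.85 L/s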
Yongdong Meng et al. (2019) proposed a field detection method to locate dam leakage inlets. It is vital to
determine the placement of leakage inlets on the dam's upstream face, because dam leakage usually results in engineering
hazards such as increased uplift pressure and aggravated frost-heaving and freeze-thaw damage. Based on numerical
simulation of the constant electric field present around the concrete dam body, they analyzed the relationship between the local
potential difference (current density) and leakage areas, and the influence of low-resistance material on the detection of leakage
areas. Numerical simulations show that, with on-site potential-difference data, inlets on the dam's upstream face can be detected
successfully. This new technique and its theoretical basis can be applied to the operational testing and safety management of
concrete dams. Practical application proves that leakage areas can be recognized accurately, and the results agree well with
existing hydraulic observation data and geological information. They suggest that electrical current field detection of reservoir
water leakage areas can be used as a reasonable and effective method to detect concrete dam leakage.
Z Y Yu and W Zhang (2012) developed a magnetic fluid seal for main shaft and guide vane sealing
applications in hydraulic turbines. The magnetic fluid seal offers low wear, low contact force, self-healing behaviour and a long
life; it is used to seal hydraulic turbines to control water wastage. The behaviour of the seal under different loads and flow
operation is analysed with ANSYS software. The material is used for main shaft and guide vane sealing applications to avoid
water leakage problems and increase the efficiency of the hydraulic turbine.
LITERATURE REVIEW FOR SEALING MATERIALS
Andrea Deaconescu et al. (2020), in their article, discussed a theoretical and experimental study focusing on the
tribological behaviour of coaxial sealing systems fitted on pistons of hydraulic cylinders. Current trends regarding hydraulic
cylinder sealing systems are aimed at reducing energy consumption, which can be achieved by reducing leakage and friction.
Anja Kommling et al. (2014) investigated the change in seal material properties during ageing; a comprehensive ageing
programme on uncompressed and compressed EPDM and HNBR seals was launched. In order to obtain results closely related
to working conditions, O-rings with a diameter of 10 mm were used. However, ageing of such seals can be inhomogeneous as
a result of diffusion-limited oxidation (DLO) effects, which depend on material, dimensions, time and temperature.
Inhomogeneous ageing distorts aggregate properties: compressive stress relaxation and compression set (CS) results suggest
that HNBR performs better than EPDM at 150 °C, but not at 100 °C. Hardness measurements across the seal cross-section
showed the presence of inhomogeneous material properties. Extrapolations of the CS data using time-temperature shifts and
Arrhenius plots are possible if the DLO-affected data are excluded.
Arnellseng et al. (2016) present nonlinear tension tests of three different elastomer compounds commonly used in the
oil and gas industry. The experiments were carried out at five temperatures ranging from −20 to 150 °C. Optical
measurements were used to obtain high-quality stress-strain data. The material samples were subjected to a cyclic
deformation history, which helped to investigate viscoelastic behaviour. A significant effect of temperature was found, with
stiffness and viscosity increasing at low temperatures. A distinct stress-strain response was observed for one of the
hydrogenated nitrile butadiene rubbers tested at low temperatures, and matrix-particle simulations qualitatively described this
behaviour. For tests performed at high temperatures, a significant number of the material samples failed.
Bin Lin et al. (2020) reinforced PTFE with a variety of ceramic particles and tested the physical properties
using several methods. Polytetrafluoroethylene (PTFE) performs well under marine lubrication owing to its low water
absorption, low friction and excellent corrosion resistance; however, its weak wear resistance restricts its use. The tribological
behaviour of the filled PTFE composites against super duplex steel was investigated under deionised water and seawater
lubrication, and the worn surfaces were examined by electron microscopy, energy-dispersive spectroscopy and 3D surface
scanning. All four types of ceramic fillers promoted the crystallinity of the PTFE matrix. Particle-reinforced PTFE showed
better tribological performance under seawater than under deionised water. The properties of the ceramic particles played a
significant role in the wear resistance of the PTFE composites: the breaking and shearing of Si3N4 or SiO2 particles can create
and aggravate abrasive wear between the friction pairs, whereas, with the excellent wear resistance of SiC particles and the
effect of seawater lubrication, the wear rate of that composite in seawater was reduced by 80% compared to pure PTFE. The
h-BN-filled PTFE showed good and stable tribological performance in marine and offshore waters, as the inter-layer shear
strength of h-BN is low.
Bo Tan & Lyndon Scott Stephens (2019), in their paper, focused on the viscoelastic response of PTFE-based
materials, which are widely used in tribological applications, especially seals and bearings, because of their excellent
self-lubricating properties. Using a Dynamic Mechanical Analyzer (DMA), the storage modulus, loss modulus and loss tangent
are measured and discussed in the frequency domain, while the relaxation modulus and creep compliance are measured and
studied in the time domain. Finally, the compositions of the materials are studied using an energy-dispersive X-ray spectrometer.
The test data are then fitted with Prony series and compared with the experimental data.
Girişta V (2019) developed a test method to quantify the effect of interference on sealing element
lifetime. A set of tests was carried out with thermoplastic polyurethane radial lip seals of three different inner diameters
(40, 39.80 and 39.60 mm), giving different lip interferences, with EP gear oil as the lubricant. To measure sealing performance
over the life cycle, the mass of leaked oil is measured. Meanwhile, to measure the friction of the sealing element, four load cells
are mounted in the cylinder housing of the test system; the friction force values are monitored by a data acquisition terminal and
the friction torque values are calculated. As a result, friction torque increases with seal interference, and an increase in
interference leads to a longer service life.
Gupta M K (2018) analyzed the water absorption behaviour of sisal composites and its effect on their tensile, flexural
and impact properties. Despite the many applications of natural fibre reinforced polymer composites, they are not used in
outdoor applications because they absorb excess water, which degrades their mechanical properties. The water absorption
parameters, namely the absorption, diffusion and permeability coefficients, were investigated. The composites were prepared
by the hand lay-up method with different fibre lengths (5, 10, 15 and 20 mm).
The fibre content of each composite was kept constant (20 wt%). The results showed a significant effect of water
absorption and fibre length on the mechanical properties of sisal composites. The mechanical properties were found to
increase up to a specific fibre length (15 mm) and then decrease. On the other hand, the mechanical properties of the sisal
composites were reduced after water absorption: the composites lost (10–18)%, (10–21)% and (16–30)% of their tensile
strength, flexural strength and impact strength, respectively.
Harish G et al. (2019) note that, with properties including self-lubrication, in-process cooling and low weight,
polymers play an essential role alongside the traditional bearing materials currently in use. Polytetrafluoroethylene (PTFE),
widely known as Teflon, is a miracle product in the class of polymers well suited to wear applications. The present work is
concerned with the tribological behaviour of hybrid polymers. A Taguchi orthogonal array is used to design the test layout,
and the wear and friction coefficients are measured with a pin-on-disc tribometer. The difference between the wear and
friction values obtained from the experimental and mathematical models is found to be less than 5%. From the regression
table, it is inferred that the wear rate depends mainly on the load, whereas the sliding distance is the most critical factor
affecting the coefficient of friction. SEM images were taken to study the surface morphology of the PTFE composites, and
the results show that the PTFE + graphite + MoS2 compound is more wear-resistant than PTFE + graphite and virgin PTFE.
Ismail J et al. (2016) present an approach to derive material properties of elastomers under compressive loading, based on
hyperelastic strain-energy formulations. The paper focuses on the isotropic compressive behaviour exhibited by elastomers and
derives strain energy functions that satisfy the characteristic properties of a hyperelastic model. Data obtained from compression
tests on nitrile rubber (NBR) samples were used as material input in the ABAQUS® analysis software. The least-squares fitting
technique was used to determine the coefficients of various standard hyperelastic models, based on the Drucker stability criteria
within the software. The strain energy functions obtained focus, in practical applications, on material parameters related to the
physical size of the material's molecular network. The approach benefits from its mathematical simplicity. Furthermore, a
sample verification procedure using a stepwise method for parameter estimation is illustrated. The task here is a nonlinear finite
element modelling process that leads to an optimal solution and can be applied not only to elastomeric seals but also to
components with similar engineering properties.
Juergen Hieber et al. (2013) proposed a thermoplastic polyurethane obtained from an isocyanate with (a) a polyol component
(b) comprising at least one polyester diol (P1), at least one polyether diol (P2) and at least one polycarbonate diol (P3), each
with a molar mass of 500 to 5000 g/mol, together with at least one diol with a molar mass of 62 to 500 g/mol. Thermoplastic
polyurethane mouldings can, in particular, be used to manufacture seals, fasteners, valves and profiles. Polyurethane has
exceptional mechanical and chemical properties.
Junho Bae & Koo-Hyun Chung (2017) ran tests with a polyurethane (PU) hydraulic seal tester and a pin-on-plate reciprocating
tribo-tester, and compared the results with field data to develop an accelerated wear test method for hydraulic seals. Both the
hydraulic seal tester and the pin-on-plate reciprocating tribo-tester were found to reproduce the abrasive wear of PU observed
in the field; however, a significant compression set was observed in the test using the hydraulic seal tester. Motivated by the
occurrence of abrasive wear in the field, degraded (discoloured) lubricant and lubricant with alumina particles were further used
for testing with the pin-on-plate reciprocating tribo-tester. The height-decrease data of the sealing surface showed that wear was
accelerated by factors of 2.1–3.4 with these degraded lubricants. The outcomes of this work are expected to aid in the design of
reliable accelerated life testing for hydraulic seals.
Kasey L. Campbell et al. (2019) investigated the role of water in the tribochemical mechanisms of ultralow-wear
polytetrafluoroethylene (PTFE) composites, studying PTFE filled with 10 and 20 wt% polyether ether ketone (PEEK) and
5 wt% Al2O3. These composites were slid against steel substrates in humid air, water and dry nitrogen environments. The
results showed that the wear behaviour of the two composites was significantly affected by the sliding environment. Both
composites achieved remarkably low wear rates in humid air owing to carboxylate end groups that formed tribochemically
anchored polymer transfer films on the steel substrate. In nitrogen, PTFE–PEEK performed better than PTFE–Al2O3 because
the polar carboxyl groups in PEEK increased its surface energy, leading to adhesion to the substrate and consequently a
transfer film. Both composites exhibited more wear in water: water saturated the functional groups at the ends of the polymer
chains and prevented the formation of the transfer film.
Meenalochani K S & Vijayasimha Reddy B G (2017) investigated water absorption behaviour of composites that are
superior to conventional materials due to their high strength to weight ratio. Of late, because of environmental concerns,
natural fibres are finding their places in composites. However, their hygroscopic nature affects the mechanical properties of
the composites adversely. Applications of composites for decking, flooring and outdoor facilities have made it necessary to
evaluate the water uptake characteristics of natural fibre composites. The objective of this paper is to review the water
absorption behaviour, its effect on mechanical properties and the efforts to reduce the absorption of various natural fibre
reinforced composites.
Mofidi M & Prakas M (2008) investigated the frictional behaviour of four sealing elastomers, an acrylonitrile-
butadiene rubber (NBR), a hydrogenated acrylonitrile-butadiene rubber (HNBR), an acrylate rubber (ACM) and a
fluoroelastomer (FKM), sliding against a steel surface under unidirectional lubricated conditions. The lubricant used was a
paraffinic oil with no additives, and the experiments were conducted in a block-on-ring test configuration. The friction
coefficients of the elastomers were measured at different sliding velocities in the boundary and fluid-film lubrication regimes.
In the first part of each test the sliding velocity varied from low to high values; in the second part it varied from high to low
values, repeating the same conditions in reverse order. The results show that the friction coefficients at low speeds differ
between the two parts, which can be due to oil absorption or possibly the dissolution of some elastomer constituents in the oil.
The NBR and the ACM were the least and the most affected elastomers, respectively. The friction coefficients of NBR and
ACM at low speeds decreased in the second part of the tests, whereas those of HNBR and FKM increased.
Recent advances in materials and seal system geometry, together with new simulation capabilities, allow hydraulic cylinders
to reach maximum efficiency. Hydrodynamic separation of the sliding and sealing surfaces already occurs at very low speeds, and the use
of materials such as plastomers based on polytetrafluoroethylene (virgin and filled PTFE) makes reduced friction possible. A
method is presented for theoretically determining the lubricant film thickness between the cylinder piston and the seal. The test
installation used to measure fluid film thickness is described, and the results obtained under different working conditions of
pressure, speed and temperature are compared with the theoretical results. The paper concludes with criteria for selecting the
optimal seal material to maximize energy efficiency.
Sachin Salunkhe & Pavan Chandankar (2018) investigated polytetrafluoroethylene (PTFE), one of the most
important polymer-based engineering materials. PTFE has many applications in aerospace, the food and beverage industry,
pharmaceuticals and telecommunications. In this paper, they investigated the effect of sliding distance, load and filler content
on the wear of PTFE using a pin-on-disc test rig. A comparative analysis of three composites (PTFE + 30% carbon,
PTFE + 30% bronze and PTFE + 30% glass) is provided. Commercially pure PTFE has a high wear rate, which can be reduced
by using fillers; the tests were run on the pin-on-disc rig at constant sliding speed for a fixed time of 15 minutes. The results
revealed that pure PTFE has a higher wear rate than the composite PTFE materials.
Shiguang Peng et al. (2019) investigated self-lubricating polymer-based composite coatings with a low
coefficient of friction (COF) and high wear resistance. A core-shell system was introduced via seed emulsion polymerization,
and the effect of a polymethylmethacrylate (PMMA) shell encapsulating polytetrafluoroethylene (PTFE) on wear resistance
and lubrication under dry friction was investigated. The experimental results revealed that the wear rate of the PTFE coatings
decreased from 232 × 10−6 mm3/(N·m) to 1.04 × 10−6 mm3/(N·m), and the COF decreased from 0.081 to 0.069. The results
show that during the sliding tests a continuous, uniform and thin PTFE/PMMA composite film is transferred to the GCr15
steel ball surface. It is demonstrated that the enhanced lubrication efficiency is related to the dispersion effect of the core-shell
structure, which has an appropriate size in the nanometre range, giving the composite higher uniformity and better dispersion
of the reinforcement phase.
Shuaishuai Zeng et al. (2020) tried to improve the wear resistance of the PTFE-based friction material used in
ultrasonic motors; 1-dimple and 3-dimple structures were fabricated based on observation of the transfer materials on the stator
surface. The worn surface and wear debris were observed to investigate the impact of the textures on wear behaviour, and the
performance of the motor was tested. The wear rate of the PTFE material sliding against the 3-dimple stator decreased by 52.5%,
the average speed of the motor improved by 20.6%, and the maximum efficiency with the 3-dimple stator reached 32.7%. The
beneficial effect of the dimples is to interrupt the contact surface, reducing the adhesive force between the stator and the PTFE
material, which prevents hard adhesive wear of the PTFE-based material and improves motor performance.
Sneha Remena and Arthesh Basak (2017) reviewed hyper-elastic materials for various machinery seals and different
kinds of field operations. These rubber-like materials show an exceptional stress-strain relationship, strain-rate independence,
high elasticity, resilience, breaking strength, excellent wear resistance and elongation at break. They are also used for dampers,
conveyor belts and impact absorbers; the material behaviour and the resulting stress-strain relationship and elongation are
inspected with ANSYS 16.2.
Stefan Bokern et al. (2015) proposed a method of making polyurethane seals in which (a) an aliphatic polyisocyanate, (b)
polymeric compounds with at least two isocyanate-reactive groups, (c) catalysts, (d) antioxidants and light stabilizers and,
optionally, (e) blowing agents and (f) further auxiliary agents and additives are mixed to form a reaction mixture, which is
allowed to react to completion to form the polyurethane. The polyurethane seal has a hardness of less than 90 and a density of
at least 850 g/L. The polymeric compounds with isocyanate-reactive groups include P1) polymers with a functionality of 2 to 4
and hydrophobic polyols with a hydroxyl number of 20 to 100 mg KOH/g, P2) polyols with a hydroxyl number of less than
180 mg KOH/g and a functionality of 2 to 3, and P3) chain extenders.
Wang C et al. (2018) investigated seals in dynamic applications and found that fretting was one of the most
common types of failure. The fretting behaviour of thermoplastic polyurethane (TPU) was investigated. The hysteresis
behaviour, wear scar and coefficient of friction were analyzed in detail. Various wear mechanisms and their influences on fretting
behaviour were examined. A new method was introduced to calculate the coefficient of friction for late cycles in the gross slip
regime, taking into consideration the surface damage and the influence of the test system. In addition, the dependency of
fretting behaviour on displacement amplitude and normal load was investigated.
Wataru Aoki et al. (2018) proposed a polyurethane elastomer foam material consisting of a polyisocyanate
component (A) and a polyol component (B), in which the polyisocyanate component (A) contains
1,4-bis(isocyanatomethyl)cyclohexane (A1) and a polyether diol (A2) whose main chain is a straight-chain oxyalkylene group
with 3 to 4 carbon atoms, while the macromolecule (P1) in the polyol component (P) contains straight-chain or branched
alkylene groups with 2 to 6 carbon atoms.
Xuming Chen et al. (2019) investigated the relationship between extrusion gap and extrusion resistance for two
grades of HNBR elastomers. Elastomer seals for oil and gas equipment in high-pressure high-temperature (HPHT) applications
are critical, and their mechanical design must consider the relationship between tear pressure and extrusion gap. Other
fundamental properties of the two elastomers, such as hardness, modulus and creep resistance, were studied to better
understand the effect of extrusion resistance on these fundamental properties. The relationship between critical tear pressure
and extrusion gap was then obtained by combining the relationship between extrusion resistance and extrusion gap with the
critical tear pressure measured in the API high-pressure test. The testing method is simple and compares reasonably with
published contact results in the literature.
Yajian Wang et al. (2019) investigated the viscoelasticity of ethylene-propylene-diene monomer (EPDM) during
its service life, which is essential for assessing and predicting its waterproofing performance in underground infrastructure. The
viscoelasticity of a polymer is closely related to its free volume, and both properties depend on many factors such as
temperature, stress level and strain level. Using free volume as a proxy for viscoelasticity to explore the underlying viscoelastic
behaviour of EPDM, this article examines the impact of temperature, stress and strain, and their combined effect, through
molecular dynamics (MD) simulations. A cross-linked EPDM molecular model was constructed and validated by comparing the
simulated values of the glass transition temperature, the mechanical properties and the gas diffusivity with experimental results
reported in the literature. Subsequently, the dependence of the fractional free volume of EPDM on temperature, strain and their
combined effect was investigated by MD simulations, based on the applicability of various superposition principles.
Yakovlev S N (2019), in his article, provides a detailed description of a test device for studying the wear of
lip seals running against a rotary shaft. Empirical dependencies over the service life are given to estimate the extent of the
wear of rubber and polyurethane lip seals.
Yi Han et al. (2019) investigated the wear behaviour and coefficient of friction of nano-SiO2-filled
polytetrafluoroethylene (PTFE) under dry conditions. The ball indentation hardness, tensile strength and elongation at break
were determined; both the ball indentation hardness and the tensile properties were found to be strongly enhanced by the
nano-SiO2 particles. The tribological properties of the nano-SiO2 PTFE composite and conventional SiO2 PTFE composites
were compared for several applied loads, and the worn surfaces were investigated by scanning electron microscopy (SEM)
and 3D profilometry. The wear rate of PTFE composites filled with nano-SiO2 particles was significantly reduced, and the
frictional forces were lower for the nano-SiO2-filled composites than for the conventional particle composites under the same
experimental conditions.
CONCLUSIONS
In this literature review, the research articles describe the impact of water leakage in hydropower plants and
pumped storage hydropower plants, and the articles on sealing materials describe the characteristics and properties of those
materials. While using Nitrile rubber as a guide vane end sealing material, the power generation loss was 14,872 MW and the
water loss was 4,597 m3/s per annum. Nitrile Butadiene Rubber usage incurred a loss of 9,492 MW and a water loss of
3,164 m3/s. Similarly, the Hydrogenated Nitrile Butadiene Rubber seal gave a loss of 1,490 MW and a water loss of 634 m3/s.
The Ethylene Propylene Diene Monomer and Polyurethane end sealing materials used in guide vanes yield minimal losses of
976 MW and 57 MW, respectively; the corresponding water losses were calculated as 507 m3/s and 57 m3/s for these materials
in guide vane operation.
REFERENCES
• Aditya Gupta, Sudhir Mishra, Neeraj Bokde & Kishore Kulat 2016, 'Need of smart water systems in India', International Journal of Applied Engineering Research, vol.11, no.4, pp.2216-2223.
• Andrea Deaconescu & Tudor Deaconescu 2020, 'Tribological behavior of hydraulic cylinder coaxial sealing systems made from PTFE and PTFE compounds', Polymers, vol.12, no.1, pp.155-161.
• Anja Kommling, Matthias Jaunich & Dietmar Wolff 2016, 'Effects of heterogeneous ageing in compressed HNBR and EPDM O-ring seals', Journal of Polymer Degradation and Stability, vol.126, pp.39-46.
• Armando Carravetta, Lauro Antipodi, Umberto Golia & Oreste Fecarotta 2017, 'Energy saving in a water supply network by coupling a pump and a pump as turbine (PAT) in a turbopump', Journal of Water, vol.6, no.8, pp.158-167.
• Arnellseng, Bjorn H Skallerud & Arild H Clausen 2016, 'Tension behavior of HNBR and FKM elastomers for a wide range of temperatures', Journal of Polymer Testing, vol.49, pp.128-136.
• Bakken, T H, Killingtveit, Å, Engeland, K, Alfredsen, K & Harby, A 2013, 'Water consumption from hydropower plants - review of published estimates and an assessment of the concept', Journal of
Hydrology and Earth System Sciences, vol.17, no.10, pp.164-170.
• Bin Lin, Anying Wang, Tianyi Sui, Chibin Wei, Jinhua Wei & Shuai Yan 2020, 'Friction and wear resistance of polytetrafluoroethylene-based composites reinforced with ceramic particles under aqueous environment', Journal of Surface Topography: Metrology and Properties, vol.8, no.1, pp.142-149.
• Bo Tan & Lyndon Scott Stephens 2019, 'Evaluation of viscoelastic characteristics of PTFE-based materials', Journal of Tribology International, vol.140, no.10, pp.198-209.
• Booma, J, Mahadevan, K & Kanimozhi, K 2018, 'Perspectives of hydropower plants and pumped storage system in Tamilnadu', International Journal of ChemTech Research, vol.11, no.4, pp.145-152.
• Central Board of Irrigation and Power 2018, Annual Power House Maintenance Information in India. Available from: <http://www.cbip.org/>. [April 2018].
• Central Electricity Authority of India 2019, Electricity Generation in India Information. Available from: <http://www.cea.nic.in/>. [December 2019].
• Central Electricity Board 2019, Renewable Energy Details Information in India. Available from: <http://ceb.mu/>. [March 2019].
• Central Electricity Regulatory Commission 2019, Electricity Demand Information in India. Available from: <http://www.cercind.gov.in/>. [September 2019].
• Cuthbert Z Kimambo, Chiyembekezo S Kaunda & Torbjorn K Nielsen 2012, 'Hydropower in the context of sustainable energy supply: a review of technologies and challenges', Journal of International Scholarly Research Notices, vol.20, pp.462-477.
• Delhi Electricity Regulatory Commission 2019, Northern Region Power Generation Information in India. Available from: <http://www.derc.gov.in/>. [August 2019].
• Gianfredi Mazzolani, Luigi Berardi, Daniele Laucelli, Antonietta Simone, Riccardo Martino & Orazio Giustolisi 2017, 'Estimating leakage in water distribution networks based only on inlet flow data', Journal of Water Resources Planning and Management, vol.143, no.6, pp.186-193.
• Gideon Johannes Bonthuys, Marco van Dijk & Giovanna Cavazzini 2019, 'Leveraging water infrastructure asset management for energy recovery and leakage reduction', Journal of Sustainable Cities and Society, vol.46, DOI: 10.1016/j.scs.2019.101434.
• Giovanna Cavazzini, Juan I Perez-Diaz, Francisco Blazquez, Carlos Platero, Jesus Fraile-Ardanuy, Jose A Sanchez & Manuel Chazarra 2017, Chapter 2: Pumped-Storage Hydropower Plants, The New Generation. Available from: Energy Storage Book. [June 2007].
• Girista, V, Kaya, I & Parala, Z 2019, 'The effect of interference on the leakage performance of rotary lip seals', International Journal of Environmental Science and Technology, vol.16, no.9, pp.5275-5280.
• Gizem Okyay 2010, Utilization of CFD Tools in the Design Process of a Francis Turbine. Master of Science thesis, The Graduate School of Natural and Applied Sciences of Middle East Technical University, Turkey.
• Gupta, M K 2018, 'Water absorption and its effect on mechanical properties of sisal composite', Journal of the Chinese Advanced Material Society, vol.6, no.4, pp.561-572.
• Hans Kristian Hofsrud 2017, Load Deflection Characteristics of Guide Vane End Seals. Ph.D thesis, Norwegian University of Science and Technology.
• Hao Zhang, Diyi Chen, Beibei Xu, Edoardo Patelli & Silvia Tolo 2018, 'Dynamic analysis of a pumped-storage hydropower plant with random power load', Journal of Mechanical Systems and Signal Processing, vol.100, pp.524-533.
• Hariprasad Neopane, Ole Gunnar Dahlhaug & Michel Cervantes 2011, 'Sediment erosion in hydraulic turbines', Global Journal of Researches in Engineering, vol.11, no.6, pp.17-26.
• Harish, G, Harsha Vardhan, P & Deepthi, Y P 2019, 'A statistical analysis to optimize wear properties of hybrid polymer PTFE composites', Proceedings of Trends in Mechanical Engineering Conference, pp.613-617.
• Ingunn Norang 2015, Pump Storage Hydropower for Delivering Balancing Power and Ancillary Services.
Master of Science thesis, Norwegian University of Science and Technology, Norway.
• Ismail, J, Abubakar, Peter Myler & Erping Zhou 2016, 'Constitutive modelling of elastomeric seal material under compressive loading', Journal of Modeling and Numerical Simulation of Material Science, vol.6, no.6, pp.28-40.
• J G P Andrade, P S F Barbosa, E Luvizotto Jr, S Zuculin, M A R R C Pinto & G L Tiago Filho 2014, 'Analysis of pumped storage plants (PSP) viability associated with other types of intermittent renewable energy', International Journal of Earth and Environmental Science, vol.15, DOI: 10.1088/1755-1315.
• Jianru Yan, Zhitao Zuo & Wenbin Guo 2019, 'Influences of wear-ring clearance leakage on the performance of a small-scale pump-turbine', Journal of Applied Geophysics, vol.160, pp.245-253.
• Jonathan G Sanchez 2012, Improving Efficiency and Capacity of Hydro-Turbines in the Western United States. Master of Science thesis, University of Nevada.
• Juergen Hieber, Edgar Freitag, Martin Franz Goerres, Mathias Burkert, Gonzalo Barillas & Juergen Jaeckel 2013, Thermoplastic Polyurethane for Seal Application, US Patent 10227443B2.
• Junho Bae & Koo-Hyun Chung 2017, 'Accelerated wear testing of polyurethane hydraulic seal', Journal of Polymer Testing, vol.63, pp.110-117.
• Javier Menendez, Jesus M Fernandez-Oro, Monica Galdo & Jorge Loredo 2020, 'Efficiency analysis of underground pumped storage hydropower plants', Journal of Energy Storage, vol.28, pp.1965-1974.
• Kasey L Campbell, Mark A Sidebottom, Cooper C Atkinson, Tomas F Babuska, Claudia A Kolanovic, Brian J Boulden, Christopher P Junk & Brandon A Krick 2019, 'Ultra-low wear PTFE-based polymer composites - the role of water and tribochemistry', Journal of American Chemical Society, vol.10, no.5, pp.7-15.
• Meenalochani, K S & Vijayasimha Reddy, B G 2017, 'A review on water absorption behaviour and its effect on mechanical properties of natural fibre reinforced composites', International Journal of Innovative Research in Advanced Engineering, vol.4, no.4, pp.143-147.
• Ministry of Power 2019, Annual Power Generation Report in India. Available from: <http://www.powermin.nic.in/>. [May 2019].
• Mofidi, M & Prakas, M 2008, 'Influence of counterface topography on sliding friction and wear of some elastomers under dry sliding conditions', Proceedings of the Institution of Mechanical Engineers, Part J: Journal of Engineering Tribology, vol.222, no.5, pp.667-673.
• Nand Kishor, R P Saini & S P Singh 2007, 'A review on hydropower plant models and control', Journal of Renewable and Sustainable Energy Reviews, vol.11, no.5, pp.776-784.
• Nicola Frau 2017, Impact of Guide Vane Clearance Gap in a Francis Turbine Flow: Comparison between Experimental and Computational Data. Master of Science thesis, University of Padua.
• Northern Regional Power Committee 2019, Hydropower Generation and Powerhouse Construction Commissioning in India. Available from: <http://nrpc.gov.in/>. [May 2019].
• Power Technology 2019, Pumped Hydropower Plants Details Information in India. Available from: <http://www.power-technology.com/>. [March 2018].
• Ravi Koirala, Baoshan Zhu & Hariprasad Neopane 2016, 'Effect of guide vane clearance gap on Francis turbine performance', International Journal of Energies, DOI: 10.3390/en9040275.
• Samuel Ayanrohunmu Olusegun Ilupeju 2015, Design, Modelling and Optimisation of Isolated Small Hydropower Plants using Pumped Storage Hydropower and Control Techniques. Ph.D thesis, University of KwaZulu-Natal, Durban, South Africa.
• Sachin Salunkhe & Pavan Chandankar 2018, 'Friction and wear analysis of PTFE composite material', Proceedings of Innovative Design, Analysis and Development Practices in Aerospace and Automotive Engineering Conference, pp.415-425.
• Shiguang Peng, Lin Zhang, Guoxin Xie, Yue Guo, Lina Si & Jianbin Luo 2019, 'Friction and wear behaviour of PTFE coatings modified with poly(methyl methacrylate)', Journal of Composites Part B: Engineering, vol.172, pp.316-322.
• Shuaishuai Zeng, Jinbang Li, Ningning Zhou, Jiyang Zhang, Aibing Yu & Huabo He 2020, 'Improving the wear resistance of PTFE based friction material used in ultrasonic motors by laser surface texturing', Journal of Tribology International, vol.141, DOI: 10.1016/j.triboint.2019.105910.
• Sneha Remena & Arthesh Basak 2017, 'Comparative study of hyper-elastic material models', International Journal of Engineering and Manufacturing Science, vol.7, no.2, pp.149-170.
• Social Economic Statistical Data & Facts About India 2019, Economic Status Information in India. Available from: <http://www.indiastat.com/>. [June 2019].
• Stefan Bokern, Pierre Coppens, Patrick Boize, Thomas Mathieu & Carola Mellon 2015, Aging-Resistant Polyurethane Seal, US Patent 10370480B2.
A Performance Comparison of MobileNet and VGG16 CNN Models in Plant Species Identification

Gargi Chandrababu, Department of CSE, College of Engineering Kidangoor, gargichandrababu@gmail.com
Ojus Thomas Lee, Department of CSE, College of Engineering Kidangoor, ojusthomaslee@gmail.com
Rekha KS, Department of CSE, College of Engineering Kidangoor, rekha.swapnesh@gmail.com
Abstract
Plants are the building blocks of the ecosystem and the major source of ayurvedic medicine, so accurate
identification of plant species is an important task. There are huge varieties of plant species that look
identical, making them difficult to differentiate, and finding the appropriate herb among thousands of
herbs is an exhausting and time-consuming mission. Automated plant species identification systems help to
resolve this difficulty to a large extent. Our project aims to develop an automated plant identification system
using two pre-trained CNN models, MobileNet and VGG16. The developed models are compared
based on classification accuracy, precision, recall, F1-score and cross-entropy loss. Our studies have
found that MobileNet outperforms VGG16 in the plant identification task.
Keywords— VGG16, MobileNet, CNN, Fine-tuning
INTRODUCTION
Plants are really important for the planet and for all living things on it. The pleasant breathable climate that
keeps us alive, the fuel that drives our energy requirements and the medicines that heal us are all blessings of plants.
Huge varieties of plant species are present all over the world, so it is impossible and impractical for a botanist or an
expert to identify every species. Some species are nearly identical, and it takes a long time to differentiate between
them. In addition, many plants are facing extinction or have already become extinct. Hence, there is a need to develop
an automatic plant recognition system. Such a system is not only useful for general use but also helpful to experienced
botanists and plant ecologists.
We use deep learning for the identification of plant species. Deep learning is a sub-field of artificial
intelligence (AI) that is widely employed for classification and recognition tasks. Deep learning architectures are
based on neural networks, commonly divided into Convolutional Neural Networks (CNNs), Artificial Neural Networks
(ANNs) and Recurrent Neural Networks (RNNs). CNNs are used to solve problems involving image data; ANNs can
also handle image data but are mostly used for problems involving text data; RNNs are widely used for time-series
data. In our work, we compare two pre-trained CNN models used for plant species identification.
In deep learning, a model can be developed to perform a particular task. Such models are pre-trained with
large datasets. Using transfer learning, a model developed for one task can be reused to solve a second, related task; that is,
the knowledge gained from solving one problem is applied to a new but related problem. Transfer learning is a
shortcut that saves time and assures better performance. There are many deep learning models, such as VGG16, VGG19,
ResNet50, MobileNet, Xception and InceptionV3. In this project, we used two pre-trained models, VGG16 and
MobileNet, to classify plant species by applying the fine-tuning technique. Fine-tuning is a way of applying transfer
learning that not only updates the architecture but also retrains it to learn new object classes. We build the plant
identification system with VGG16 and MobileNet and compare the performance of each system using evaluation
metrics such as precision, recall, classification accuracy and F1-score.
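A minimal sketch of this two-stage fine-tuning workflow in Keras follows; the class count, frozen-layer split and learning rates are illustrative assumptions, not the exact settings of our experiments:

    import tensorflow as tf

    NUM_CLASSES = 40  # hypothetical number of plant species

    # Load MobileNet pre-trained on ImageNet, without its 1000-way head.
    base = tf.keras.applications.MobileNet(weights="imagenet",
                                           include_top=False,
                                           input_shape=(224, 224, 3))
    base.trainable = False  # stage 1: train only the new classifier head

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    # model.fit(train_ds, validation_data=val_ds, epochs=10)

    # Stage 2: unfreeze the top of the base and retrain at a low rate.
    base.trainable = True
    for layer in base.layers[:-20]:
        layer.trainable = False
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    # model.fit(train_ds, validation_data=val_ds, epochs=10)

The same skeleton applies to VGG16 by swapping tf.keras.applications.MobileNet for tf.keras.applications.VGG16.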

RELATED WORKS

Many state-of-the-art methods employ MobileNet [7] and VGG16 [14] to build plant species
identification systems. Roopashree S et al. [11] proposed a new CNN model named DeepHerb and created a medicinal
leaf dataset consisting of 2515 medicinal leaf images from 40 different Indian species. The efficiency of the
dataset is evaluated by comparing pre-trained deep CNN architectures such as InceptionV3, VGG16, VGG19 and
Xception. In this work, transfer learning is applied to the pre-trained models to perform feature extraction, with
classification by an Artificial Neural Network (ANN) and a Support Vector Machine (SVM). The work consists of
four phases: data sampling, image pre-processing and segmentation, feature extraction and, finally, classification. They
proposed four kinds of source models, CNN-InceptionV3, CNN-Xception, CNN-VGG16 and CNN-VGG19, for
feature extraction, and three target models, ANN, SVM and SVM+BO, for classification. The proposed
DeepHerb model gives 97.5% accuracy by learning from Xception and ANN.
Varghese, B K et al. [16] proposed an Android application called INFOPLANT to identify plants using a CNN
(Convolutional Neural Network). A transfer-learned MobileNet model is used in this system. The model is trained
with a customized dataset and is converted and stored as a .tflite file. The application predicts the input plant image
with the tflite model; after prediction, it checks all the labels and finds the label with the maximum probability, which
gives the plant name as the output. The output is then connected to Firebase and includes details such as the biological
name, common name, location, nutrient requirements and medicinal value of the plant. The proposed model achieved
a prediction accuracy of 99% and a validation accuracy of 95%.
Beikmohammadi, A et al. [2] proposed a method that uses transfer learning for plant leaf classification.
The proposed method uses the MobileNet model as a feature extractor. By comparing this method with other
state-of-the-art methods, it has been shown that, with the help of transfer learning, one can save time and
computational resources and succeed in learning a new task from a limited training dataset. In addition, the
proposed method works directly with RGB images and thus eliminates the need for pre-processing and hand-crafted
feature extraction. The efficiency of the proposed method was evaluated on two botanical datasets, Flavia and
Leafsnap, achieving precisions of 99.6% and 90.54%, respectively.
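For contrast with fine-tuning, a frozen feature-extractor pipeline in this spirit can be sketched as follows (the random arrays stand in for a real leaf dataset, and the logistic-regression head is our illustrative choice, not necessarily the classifier used in [2]):

    import numpy as np
    import tensorflow as tf
    from sklearn.linear_model import LogisticRegression

    # MobileNet as a frozen feature extractor: no convolutional layer is
    # retrained; a simple classifier is fitted on the pooled features.
    extractor = tf.keras.applications.MobileNet(weights="imagenet",
                                                include_top=False,
                                                pooling="avg",
                                                input_shape=(224, 224, 3))
    images = np.random.rand(32, 224, 224, 3).astype("float32")  # placeholder data
    labels = np.random.randint(0, 4, size=32)                   # placeholder labels

    features = extractor.predict(images)  # shape (32, 1024)
    clf = LogisticRegression(max_iter=1000).fit(features, labels)
    print("training accuracy:", clf.score(features, labels))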
Jasitha P et al. [8] proposed venation-based plant leaf classification using fine-tuned GoogleNet and VGG16.
The GoogleNet and VGG16 models are trained and tested with the Flavia, Leaf1 and DLeaf datasets, with a Support
Vector Machine (SVM) as the classifier. The proposed CNN pipeline consists of three phases: image pre-processing,
feature extraction and classification. Fine-tuned GoogleNet achieved 99.2% validation accuracy on the Leaf1 dataset
using the SVM classifier.
Habiba, S U et al. [6] proposed a Bangladeshi plant recognition system using deep convolutional neural
networks as classifiers and a transfer learning approach. The dataset was prepared using eight different plant species,
and they experimented with the VGG16, VGG19, InceptionV3, Inception-ResnetV2, Xception and Resnet50 deep
CNN models. The training set consists of 80% of the data and the testing set of the remaining 20%. Among the deep
CNN models, VGG16 acquired the highest classification accuracy, almost 96%.
Singh G et al. [15] built a model on VGG16. They used 70% of the Flavia dataset for training and 30% for
testing. The accuracy is above 95% not only for undamaged leaves but also for leaves with 30% damage. In
this model they used several pre-processing and feature extraction techniques, such as multi-scale
technology, noise removal, edge recognition, contrast searching, cropping and standardization, binary thresholding,
the Grey-Level Co-occurrence Matrix (GLCM) and Fourier descriptors. Some advanced techniques were also adopted
for the testing dataset, such as CNN with VGG16 layers, linear and logistic regression, feature extraction
techniques and noise removal techniques (erosion and dilation).
Pechebovicz, D et al. [10] proposed a mobile application that recognizes Brazilian medicinal plants. They
implemented a CNN model to perform the recognition task, using the MobileNetV2 architecture with its
convolutional layers frozen. Since the main objective of this work is to run the network model on a mobile device,
they used the TensorFlow Lite framework. The dataset is created with images downloaded from the internet using the
google-images-download program [17]. 80% of the dataset is used as the training set and 20% as the validation set.
MobileNetV2 gives better results as the number of epochs is increased from 10 to 30. To analyze the behaviour of
the model, the number of training epochs was reduced to 20 and the number of fine-tuning epochs increased from the
initial 5 to 20.
Paulson Anu et al. [9] proposed a system to identify medicinal plant species. They compared the performance of a
CNN and the pre-trained models VGG16 and VGG19 on the leaf identification problem. In their experiment the CNN
obtained a classification accuracy of 95.79%, while VGG16 and VGG19 achieved 97.8% and 97.6% accuracy
respectively, outperforming the CNN.
Gu Jian et al. [5] proposed a leaf recognition method that applies the transfer learning technique to the VGG16 deep
learning network. The plant recognition process consists of four steps: background whitening, normalization,
data augmentation and transfer learning. They used two datasets, the Middle European Woody plants (MEW) dataset
and the UCI Folio dataset, which give 93.4% and 97.9% accuracy respectively.
Akiyama et al. [1] developed a mobile application using CNNs that helps beginners recognize plant species.
They also compared three CNN models, namely VGG16, MobileNet and MobileNetV2. Among these three models,
MobileNetV2 achieved a 0.992 F1-score with a calculation time of 338.1 ms.

METHODOLOGY
MobileNet and VGG16 are two widely used pre-trained models for classification and prediction tasks. VGG16
won first place in the localization task and second place in the classification task of the ImageNet Large Scale
Visual Recognition Challenge (ILSVRC) in 2014. MobileNet, a small, low-latency CNN, is widely used in mobile
applications. Both models are trained on more than a million images from the ImageNet database. MobileNet is about
30 times smaller and 10 times faster than VGG16, while its accuracy is comparable. These pre-trained models can classify
images into 1000 categories. MobileNet consists of 28 layers [7] and VGG16 consists of 16 layers [14]. The
architectures of MobileNet and VGG16 are shown in Fig. 1 and Fig. 2 respectively.

Fig. 1. MobileNet Architecture [18]

Fig. 2. VGG16 Architecture [4]

A. SYSTEM ARCHITECTURE
The pre-trained VGG16 and MobileNet models undergo a process of fine-tuning, which adapts them to the dataset
used in our project. The dataset is then applied to the fine-tuned models and processed, and the results of this
processing are compared in the comparison module. The system architecture is shown in Fig. 3.

Fig. 3. System Architecture (plant leaf dataset fed to the MobileNet model and the VGG16 model, whose results go to the comparison module)

B. FINE TUNING MOBILENET


Counting depthwise and pointwise convolutions as separate layers, MobileNet has 28 layers. Our model consists of
the original MobileNet up to its sixth-to-last layer; the last five layers of the original network are not
included, as we found through experimentation that removing them works well for this particular task. The last
layers of the fine-tuned MobileNet model are shown in Fig. 5. We then append a new output layer with 14 output
nodes, one per plant class. We found that training only the last 23 layers gives a reasonably well-performing
model. The block diagram of the MobileNet model is illustrated in Fig. 4, and a sketch of this fine-tuning is given below.
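
As an illustration, the following is a minimal Keras sketch of this fine-tuning. It is a sketch under stated assumptions rather than the authors' exact code: names such as train_batches are placeholders, and the exact layer index of the retained output can vary between Keras versions.

import tensorflow as tf
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Model

# Load ImageNet-pretrained MobileNet with its original 1000-class head.
base = tf.keras.applications.mobilenet.MobileNet()

# Keep the network up to its sixth-to-last layer (drops the old head;
# the exact index can vary between Keras versions).
x = base.layers[-6].output

# New 14-node softmax output layer, one node per plant species.
predictions = Dense(14, activation='softmax')(x)
model = Model(inputs=base.input, outputs=predictions)

# Freeze everything except the last 23 layers, as described above.
for layer in model.layers[:-23]:
    layer.trainable = False

model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
# model.fit(train_batches, validation_data=valid_batches, epochs=10)
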
Fig. 4. Block diagram of the MobileNet plant identification system (image acquisition, pre-processing, fine-tuned MobileNet model, classification)

Fig. 5. Last layers of the fine-tuned MobileNet model

C. FINE TUNING VGG16


VGG16 is a CNN with 16 layers. We trained the last three fully connected layers, with softmax activation on the
output, while keeping the other layers' weights the same as in the original VGG16 model. The last Dense layer of
VGG16 has 1000 outputs, corresponding to the 1000 categories in the ImageNet library; this is changed to 14 since
we are categorizing the species into 14 classes. The block diagram of VGG16 is shown in Fig. 6 and the summary of
the last layers in Fig. 7. A sketch of this fine-tuning follows.
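
A corresponding minimal Keras sketch, again illustrative rather than the exact training code:

import tensorflow as tf
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential

vgg = tf.keras.applications.vgg16.VGG16()   # ImageNet-pretrained VGG16

# Copy every layer except the original 1000-way output layer.
model = Sequential()
for layer in vgg.layers[:-1]:
    model.add(layer)

# Freeze all layers except the two fully connected layers fc1 and fc2.
for layer in model.layers[:-2]:
    layer.trainable = False

# New 14-way softmax output layer (trainable by default).
model.add(Dense(14, activation='softmax'))

model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
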

Fig. 6. Block diagram of the VGG16 plant identification system (image acquisition, pre-processing, fine-tuned VGG16 model, classification)

Fig. 7. Last layers of the fine-tuned VGG16 model

RESULTS AND DISCUSSION


Our experimental process is as follows. The plant dataset is split into three categories: training set, validation
set and testing set. The two models are trained with the same dataset and their performance is evaluated using
metrics such as classification accuracy, precision, recall and F1-score. In the experimental evaluation we use a
custom-built dataset of 14 different plant species, collected manually and from sources such as Mendeley Data [12]
[13] and the Folio dataset [3]. Fig. 8 shows the leaf images of the plant species selected for the classification
task. The 14 leaf species classes are Arjun, Basil, Betel, Coffee, Curry, Guava, Hibiscus, Jackfruit, Jasmine,
Lemon, Mango, Mint, Neem and Rose.
The 2505 images in the dataset are split in two ways:
1) training set (80%), testing set (15%) and validation set (5%)
2) training set (70%), testing set (20%) and validation set (10%)
A sketch of such a split is shown below.
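
For instance, the 80-15-5 split can be produced with two successive scikit-learn calls. This is a sketch: image_paths and labels are placeholder lists, and stratification by class is our assumption rather than the paper's stated procedure.

from sklearn.model_selection import train_test_split

# Placeholder data: one path and one class label per image.
image_paths = [f'img_{i}.jpg' for i in range(2505)]
labels = [i % 14 for i in range(2505)]

# First take 80% for training; 20% remains.
x_train, x_rest, y_train, y_rest = train_test_split(
    image_paths, labels, train_size=0.80, stratify=labels, random_state=42)

# Split the remaining 20% into 15% test and 5% validation of the whole.
x_test, x_val, y_test, y_val = train_test_split(
    x_rest, y_rest, train_size=0.75, stratify=y_rest, random_state=42)
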
Fig. 8. Selected plant species

The comparison is made in two categories.

1) Based on test set evaluation
80-15-5: MobileNet achieved 88.35% accuracy, 87.96% precision, 87.41% recall and 87.13% F1-score, while
VGG16 achieved 81.91% accuracy, 82.09% precision, 79.71% recall and 79.85% F1-score.
70-20-10: MobileNet achieved 90.95% accuracy, 90.48% precision, 89.93% recall and 90.10% F1-score, while
VGG16 achieved 86.08% accuracy, 85.99% precision, 84.30% recall and 84.51% F1-score.
2) Based on validation set evaluation
The validation accuracy of both models is above 90%, as shown in the graphs (Fig. 9 and Fig. 10). Cross
entropy loss measures the performance of a classification model whose output is a probability value between 0
and 1. MobileNet has much lower cross entropy loss than VGG16 in both the 80-15-5 and 70-20-10 cases.
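
For reference, categorical cross entropy can be computed as in the following minimal numpy sketch; this is illustrative and not part of the reported experiments.

import numpy as np

def categorical_cross_entropy(y_true, y_pred, eps=1e-12):
    # y_true: one-hot labels; y_pred: predicted class probabilities.
    y_pred = np.clip(y_pred, eps, 1.0)
    return -np.mean(np.sum(y_true * np.log(y_pred), axis=1))

# A confident correct prediction gives a low loss (about 0.105 here).
y_true = np.array([[0.0, 1.0, 0.0]])
y_pred = np.array([[0.05, 0.90, 0.05]])
print(categorical_cross_entropy(y_true, y_pred))
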
These results are obtained by training the models for 10 epochs with a batch size of 34. We can see that the
70-20-10 split gives better results than 80-15-5, and that MobileNet outperforms VGG16 in both categories. The
comparison of MobileNet and VGG16 is shown in Table I.

TABLE I
COMPARISON OF MOBILENET AND VGG16

Dataset Division  Model      Accuracy  Precision  Recall  F1-score
80-15-5           MobileNet  88.35%    87.96%     87.41%  87.13%
80-15-5           VGG16      81.91%    82.09%     79.71%  79.85%
70-20-10          MobileNet  90.95%    90.84%     89.98%  90.10%
70-20-10          VGG16      86.08%    85.99%     84.30%  84.51%

Fig. 9. Classification accuracy and cross entropy loss of VGG16 in 80-15-5 dataset and 70-20-10 dataset

Fig. 10. Classification accuracy and cross entropy loss of MobileNet in 80-15-5 dataset and 70-20-10 dataset
CONCLUSION
Identification of plant species is a difficult task due to the wide variety of plant species, and since some
species face the threat of extinction there is a need for an automatic plant identification system. The leaf is
easier to access than other plant parts, so leaves are used to identify plant species. The fine-tuning technique
makes the implementation of plant identification an easier task. In our system the pre-trained VGG16 and MobileNet
models are trained on the custom plant dataset, which is split into three categories: training set, validation
set and testing set. The two models are trained with the same dataset and their performance is evaluated using
metrics such as classification accuracy, precision, recall and F1-score; the validation accuracy and cross entropy
loss are also evaluated. Both MobileNet and VGG16 perform well in the plant identification task, but MobileNet
outperforms VGG16. VGG16 is also much bigger than MobileNet and takes more time to train. The accuracy of both
models in plant identification can be improved by increasing the number of epochs and the number of training
samples.

REFERENCES
[1] T. Akiyama, Y. Kobayashi, Y. Sasaki, K. Sasaki, T. Kawaguchi, and J. Kishigami. Mobile leaf identification
system using cnn applied to plants in hokkaido. In 2019 IEEE 8th Global Conference on Consumer
Electronics (GCCE), pages 324–325. IEEE, 2019.
[2] A. Beikmohammadi and K. Faez. Leaf classification for plant recognition with deep transfer learning. In 2018
4th Iranian Conference on Signal Processing and Intelligent Systems (ICSPIS), pages 21–26. IEEE, 2018.
[3] D. Dua and C. Graff. UCI machine learning repository, 2017.
[4] P. Gfg. Vgg-16 — cnn model. https://www.geeksforgeeks.org/vgg-16-cnn-model/. Accessed: 27 Feb, 2020.
[5] J. Gu, P. Yu, X. Lu, and W. Ding. Leaf species recognition based on vgg16 networks and transfer learning. In
2021 IEEE 5th Advanced Information Technology, Electronic and Automation Control Conference (IAEAC),
volume 5, pages 2189–2193. IEEE, 2021.
[6] S. U. Habiba, M. K. Islam, and S. M. M. Ahsan. Bangladeshi plant recognition using deep learning based leaf
classification. In 2019 International Conference on Computer, Communication, Chemical, Materials and
Electronic Engineering (IC4ME2), pages 1–4. IEEE, 2019.
[7] A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam.
Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint
arXiv:1704.04861, 2017.
[8] P. Jasitha, M. Dileep, and M. Divya. Venation based plant leaves classification using googlenet and vgg. In
2019 4th International Conference on Recent Trends on Electronics, Information, Communication &
Technology (RTEICT), pages 715–719. IEEE, 2019.
[9] A. Paulson and S. Ravishankar. Ai based indigenous medicinal plant identification. In 2020 Advanced
Computing and Communication Technologies for High Performance Applications (ACCTHPA), pages 57–63.
IEEE, 2020.
[10] D. Pechebovicz, S. Premebida, V. Soares, T. Camargo, J. L. Bittencourt, V. Baroncini, and M. Martins. Plants
recognition using embedded convolutional neural networks on mobile devices. In 2020 IEEE International
Conference on Industrial Technology (ICIT), pages 674–679. IEEE, 2020.
[11] S. Roopashree and J. Anitha. Deepherb: A vision based system for medicinal plants using xception features.
IEEE Access, 9:135927–135941, 2021.
[12] S. Roopashree and J. Anitha. Medicinal leaf dataset, Mendeley Data, v1, doi: 10.17632/nnytj2v3n5.1, 2020.
[13] S. Singh, A. Kaul, and U. P. Singh. A database of leaf images: practice towards plant conservation with plant
pathology, Mendeley Data, v1, doi: 10.17632/hb74ynkjcn.1, 2019.
[14] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv
preprint arXiv:1409.1556, 2014.
[15] G. Singh, N. Aggarwal, K. Gupta, and D. K. Misra. Plant identification using leaf specimen. In 2020 11th
International Conference on Computing, Communication and Networking Technologies (ICCCNT), pages 1–7.
IEEE, 2020.
[16] B. K. Varghese, A. Augustine, J. M. Babu, D. Sunny, and S. Cherian. Infoplant: Plant recognition using
convolutional neural networks. In 2020 Fourth International Conference on Computing Methodologies and
Communication (ICCMC), pages 800–807. IEEE, 2020.
[17] H. Vasa. Google images download. Link: https://google-images-download.readthedocs.io/en/latest/index.html.
[18] W. Wang, Y. Li, T. Zou, X. Wang, J. You, and Y. Luo. A novel image classification approach via dense-
mobilenet models. Mobile Information Systems, vol. 2020, article ID 7602384, 8 pages, 2020.
A Novel Method of Rice Disease Detection Based on Deep Learning

Reenu Susan Joseph, Department of CSE, College of Engineering Kidangoor, reenusj95@gmail.com
Nisha C A, Department of CSE, College of Engineering Kidangoor, nishaca@gmail.com
Asha Vijayan, Department of CSE, College of Engineering Kidangoor, ashavijayan86@gmail.com

Abstract
Rice is the major food crop in India. 70% of the Indian economy depends upon agricultural yield, and
37% of the rice yield is lost due to pests and diseases. Early detection of rice diseases will prevent huge
economic losses for farmers, and proper care can help them protect their rice crops from various
crop diseases. Diagnosing diseases manually is a difficult and time-consuming process, and deep learning
techniques are among the most promising for identification. Therefore, a method for detecting common rice
diseases (rice blast, bacterial blight, sheath blight, brown spot and tungro) based on deep learning is
proposed. An experimental study was conducted to identify the better model between VGG16 and ResNet18 for
rice disease detection, and the effect of augmentation on the models was also studied.

Keywords— Rice disease identification, CNN, deep learning, VGG16, ResNet18, transfer learning.
INTRODUCTION
India is the second-largest producer and the largest exporter of rice in the world, and rice production is closely
tied to the Indian economy and national development. Environmental pollution and the excessive use of pesticides
and fertilizers affect rice yield growth and production, so disease control is crucial for rice yield growth and
production. Rice diseases are caused by various pathogens and severely affect rice yield growth. Nowadays,
farmers use their own knowledge or approach a specialist to identify diseases, and this lack of knowledge entails
the inappropriate use of pesticides, which decreases the crop yield. The traditional method is an inaccurate and
time-consuming process. Early detection of rice disease is also essential in order to prevent the further
spreading of diseases. Recent advances in AI and deep learning technologies can be applied in agriculture, and
many researchers have proposed various methodologies in this field. However, we need to find out which method is
more suitable and addresses the specific problem efficiently. For this reason, we analyzed various models and
selected two of the best among them, VGG16 and ResNet18.
The most destructive and common diseases in Asia, and especially in India, namely bacterial blight, brown spot,
rice blast, sheath blight and tungro, were selected for our classification system. The dataset consists of 6113
images in 6 classes, covering the 5 diseases (bacterial blight, brown spot, rice blast, sheath blight and tungro)
and healthy leaves, collected from various online repositories and merged into a single dataset. Data augmentation
was performed on the dataset to enrich it and address the class imbalance problem, and an augmentation experiment
was conducted to study the effect of various augmentations in order to apply the best one to our model. From our
analysis and research studies, VGG16 and ResNet18 were found to be the best candidates for rice disease detection:
VGG16 shows good results and is widely used in image classification problems, while ResNet models are faster to
train. The contributions of this paper can be summarized as:
1) A study of the effect of various augmentations to achieve the best performance of the model.
2) An analysis of ResNet18 and VGG16; based on this analysis, the model with higher performance was selected for
the identification of rice diseases.
3) Implementation of the efficient model.
RELATED WORK
G. Zhou et al. [1] proposed a method for the detection of rice disease based on the fusion of FCM-KM and Faster
R-CNN, which detects three types of rice diseases, namely rice blast, bacterial blight and sheath blight, with an
accuracy of 97.2%. The precision and detection speed of Faster R-CNN are higher than those of other methods;
generating a large number of bounding boxes is the main drawback. R. Sharma et al. [2] proposed a simple
Convolutional Neural Network rice disease detection model for the Hispa rice disease. Images were collected from
Indian rice fields, and MATLAB was used for pre-processing the acquired images to remove noisy and unclean data.
The implemented simple, memory-efficient CNN model obtained an accuracy of 94% on a real-time dataset.
H. Andrianto et al. [3] developed a smartphone-based rice disease detection application that captures images via
phone and communicates with a cloud server for prediction. The system uses the VGG16 architecture and predicts
either healthy or diseased; the dataset, collected from Kaggle, contains 1600 images, and the system achieved 100%
training accuracy and 60% testing accuracy. K. Shrivastava et al. [4] explored various deep CNN models on a real
field image dataset that predicts seven classes, with 6 diseases and a healthy class. The paper performs a
comparative analysis of 10 deep learning CNN models, namely AlexNet, VGG16,
ResNet152V2, InceptionV3, InceptionResNetV2, Xception, MobileNet, DenseNet169, NASNetMobile and NASNetLarge. The
dataset consists of 1216 images with 7 classes. Transfer learning was applied to all 10 pre-trained models, and in
the comparative analysis VGG16 had the highest classification accuracy and was best for rice disease detection.
The experimental result of the paper reinforces that VGG16 is the best among the 10 pre-trained models on the
real-time dataset.
M. J. Hasan et al. [5] proposed a hybrid model for rice disease detection, a unique approach that combines an SVM
classifier with a deep CNN. The dataset contains 1080 images in 9 classes collected from rice plants and various
online resources. InceptionV3 with transfer learning is used to extract the relevant features, which are then used
to train the SVM classifier, obtaining 97.5% accuracy. A. K and A. S. Singh [6] proposed a method to detect rice
plant leaf diseases using the Mask R-CNN and Faster R-CNN algorithms. Real-time data were used to detect 5
diseases: 1500 images were collected and preprocessed using a two-dimensional multistage median filter that
removes unwanted outliers and binary noise. After the preprocessing step, candidate regions are identified using
Faster R-CNN, a two-stage method, and the rice diseases are finally identified using Mask R-CNN, a region-based
CNN built on top of Faster R-CNN.
Prasetyo et al. [7] proposed a website-based rice disease detection system based on the GoogleNet architecture and
obtained an accuracy of 94%. The dataset is from Kaggle and contains 3355 images in 4 classes: healthy, leaf
blast, Hispa and brown spot. Moreover, an experiment on the epoch parameter shows that the number of epochs is
directly proportional to the performance of the model up to a certain limit; the best number of epochs for the
proposed system was found to be 60.
Guan et al. [8] developed a general plant disease detection method by integrating four models, Inception, ResNet,
Inception-ResNet and DenseNet, and achieved an accuracy rate of 87%. The system used a dataset from AI-Challenger
containing 36258 images with 10 classes of different species, each consisting of healthy and various diseased
samples, and used a voting mechanism to predict the class. In addition, the use of stacking mechanisms and
transfer learning improves the performance of the proposed system.
Li et al. [9] proposed a method based on image processing and deep learning to identify apple leaf diseases, using
a gray-level co-occurrence matrix to extract features more precisely. The system used the PlantVillage dataset,
which contains 2172 images of healthy and diseased apple leaves, and obtained a 99% accuracy rate. A comparative
study between ResNet18 and ResNet34 showed the ResNet18 model to be better for predicting the diseases, and later
verification against VGG16 reinforced that the ResNet architecture is efficient for the classification of apple
diseases.

METHODOLOGY
A. DATASET
We acquired images from various resources such as Kaggle and other web repositories and combined them. Table I
shows the diseases in the dataset and their image counts. From Table I it is clear that there is a class imbalance
problem: each disease class holds from about 200 to over 1900 images, which is not ideal. We therefore use data
augmentation techniques to address the problem as well as to enlarge the dataset. The entire dataset is
partitioned for training and testing purposes in the ratio of 70% and 30% respectively. Then, to perform the
augmentation study and the analysis of the two models, the dataset is partitioned into 4 sets, namely set I, set
II, set III and set IV. Set I is used for the analysis of ResNet18 and VGG16, set II for color augmentation, set
III for spatial augmentation and set IV for the combination of both color and spatial augmentation.

B. AUGMENTATION STUDY
Data augmentation is useful when we have a smaller dataset: by applying color and spatial transformations, the
dataset can be expanded. This method also helps resolve the class imbalance problem. In this paper we experiment
with both color-space and spatial transformations in order to apply the best transformation for expanding the
dataset. Spatial transformations such as rotation, scaling, flipping and shear, and color-space transformations
such as brightness, contrast, saturation and hue, are performed. The experiment was conducted on 4 sets of data.
Set I is a miniature version of the rice disease dataset that is fed to VGG16 and ResNet18 for the analysis; no
augmentation was done on set I. Set II was used for color augmentation and for training the best model after the
analysis, set III for the spatial transformations, and set IV for the combination of both color and spatial
augmentation.
Disease           Number of Images
Bacterial Blight  1635
Brown Spot         651
Healthy            400
Leaf Blast        1921
Sheath Blight      201
Tungro            1308

TABLE I: Dataset Details

C. ANALYSIS OF RESNET18 AND VGG16


VGG16 is a simple and widely used Convolutional Neural Network (CNN) architecture for image classification
problems, consisting of 16 weight layers. ResNet18 is a convolutional neural network consisting of 18 layers and
is efficient for image classification problems. In order to identify the better model, both VGG16 and ResNet18
were trained on set I and the results were analysed.
• VGG16: A 16-layer architecture consisting of convolutional layers, pooling layers and fully connected layers.
There are thirteen convolutional layers with 3×3 filters of stride 1, five max-pooling layers with 2×2 filters of
stride 2, and three dense layers. Conv-1 has 64 filters, Conv-2 has 128 filters, Conv-3 has 256 filters, and
Conv-4 and Conv-5 have 512 filters each. Of the three fully connected layers, the first two have 4096 channels
each and the third has 1000 channels. Figure 1 depicts the architecture of VGG16.

Fig. 1: VGG16 Architecture

• ResNet18: The Residual Network (ResNet), introduced by Kaiming He et al., is one of the most popular and
successful deep learning models so far. It is built from residual blocks, which address the degradation problem
whereby training a very deep plain network leads to an increased error rate.

Fig. 2: ResNet18 Architecture.


D. IDENTIFICATION OF RICE DISEASE USING RESNET18
By the comparative analysis of the models, ResNet18 is more effective than VGG16 for identifying rice diseases.
The model was implemented in the PyTorch framework on the dataset, with a training set of size 5589 and a
validation set of size 1242. Table II shows the implementation details of rice disease identification using
ResNet18; a training sketch follows the table.
Parameters             Values
Epoch                  20
Batch size             64
Augmentation applied   color
Learning rate          0.001
Optimizer              SGD
Dataset training size  5589
Dataset testing size   1242

TABLE II: ResNet18 Training Details
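
A minimal PyTorch sketch matching the settings in Table II is shown below. It is illustrative rather than the authors' exact code; train_dataset is a placeholder for the prepared image dataset.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import models

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# ImageNet-pretrained ResNet18 with a new 6-class head (transfer learning).
model = models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, 6)
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)

# train_dataset: a torchvision-style dataset of (image, label) pairs (assumed).
loader = DataLoader(train_dataset, batch_size=64, shuffle=True)

model.train()
for epoch in range(20):
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
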

IMPLEMENTATION
The Google Colab IDE, which runs on Google servers and provides free GPU access and faster processing, is used
for all the experiments and analysis.
A. DATASET CONSTRUCTION
Our system constructed a dataset from three publicly available sources: Kaggle, Mendeley and GitHub. The Kaggle
dataset contains 5447 images of brown spot, healthy, Hispa and leaf blast. The Mendeley dataset consists of
bacterial blight, blast, brown spot and tungro, with 5932 images. Images of sheath blight were collected from
GitHub. Figure 3 shows the class distribution of each disease in the dataset before and after augmentation. From
the figure it is clear that a class imbalance problem occurred; to resolve this, as well as to enrich the dataset,
data augmentation is used. Our dataset is partitioned into 4 sets in order to perform the various operations: set
I is reserved for the analysis of the models, set II and set III are used for color and spatial augmentation
respectively, and set IV is used for the combination of color and spatial augmentation.

Fig. 3: Class distribution of Rice Disease Dataset


B. DATASET AUGMENTATION
Albumentations is an open-source library that performs fast, high-quality image augmentation. In order to study
the effect of augmentation on the ResNet18 model, augmentation is performed in 4 different ways: first no
augmentation (training and evaluating the model as-is), then color augmentation, spatial augmentation, and both
color and spatial augmentation together. Table III shows the augmentation results and leads to the conclusion that
color transformation improves the performance of the model; an illustrative augmentation sketch follows the table.

Parameters     No Augmentation  Color    Spatial  Color and spatial
Epoch          10               10       10       10
Dataset size   1405             2613     2408     2408
Training time  47 min           120 min  113 min  113 min
Accuracy       75.50%           76.50%   64.25%   64.50%

TABLE III: Augmentation results
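
An illustrative sketch of such color and spatial pipelines with Albumentations is given below; the specific transforms and their limits are our assumptions, not the paper's exact settings.

import albumentations as A
import cv2

# Color-space transformations: brightness, contrast, hue, saturation.
color_aug = A.Compose([
    A.RandomBrightnessContrast(brightness_limit=0.2, contrast_limit=0.2, p=0.8),
    A.HueSaturationValue(hue_shift_limit=10, sat_shift_limit=20,
                         val_shift_limit=10, p=0.8),
])

# Spatial transformations: flipping, rotation, scaling, shifting.
spatial_aug = A.Compose([
    A.HorizontalFlip(p=0.5),
    A.ShiftScaleRotate(shift_limit=0.05, scale_limit=0.1, rotate_limit=25, p=0.8),
])

image = cv2.imread('leaf.jpg')               # placeholder image path
augmented = color_aug(image=image)['image']  # returns the transformed image
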
C. ANALYSIS OF MODELS: VGG16 AND RESNET18
We trained both models on the same dataset (set I) in the PyTorch framework and evaluated their performance;
transfer learning was applied to both models. Table IV shows the performance evaluation of VGG16 and ResNet18.
Model     Epoch  Training time  Accuracy
VGG16     10     71 min         66.00%
ResNet18  10     48 min         75.50%

TABLE IV: Performance evaluation of VGG16 and ResNet18
D. DISEASE DETECTION USING RESNET18
We trained ResNet18 on a larger dataset combining sets I and IV, with 6 classes for training and validation, and
obtained a higher accuracy of 90.96%. The purpose of the saliency map is to find the regions that are prominent or
noticeable at every location in the visual field and to guide the selection of attended locations based on the
spatial distribution of saliency. Figure 4 visualizes the trained model through such saliency maps.

Fig. 4: Saliency maps for visualizing ResNet18: (a) bacterial blight, (b) brown spot, (c) healthy, (d) rice blast, (e) sheath blight, (f) tungro
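
A minimal PyTorch sketch of computing such a gradient-based saliency map is given below; this is a common formulation, not necessarily the authors' exact procedure.

import torch

def saliency_map(model, image):
    # image: tensor of shape (1, 3, H, W), normalized as during training.
    model.eval()
    image = image.clone().requires_grad_(True)
    scores = model(image)
    # Back-propagate the score of the predicted class to the input pixels.
    scores[0, scores[0].argmax()].backward()
    # Saliency: per-pixel maximum absolute gradient over the color channels.
    return image.grad.abs().max(dim=1)[0].squeeze(0)
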
Feature maps are generated by applying filters or feature detectors to the input image or to the feature map
output of the prior layers. Feature map visualization provides insight into the internal representations for a
specific input at each of the convolutional layers in the model. The proposed model hence achieved higher accuracy
in detecting rice diseases. Learning rate optimization, enriching the dataset with higher-resolution images and
batch size optimization are left as future work to further improve the accuracy of our system.

Fig. 5: Visualizing feature maps and filters of ResNet18: (a) layer 1, (b) layer 16

CONCLUSION
The implemented ResNet18 model obtained 90.86% accuracy and is efficient at identifying rice diseases. The study
of augmentation helped us apply the best augmentation to the proposed system, thus improving the efficiency of the
model. Learning rate optimization, the use of high-resolution images and batch size optimization are some of the
factors that affect the performance of the model, and we plan to extend our research work along these lines to
improve the performance of the system.
REFERENCES
[1] G. Zhou, W. Zhang, A. Chen, M. He, and X. Ma, "Rapid detection of rice disease based on fcm-km and faster
r-cnn fusion," IEEE Access, vol. 7, pp. 143190–143206, 2019.
[2] R. Sharma, V. Kukreja, and V. Kadyan, "Hispa rice disease classification using convolutional neural network,"
in 2021 3rd International Conference on Signal Processing and Communication (ICPSC), 2021, pp. 377–381.
[3] H. Andrianto, Suhardi, A. Faizal, and F. Armandika, "Smartphone application for deep learning-based rice plant
disease detection," in 2020 International Conference on Information Technology Systems and Innovation (ICITSI),
2020, pp. 387–392.
[4] V. K. Shrivastava, M. K. Pradhan, and M. P. Thakur, "Application of pre-trained deep convolutional neural
networks for rice plant disease classification," in 2021 International Conference on Artificial Intelligence and
Smart Systems (ICAIS), 2021, pp. 1023–1030.
[5] M. J. Hasan, S. Mahbub, M. S. Alom, and M. Abu Nasim, "Rice disease identification and classification by
integrating support vector machine with deep convolutional neural network," in 2019 1st International Conference
on Advances in Science, Engineering and Robotics Technology (ICASERT), 2019, pp. 1–6.
[6] A. K and A. S. Singh, "Detection of paddy crops diseases and early diagnosis using faster regional
convolutional neural networks," in 2021 International Conference on Advance Computing and Innovative Technologies
in Engineering (ICACITE), 2021, pp. 898–902.
[7] H. D. Prasetyo, H. Triatmoko, Nurdiansyah, and I. N. Isnainiyah, "The implementation of cnn on website-based
rice plant disease detection," in 2020 International Conference on Informatics, Multimedia, Cyber and Information
System (ICIMCIS), 2020, pp. 75–80.
[8] X. Guan, "A novel method of plant leaf disease detection based on deep learning and convolutional neural
network," in 2021 6th International Conference on Intelligent Computing and Signal Processing (ICSP), 2021,
pp. 816–819.
[9] X. Li and L. Rai, "Apple leaf disease identification and classification using resnet models," in 2020 IEEE 3rd
International Conference on Electronic Information and Communication Technology (ICEICT), 2020, pp. 738–742.
Text Detection and Recognition From Signboards with Hazy Background

Revathy A S, Department of CSE, College of Engineering Kidangoor, revathy.sankar1991@gmail.com
Jyothis Joseph, Department of CSE, College of Engineering Kidangoor, jyothis@ce-kgr.org
Anitha Abraham, Department of CSE, College of Engineering Kidangoor, anithaabraham@ce-kgr.org

Abstract

Signboards placed on the sides of roads provide information to drivers. They provide valuable
information about roads and buildings, and play an important role in safe and smooth driving. They are an
important aid in finding directions and in locating specific places such as islands, schools, universities,
hospitals, offices and hotels, as well as in reading traffic signs. However, continuous changes in the
environment, lighting conditions, partial occlusion, and the blurring and fading of traffic signs can create
problems for detection. This paper presents a framework for signboard detection and recognition against hazy
backgrounds; removing fog significantly increases the visibility of the scene. The project includes two main
modules. The first module is haze removal from the input image, after which the resultant image looks clearer than
the input image. Text detection and localization is the second module. The performance of the system is measured
using the character accuracy metric, and the system achieves 96.72% character accuracy.

Keywords— Dark channel prior, CNN, Text localization CNN

INTRODUCTION

Images of outdoor scenes are usually degraded by particles and water droplets in the atmosphere. Haze and
smoke reduce scene visibility, and detecting text in such images becomes difficult; removing the haze enhances
visibility and makes text detection easier. The detected texts are separated from the background using bounding
boxes, and non-maximum suppression is a process that eliminates redundant bounding boxes. Haze is removed from the
input image using a technique called the dark channel prior, which is also useful for later image editing. The
dark channel prior is based on statistics of outdoor fog-free images: in most non-sky patches, at least one color
channel has some pixels of very low intensity, close to zero; equivalently, the minimum intensity of such a patch
is close to zero. If J is an outdoor fog-free image, then the dark channel of J has low intensity and tends to
zero. The low intensity of the dark channel is mainly due to three factors: shadows, colored objects, and dark
objects or surfaces. Estimating the transmission, soft matting, estimating the atmospheric light and recovering
the scene radiance are the main steps in haze removal using the dark channel prior technique.
The text localization neural network is a 28-layer fully convolutional network which consists of two parts:
several convolutional layers and six text detection layers. The detection layers are responsible for predicting
the default bounding boxes. The performance of the system is evaluated using the character accuracy metric,
Character Accuracy (CA) = M/N, where N is the total number of characters in the input image and M is the number of
correctly predicted characters in the output haze-free image.
RELATED WORKS
Many different approaches have been proposed for signboard detection in complex backgrounds. Saluja
et al. [9] proposed robust end-to-end systems for reading licence plates and street signs, consisting of a
baseline model and an end-to-end model. The baseline model is a conventional Convolutional Neural Network (CNN)
encoder followed by a Recurrent Neural Network (RNN) decoder, while the end-to-end model targets scene text
recognition.
Pandey et al. [7] proposed traffic sign detection for an advanced driver assistance system. In this paper, contour
analysis is used to detect road signs, and a voice system assists drivers with audio messages. There are three
categories of sign detection methods: shape-based methods, color-based methods and machine learning methods.
Various image segmentation methods are used to locate areas of interest in color-based methods, and a regular
polygon detector is used for detecting traffic signs. The advantage of this method is that, once the system is
trained, signs can easily be identified using contour analysis. Changzhen et al. [1] proposed a traffic sign
detection algorithm based on a deep Convolutional Neural Network (CNN), focusing on a Chinese traffic sign
dataset. The algorithm mainly includes a Region Proposal Network (RPN) and a Convolutional Neural Network (CNN).
Datasets of seven major categories were first collected and divided into subclasses, and the system then uses the
Faster R-CNN method for accurate detection of the traffic signs, trained with VGG16 and ZF; the data also contain
text with 33 video sequences. The testing dataset contains 4706 images, and after 70,000 iterations the accuracy
of the model becomes stable.
Sivasangari et al. [11] proposed an Indian traffic sign board recognition and driver alert system using a CNN,
which uses different preprocessing and picture planning algorithms to improve the quality of camera-captured
images; each image is converted to grayscale and then to black and white for sign detection. Hossain et al. [4]
proposed road sign text detection using contrast-intensified Maximally Stable Extremal Regions (MSER), focusing on
the detection of road directions. Sallah et al. [8] proposed a road sign detection and recognition system for
real-time embedded applications, focusing on detecting the shapes of road signs such as diamond, circular,
hexagonal and square; the detection is based on hue-saturation-intensity. Nair et al. [6] proposed recognition of
speed limits from traffic signs using a Naive Bayes classifier. Speed limits are needed in particular areas to
avoid road accidents, and the system provides efficient speed sign detection by performing segmentation and
geometric detection. Shopa et al. [10] proposed a traffic sign detection and recognition system using OpenCV, in
which different image processing methods such as the Gaussian filter, Canny edge detector and contour analysis are
used for the detection and recognition of traffic signs in complex backgrounds.

2. PROPOSED SYSTEM

2.1 System Architecture

The input to the system is a hazy image. Haze removal is done using a technique called the dark channel prior, and
the resultant haze-free image is passed through convolutional and text detection layers. After the text detection
process, text bounding boxes are created, and the next stage, called non-maximum suppression, eliminates the
redundant bounding boxes. Figure 1 represents the proposed system architecture.

Figure 1: Proposed System Architecture.
2.2 Data Set

The dataset contains 100 images of signboards with hazy backgrounds. Some sample images from the dataset are shown
in Figure 2.

Figure 2: Images with hazy background.

2.3 Haze Removal from the Image

We use the dark channel prior algorithm [3] for removing the haze from the image. Transmission estimation is an
important step: the estimate for each pixel describes the portion of visible light that is degraded by the haze
before reaching the image sensor. Soft matting is the second step in haze removal [5]; it is the process of
extracting the foreground object from an image. The next step is estimating the atmospheric light from the image,
and using this atmospheric light and the transmission map we can recover the scene radiance. A sketch of the dark
channel computation is given after Figure 3, which shows the input hazy image, the corresponding dark channel
image and the recovered haze-free image.

Figure 3: (a) Input image, (b) dark channel image, (c) haze-free image.
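
A minimal OpenCV/numpy sketch of the dark channel and atmospheric light steps is given below. The 15x15 patch size is a common choice from the dark channel prior literature, assumed here rather than taken from this paper.

import cv2
import numpy as np

def dark_channel(image, patch=15):
    # Per-pixel minimum over the three color channels...
    min_channel = np.min(image, axis=2)
    # ...followed by a minimum filter over a patch (implemented as erosion).
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(min_channel, kernel)

def atmospheric_light(image, dark, fraction=0.001):
    # Average the pixels behind the brightest 0.1% of the dark channel.
    n = max(1, int(dark.size * fraction))
    idx = np.argsort(dark.ravel())[-n:]
    return image.reshape(-1, 3)[idx].mean(axis=0)

hazy = cv2.imread('hazy_signboard.jpg')      # placeholder input image
dark = dark_channel(hazy)
A_light = atmospheric_light(hazy, dark)
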

2.4 EAST Text Detection Model

The Efficient and Accurate Scene Text detector (EAST) is a model for detecting text in natural scene images with
predicted bounding boxes, trained on the ICDAR2015 dataset. The haze-free image is passed through 28 convolutional
layers [2], text bounding boxes are generated via six text detection layers, and the final detection results are
obtained through non-maximum suppression. A minimal usage sketch is shown below; some detection results are shown
in Figure 4.
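
The following sketch runs the pre-trained EAST detector with OpenCV's dnn module; the frozen model filename and the two output layer names follow the commonly distributed EAST model and are assumptions, not details from this paper.

import cv2

net = cv2.dnn.readNet('frozen_east_text_detection.pb')
image = cv2.imread('signboard.jpg')          # placeholder haze-free image
blob = cv2.dnn.blobFromImage(image, 1.0, (320, 320),
                             (123.68, 116.78, 103.94), swapRB=True, crop=False)
net.setInput(blob)
scores, geometry = net.forward(['feature_fusion/Conv_7/Sigmoid',
                                'feature_fusion/concat_3'])
# scores holds text confidences; geometry encodes the rotated text boxes.
# The decoded boxes are then filtered with non-maximum suppression, e.g.
# cv2.dnn.NMSBoxesRotated(boxes, confidences, score_thresh, nms_thresh).
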
Figure 4: Text Detection with bounding boxes.

3. IMPLEMENTATION RESULT
We implemented the proposed method with OpenCV and the PyCharm IDE. The performance of the system is measured
using character accuracy: Character Accuracy (CA) = M/N, where N is the total number of characters in the input
image and M is the number of correctly predicted characters in the result. The average character accuracy of the
system is 96.72%.

Table 1: Sample Text Detection Result.
4. CONCLUSION
Signboard detection in a deep learning framework is more powerful than other methods. For reducing road accidents,
ensuring safe driving and identifying different signboards in complex backgrounds, a good and accurate signboard
detection system is needed. This paper mainly focuses on signboard detection and recognition against hazy or foggy
backgrounds. The performance of the system is evaluated using character accuracy. The EAST text detection model is
an efficient and accurate model for predicting text on signboards, and the approach is applicable in intelligent
transport systems and driverless cars.

References
[1] X. Changzhen, W. Cong, M. Weixin, and S. Yanmei. A traffic sign detection algorithm based on deep
convolutional neural network. pages 676–679, 2016.
[2] X. Gao, S. Han, and C. Luo. A detection and verification model based on SSD and encoder-decoder network for
scene text detection. IEEE Access, 7:71299–71310, 2019.
[3] K. He, J. Sun, and X. Tang. Single image haze removal using dark channel prior. IEEE Transactions on Pattern
Analysis and Machine Intelligence, 33(12):2341–2353, 2010.
[4] M. S. Hossain, A. F. Alwan, and M. Pervin. Road sign text detection using contrast intensify maximally stable
extremal regions. pages 321–325, 2018.
[5] Z. Li and J. Zheng. Edge-preserving decomposition-based single image haze removal. IEEE Transactions on Image
Processing, 24(12):5432–5441, 2015.
[6] S. K. Nair and R. Aneesh. Recognition of speed limit from traffic signs using naive bayes classifier.
pages 1–6, 2018.
[7] P. S. K. Pandey and R. Kulkarni. Traffic sign detection for advanced driver assistance system.
pages 182–185, 2018.
[8] S. S. M. Sallah, F. A. Hussin, and M. Z. Yusoff. Road sign detection and recognition system for real-time
embedded applications. pages 213–218, 2011.
[9] R. Saluja, A. Maheshwari, G. Ramakrishnan, P. Chaudhuri, and M. Carman. OCR on-the-go: Robust end-to-end
systems for reading license plates & street signs. pages 154–159, 2019.
[10] P. Shopa, N. Sumitha, and P. Patra. Traffic sign detection and recognition using OpenCV. pages 1–6, 2014.
[11] A. Sivasangari, S. Nivetha, P. Ajitha, R. Gomathi, et al. Indian traffic sign board recognition and driver
alert system using CNN. pages 1–4, 2020.

IOT BASED HOME AUTOMATION


P. Hosanna Princye1, M. Lavanya2, M. Arivalagan3, S. Sivasubramanian4
1 Associate Professor, Electronics and Communication Engineering, SEA College of Engineering and Technology,
Bangalore, India. hprincye@gmail.com
2 Assistant Professor, Electrical and Electronics Engineering, Saveetha School of Engineering, Saveetha Institute
of Medical and Technical Sciences, Saveetha University, Chennai, India. laviraju88@gmail.com
3 Assistant Professor, Electronics and Instrumentation Engineering, Saveetha Engineering College, Chennai, India.
arivu.mit@gmail.com
4 Professor, Electrical and Electronics Engineering, Karpagam College of Engineering, Coimbatore, India.
siva.ace@gmail.com

ABSTRACT
The Internet of Things (IoT) conceptualizes the idea of remotely connecting and monitoring real-world objects
(things) through the Internet. When it comes to our houses, this concept can be incorporated to make them smarter,
safer and automated. This IoT project focuses on building a smart wireless home security system which sends alerts
to the owner over the Internet in case of any trespass, and optionally raises an alarm. The same set of sensors can
also be utilized for home automation. The advantage of this system over similar existing systems is that the
alerts and status messages sent by the WiFi-connected, microcontroller-managed system can be received on the
user's phone from any distance. The selected platform is very flexible and user-friendly. The sensing of different
variables inside the house is conducted using the NodeMCU-ESP8266 microcontroller board, which allows real-time
data sensing, processing and uploading to / downloading from the Blynk server.
Keywords: IoT, Automated, NodeMCU microcontroller, Blynk server, security system.

INTRODUCTION
Today, with the advance of automation technology, life is getting simpler and easier in all spheres. Home
automation is a modern technology that modifies your home to perform different sets of tasks automatically, and
automatic systems are now preferred over manual ones. No wonder home automation in India is already a buzzword,
especially as the wave of second-generation home owners grows: they want more than shelter, water and electricity.
The first and most obvious advantage of smart homes is comfort and convenience, as more gadgets can handle more
operations (lighting, temperature, and so on), which in turn frees up the resident to perform other tasks.
Smart homes filled with connected products are loaded with possibilities to make our lives easier, more
convenient and more comfortable, as shown in fig 1. There is no shortage of possibilities for smart home IoT
devices, as home automation appears to be the wave of the future. The requirement for office and home automation
arises with the advent of IoT in homes and office spaces in a big way. Smart home and office gadgets interact
seamlessly and securely, and control, monitor and improve accessibility from anywhere across the globe. The
concept of home automation aims to bring the control of everyday home electrical appliances to the tip of your
finger, giving the user affordable lighting solutions and better energy conservation with optimal use of energy.
Apart from lighting solutions, the concept also extends to overall control of home security as well as a
centralized home entertainment system and much more.
Fig. 1 Block diagram of IoT based Home Automation

METHODOLOGY
The IoT-based home automation system is implemented using Arduino. The system is used to monitor the water level
of a tank, gas and smoke concentrations (ppm), brightness, temperature and humidity, and to control appliances
automatically based on the values read from the sensors using the NodeMCU, as shown in fig 2.
Fig. 2 Methodology of IoT based Home Automation System

2.1 Retrieval of Sensor Data

There are two major ways of displaying sensor data in the app:
PULL: the Blynk app requests the data only when the app is open.
PUSH: the hardware constantly sends data to the Blynk Cloud, so when you open the app the data is already there
waiting. There are many ways to send data at intervals, but a simple one is to use a BlynkTimer, which we
recommend. It is included in the Blynk library package, so if you installed the library correctly you are all set.
A sketch of the PUSH approach is shown below.
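
The same idea in the community Python client (the blynklib and blynktimer packages, an assumption since the paper's firmware is Arduino-based) looks roughly like this; read_temperature is a hypothetical sensor-read helper.

import blynklib
import blynktimer

BLYNK_AUTH = 'YourAuthToken'         # placeholder token from the Blynk app
blynk = blynklib.Blynk(BLYNK_AUTH)
timer = blynktimer.Timer()

@timer.register(interval=4, run_once=False)
def push_sensor_data():
    value = read_temperature()       # hypothetical sensor-read helper
    blynk.virtual_write(2, value)    # PUSH the reading to virtual pin V2

while True:
    blynk.run()
    timer.run()
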

2.2 Communicating with NodeMCU

After uploading the code, open the Blynk app on the phone and let it connect to the internet; you will then see
your dashboard with a button. Press the Play button in the top right corner of the app. Blynk works over the
Internet, so the one and only requirement is that your hardware can talk to the Internet. No matter what type of
connection you choose, Ethernet, Wi-Fi or the ESP8266, the Blynk libraries and example sketches will get you
online. Once everything is set up and the NodeMCU is programmed, you can move on to test the setup. First power up
the board and make sure the WiFi is on. The board will automatically get connected to the WiFi network; it takes a
few seconds, and the app will then be connected to the Blynk server. Now just hit the buttons to turn devices
on/off, as mentioned in fig 3.

Fig. 3 Communication with NodeMCU

2.3 Connections for Relays

Fig. 4 Relay Connection

In fig 4 we have connected relays at D0, D1, D2 and D3. You need to make sure those pins have no conflicting
alternate functions: for example, the D0 pin is connected to the onboard LED (and D4 has the TXD1 function), so
whenever you power on the NodeMCU it blinks for a second, and a tube light connected there would flash during
startup. So please refer to the NodeMCU pinout and assign the proper pins without any conflicts.

2.4 Mobile Application

Blynk was designed for the Internet of Things. It can control hardware remotely, display sensor data, store and
visualize data, and do many other things. Every time you press a button in the Blynk app, the message travels to
the Blynk Cloud, where it finds its way to your hardware, and it works the same in the opposite direction. Blynk
works over the Internet, which means the hardware you choose should be able to connect to the internet. Some
boards, like the Arduino Uno, need an Ethernet or WiFi shield to communicate; others are already Internet-enabled,
like the ESP8266, a Raspberry Pi with a WiFi dongle, the Particle Photon or the SparkFun Blynk Board. Blynk can
control digital and analog I/O pins on your hardware directly, and virtual pins were designed to send any data
from your microcontroller to the Blynk app and back. Anything you connect to your hardware will be able to talk to
Blynk. With virtual pins you can send something from the app, process it on the microcontroller, and then send it
back to the smartphone; you can trigger functions, read I2C devices, convert values, control servo and DC motors,
and so on. Virtual pins can be used to interface with external libraries (Servo, LCD and others) and implement
custom functionality. A sketch of a virtual-pin handler is shown below.
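
For illustration, a hedged blynklib sketch of a virtual-pin write handler; set_relay is a hypothetical GPIO helper and the pin numbers are arbitrary.

import blynklib

blynk = blynklib.Blynk('YourAuthToken')   # placeholder token

@blynk.handle_event('write V0')
def on_v0_write(pin, values):
    # A button widget bound to V0 sends '1' (ON) or '0' (OFF).
    state = int(values[0])
    set_relay(0, state)                   # hypothetical relay-control helper
    blynk.virtual_write(1, state)         # echo the status back on V1

while True:
    blynk.run()
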

2.5 Controlling Devices over the Internet

An IoT device will likely contain one or more sensors which it uses to collect data; what those sensors measure
depends on the individual device and its task. Smartphones play a large role in the IoT because many IoT devices
can be controlled through a smartphone app. For example, you can use your smartphone to communicate with your
smart thermostat to deliver the perfect temperature by the time you get home from work, as shown in fig 5.

Fig. 5 Controlling Devices over the Internet

3. RESULT
Upload the code to the NodeMCU. After uploading, you should see the IP address of your web server displayed in the
serial monitor, as shown below in fig 6.
Fig. 6 Uploading Code to NodeMCU

Once the NodeMCU is connected to the WiFi network, we are all set to control the relays. The interface in the app
looks as shown, and each relay can be turned on/off using it, as shown in fig 7.

Fig. 7 Relay turning ON/OFF

Next, as shown in fig 8, we read the output of the temperature sensor in the widgets we added; the sensor gives
the humidity level and the temperature around it.

Fig. 8 Temperature and humidity output is read

Similarly, we can get the readings of all sensors connected to the NodeMCU. Fig 9 shows the LDR sensor output,
which senses the amount of brightness in the area where it is present.

Fig. 9 Sensing the brightness using the LDR sensor
Fig 10 shows the readings from the motion and gas sensors: the PIR sensor is triggered if motion is detected, and
the gas sensor outputs the ppm of gases or smoke present in the home.

Fig. 10 Motion and gas/smoke readings from the PIR and gas sensors

Fig 11 shows the readings from the ultrasonic sensor; since the sensor is capable of measuring distance, it is
used to indicate the water level in the tank.

Fig. 11 Distance measurement of water level in tank using the ultrasonic sensor
4. CONCLUSION
The home automation system has been experimentally proven to work satisfactorily by connecting sample appliances
to it, and the appliances were successfully controlled remotely from a mobile device over a wireless data
connection. We learned many skills, such as soldering, wiring the circuit and using other tools for this project,
and were able to work together as a team. The mobile client was successfully tested on a multitude of different
mobile phones from different manufacturers, thus proving its portability and wide compatibility. Thus, a low-cost
home automation system was successfully designed, implemented and tested.

5. FUTURE SCOPE
The home of the future is a space for digital natives. With the invention of many automation technologies
featuring IoT and AI, home automation has become a reality: one can carry out several tasks with a single verbal
command. These technologies can be used to build a fully functional home automation system and to control smart
home devices including smart lights, connected thermostats and appliances. You will be able to save money on house
operating costs by using IoT: with smart grid integration you can monitor where you are using the most electricity
and save energy accordingly, and you can even control lighting and heating. Not every home has made progress in
adopting IoT; many still need to make technology upgrades at the most basic levels. Whatever these new
developments entail, smart home automation is not just about entertainment; it covers other important aspects of
our daily life and comes with the potential to change our lives for the better.

WAVELET BASED DETECTION OF THE ARRIVAL OF SEISMIC P-WAVES
M. Beena Mol¹, L. Prinza², Jerald³, Johny Elton⁴ and Ignisha Rajathi⁵
¹Assistant Professor, Civil Engineering Department, LBS College of Engineering (A Government of Kerala undertaking)
Email: beena.civil@gmail.com

Abstract:
The seismic waves generated during an earthquake result in strong ground motion, which is observed as time series signals called accelerograms. Strong ground motion from earthquakes of magnitude greater than 6.0 results in destruction and loss of lives and property. An effective early warning system based on accurate detection of the primary waves (P-waves) can alert people and thereby minimize losses. However, precursory signals arriving ahead of the P-waves, referred to as noise, prevent their accurate detection. This article investigates the application of wavelet transforms to the accurate detection of the arrival of P-waves. The investigations were carried out using the accelerograms of station IBR003 of the 11 March 2011 Japan earthquake and station 4901 of the 23 October 2001 Eastern Turkey earthquake. Rule based thresholding of the wavelet coefficients effectively removed the coefficients corresponding to the precursory noise and thereby enabled exact detection of the arrival of P-waves.

Keywords: Wavelet transforms, accelerograms, earthquake, P-wave detection, processing.

1. Introduction:

An earthquake is one of the most unpredictable natural disasters, caused by energy release in the earth's lithosphere, claiming millions of lives and destroying structures [1]. The mortality and the destruction of valuable property are mostly due to the destructive secondary waves (S-waves) and surface waves travelling along the surface of the earth. The primary waves are body waves travelling deep below the earth's surface; they are the least destructive and arrive before the S-waves [2,3]. The seismic exciting force acting at the base of any man-made structure is non-linear [4] and is assumed to be the same as that experienced by the ground surface [5-8]. Accurate detection of the P-waves experienced at the ground surface is important for providing an effective early warning system in structures [3,9,10]. The presence of precursory signals, called noise, ahead of the P-waves in an accelerogram causes errors in P-wave detection [10-12]. Removal of these precursory signals can aid the exact detection of the arrival of P-waves. Figure 1 shows an example of a three-component accelerogram recorded from the October 23, 2001, Eastern Turkey earthquake of magnitude 7.1 [13]. Each accelerogram component is made up of several phases of wave packets corresponding to different types of wave motion with different paths of propagation through the earth's interior and surface [2].
However, all observed strong ground motion records originating from ground accelerometer sensors need to be subjected to processing in order to remove the precursory waves, or noise [10,14]. Because of electronic and environmental noise, the raw data captured from the seismic sensors are significantly distorted [10], which can affect the exact detection of the arrival of P-waves. This influential noise ranges from low to high frequency [10, 12, 14-16] and leads to false detection of the P-wave arrival. Therefore, the main objective of this investigation is twofold: one is to remove the precursory noisy signal, and the other is the accurate detection of the arrival of P-waves.

Figure 1: Three component accelerogram recorded from the Eastern Turkey earthquake

2. Rule based wavelet denoising

Wavelet transform [17, 18] is a transformation tool that can analyze a discrete time series by decomposing it into approximate and detailed wavelet coefficients using scaling and shifting functions [12, 19-21]. This property has made wavelet transforms a suitable tool for seismic noise correction [12], in preference to Fourier transforms [22,23]. Wavelet multi-resolution analysis has been used to detect precursory signals [11], and the reference spectrum method has been used to detect P-wave triggering [9]. Wavelet processing is achieved by filtering or thresholding the decomposed wavelet coefficients corresponding to the noisy signals. The block diagram of the general wavelet processing of accelerograms is shown in Figure 2.

Figure 2: Block diagram of wavelet processing of an accelerogram (real observed noisy accelerogram → wavelet filtering of the noisy coefficients → processed and denoised accelerogram)
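As a concrete illustration of this decomposition step, the following minimal MATLAB sketch (assuming the Wavelet Toolbox is available; the db4 mother wavelet, the placeholder signal and the single decomposition level are illustrative assumptions, not choices prescribed by the paper) splits a discrete record into approximation and detail coefficients and reconstructs it:

% One level of discrete wavelet decomposition and reconstruction.
x = randn(1, 1024);           % placeholder signal standing in for an accelerogram
[cA, cD] = dwt(x, 'db4');     % approximation (cA) and detail (cD) coefficients
xr = idwt(cA, cD, 'db4');     % inverse transform reconstructs the signal
max(abs(x - xr))              % reconstruction error is at machine precision

Thresholding operates on cA and cD (or their multi-level counterparts) before the inverse transform, which is the filtering step shown in Figure 2.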

The wavelet denoising of accelerograms involves three important steps: (i) performing the forward discrete wavelet transform (FDWT), (ii) selecting the threshold and applying the thresholding rule, and (iii) performing the inverse discrete wavelet transform (IDWT). The steps involved in the process of wavelet denoising are shown in Figure 3.

Figure 3: Steps involved in the wavelet processing of accelerograms (noisy accelerogram → Step 1: forward discrete wavelet transform → Step 2: threshold and thresholding rules → Step 3: inverse discrete wavelet transform → denoised accelerogram)

Step 1: Perform the FDWT on the noisy accelerogram $X(m)$, i.e., $\mathrm{DWT}(X(m))$, to obtain the discrete wavelet coefficient vectors $C(j,k)$ using the selected mother wavelet and level of decomposition. The signal is thereby transformed from the time domain to the time-scale wavelet domain using Eq. (1):

$$X(m) \xrightarrow{\mathrm{FDWT}} X_T(m) \qquad (1)$$

In the wavelet domain, Equation (1) becomes

$$X_T(m) = f_T(m) + W_T(m) = C(j,k) = C_1 + C_2 \qquad (2)$$

where $C_1$ and $C_2$ in Equation (2) are the wavelet coefficients corresponding to the desired and the noisy data in the accelerogram time series representation. The process of this time-scale transformation of accelerograms is shown in Figure 4.

Figure 4: Forward discrete wavelet transformation (FDWT) process (the noisy accelerogram, together with the selected mother wavelet and level of decomposition, is decomposed into approximation wavelet coefficients and detailed wavelet coefficients)


Step 2: Perform the thresholding operation on the resultant wavelet coefficients of the transformed signal $W_T(X(m))$ using the calculated threshold (TH) $\lambda$ and the thresholding rule (THR) $\xi$ to obtain the modified wavelet coefficients corresponding to the denoised signal, using Eq. (3). The thresholding process is also described in Figure 5.

$$C(j,k) \xrightarrow{\mathrm{TH,\,THR}} C'(j,k) \qquad (3)$$

Figure 5: Thresholding process of an accelerogram in the wavelet domain, i.e., $\mathrm{TH,THR}(W_T(X(m)), \lambda, \xi)$ (the global threshold and the thresholding rule map the approximation and detailed wavelet coefficients to modified approximation and detailed wavelet coefficients)

Step 3: Perform the inverse discrete wavelet transform on the modified wavelet coefficients to obtain the denoised accelerogram signal $f(m)$ using Equation (4), i.e., $f(m) = W_T^{-1}(\mathrm{TH,THR}(W_T(X(m)), \lambda, \xi))$:

$$C'(j,k) \xrightarrow{\mathrm{IDWT}} f(m) \qquad (4)$$
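These three steps can be sketched end-to-end in MATLAB (a minimal sketch, assuming the Wavelet Toolbox; the input file name, the db4 wavelet, the 5-level decomposition and the universal threshold are illustrative assumptions, not the localized rule based scheme described below):

% Three-step wavelet denoising of a noisy record X(m), as in Figure 3.
x = load('accelerogram.txt');                       % hypothetical single-column record
[C, L] = wavedec(x, 5, 'db4');                      % Step 1: FDWT -> C(j,k)
sigma = median(abs(detcoef(C, L, 1)))/0.6745;       % noise estimate from level-1 details
lam = sigma*sqrt(2*log(numel(x)));                  % universal threshold (assumed choice)
Cm = C;                                             % keep approximation coefficients
Cm(L(1)+1:end) = wthresh(C(L(1)+1:end), 's', lam);  % Step 2: soft rule on details
f = waverec(Cm, L, 'db4');                          % Step 3: IDWT -> denoised f(m)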

In rule based wavelet denoising [24], the decomposed wavelet coefficients are classified into four different regions using the globally determined threshold and a bandwidth, and four different thresholds are determined for the four regions. The soft, semi-soft, Garrotte and hard thresholding rules are applied to the regions of low, medium, high and very high range frequencies, respectively. The hard thresholding rule (HTR), soft thresholding rule (STR), semi-soft thresholding rule (SSTR) and Garrotte thresholding rule (GTR) are expressed in Equations (5), (6), (7) and (8) respectively.

$$\mathrm{HTR:}\quad C'(j,k) = \begin{cases} C(j,k), & |C(j,k)| > \lambda \\ 0, & |C(j,k)| \le \lambda \end{cases} \qquad (5)$$

$$\mathrm{STR:}\quad C'(j,k) = \begin{cases} \mathrm{sgn}(C(j,k))\,(|C(j,k)| - \lambda), & |C(j,k)| > \lambda \\ 0, & |C(j,k)| \le \lambda \end{cases} \qquad (6)$$

$$\mathrm{SSTR:}\quad C'(j,k) = \begin{cases} 0, & |C(j,k)| \le \lambda_1 \\ \mathrm{sgn}(C(j,k))\,\dfrac{\lambda_2\,(|C(j,k)| - \lambda_1)}{\lambda_2 - \lambda_1}, & \lambda_1 < |C(j,k)| \le \lambda_2 \\ C(j,k), & |C(j,k)| > \lambda_2 \end{cases} \qquad (7)$$

$$\mathrm{GTR:}\quad C'(j,k) = \begin{cases} C(j,k) - \dfrac{\lambda^2}{C(j,k)}, & |C(j,k)| > \lambda \\ 0, & |C(j,k)| \le \lambda \end{cases} \qquad (8)$$

where $\lambda$ is the threshold, $C(j,k)$ are the wavelet coefficients and $C'(j,k)$ are the modified and filtered wavelet coefficients.
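For illustration, the four rules of Equations (5)-(8) translate directly into MATLAB (a sketch operating on a vector of wavelet coefficients C; in Matlab 2008 each function would be saved in its own file):

function Cm = hard_thr(C, lam)            % Eq. (5): keep or kill
    Cm = C .* (abs(C) > lam);
end

function Cm = soft_thr(C, lam)            % Eq. (6): shrink survivors by lambda
    Cm = sign(C) .* max(abs(C) - lam, 0);
end

function Cm = semisoft_thr(C, lam1, lam2) % Eq. (7): linear ramp between lam1 and lam2
    Cm = zeros(size(C));
    mid = (abs(C) > lam1) & (abs(C) <= lam2);
    Cm(mid) = sign(C(mid)) .* lam2 .* (abs(C(mid)) - lam1) ./ (lam2 - lam1);
    keep = abs(C) > lam2;
    Cm(keep) = C(keep);
end

function Cm = garrotte_thr(C, lam)        % Eq. (8): non-negative Garrotte shrinkage
    Cm = zeros(size(C));
    keep = abs(C) > lam;
    Cm(keep) = C(keep) - lam^2 ./ C(keep);
end

All four rules zero the coefficients below their threshold; they differ in how the surviving coefficients are shrunk, which is why different rules suit the different noise-frequency regions.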
3. Database and software used

The earthquake ground motion data used in this investigation are taken from the United States Geological Survey (USGS) Center for Engineering Strong Motion Data database (www.strongmotioncenter.org). This database provides huge volumes of discrete time series ground motion and structural acceleration data for low to large magnitude earthquakes at varying source distances. The experimental investigations were conducted on a Pentium IV machine running Windows XP. The programs for the rule based wavelet denoising techniques were developed using Matlab 2008; the Matlab Signal Processing Toolbox and Wavelet Toolbox were also used for implementing the investigations.

4. Detection of the arrival of P-waves

Detecting the arrival of P-waves is vital for designing an earthquake early warning system. The P-wave arrival is a point phase and is experienced within the first few seconds of the strong ground motion observation. In this section, the application of the rule based wavelet denoising tool to the detection of the arrival of P-waves is investigated. The observed accelerograms are processed using the rule based localized threshold and thresholding scheme, considering the varying low, medium, high and very high ranges of noise frequencies. The accelerograms used for this investigation were observed at station IBR003 during the 11 March 2011 Japan earthquake and at station 4901 during the 23 October 2001 Eastern Turkey earthquake. The first 30 seconds of the N-S, E-W and U-D components of the raw and processed accelerograms observed from station IBR003 of the Japan earthquake are shown in Figures 6, 7 and 8 respectively.

From Figures 6-8 it is observed that the initial parts of the raw accelerogram components suffer from random noise, and hence there is no clear distinction by which to identify the arrival of the earliest P-waves: the random vibrations from environmental disturbances and from the earthquake are mixed together. The processed accelerogram components, however, are free from random non-stationary noise, and the P-wave arrival is clearly identifiable. From the processed accelerograms, the arrival of the P-waves is experienced in the 27th second of the observation, whereas in the real observed raw accelerogram the exact time of arrival cannot be identified. Therefore, no structural system can be provided with an effective early warning system without wavelet processing.
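Although the arrival is identified here by inspecting the processed traces, a simple automated pick can be sketched as follows (a hedged MATLAB sketch, not the authors' procedure; the sampling rate, the pre-event noise window and the detection factor are all assumptions):

% f: denoised accelerogram from the wavelet processing step.
fs = 100;                          % assumed sampling rate in Hz
noise = std(f(1:5*fs));            % noise level from an assumed 5 s pre-event window
k = 5;                             % illustrative detection factor
idx = find(abs(f) > k*noise, 1);   % first sample exceeding k times the noise level
tP = (idx - 1)/fs;                 % estimated P-wave arrival time in seconds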

Figure 6: Arrival of P-waves in the N-S component of station IBR003
Figure 7: Arrival of P-waves in the E-W component of station IBR003
Figure 8: Arrival of P-waves in the U-D component of station IBR003

It is observed that the random discrete patterns observed by the three components of the same IBR003 station are different, and the peak values of the ground acceleration also vary. However, the rule based wavelet processing of all three accelerogram components has picked the arrival of P-waves at exactly the same time. Therefore, it can be concluded that wavelet processing of accelerograms can be effectively used in picking the arrival time of P-waves, and the proposal can thereby provide a more effective earthquake early warning system for earthquakes of very high intensity.

The effect of rule based wavelet processing in detecting the arrival of P-waves in a high magnitude earthquake is also investigated using the accelerograms of station 4901 of the Eastern Turkey earthquake. The first 20 seconds of the N-S, E-W and U-D components of the raw and processed accelerograms observed from station 4901 are shown in Figures 9, 10 and 11 respectively.

Figure 9: Arrival of P-waves in the N-S component of station 4901
Figure 10: Arrival of P-waves in the E-W component of station 4901
Figure 11: Arrival of P-waves in the U-D component of station 4901

From Figures 9 to 11 it is observed that the initial part of the accelerogram suffers from random noise, and hence there is no clear distinction by which to identify the arrival of the earliest P-waves. On casual inspection of Figure 9 the signal may appear harmless, but a little introspection reveals the enormity of the problem; the duration between 12 and 14 seconds of Figure 9 is zoomed in to expose it. The arrival of the P-waves is experienced in the 14th second of the observation, yet in the raw accelerogram the exact time of arrival of P-waves cannot be identified. The rule based processed accelerogram components, however, clearly distinguish the random vibration of environmental disturbances from the vibration of the seismic event.
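The zoomed inspection itself is straightforward to reproduce (a hedged sketch continuing the previous one; f and fs are the denoised record and the assumed sampling rate):

i12 = round(12*fs) + 1;            % sample index at 12 s
i14 = round(14*fs) + 1;            % sample index at 14 s
plot((i12:i14)/fs, f(i12:i14));    % zoomed view of the 12-14 s window
xlabel('Time (s)'); ylabel('Acceleration');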

5. Conclusion

From the above experimental analysis, it can be observed that the proposed rule based wavelet processing of accelerograms improves the quality of the accelerograms. The processed accelerograms can predict the arrival of P-waves with greater accuracy irrespective of low, medium, high and very high levels of noise variance. The accurate prediction of the P-wave arrival can be a milestone in providing an earthquake early warning system. Processing of accelerograms reduces the uncertainties present in the strong ground motion (SGM) observations for the analysis and design of structures. The proposed localized rule based wavelet denoising technique has very effectively determined the point phase of the P-wave arrival, which is the most important detection criterion in providing an effective earthquake early warning system. With an early warning system, people get a few seconds to minutes to protect their lives.

References:

[1] http://www.Oxbridgewriters.com/essays/engineering/use-of-earthquake-accelerograms.
[2] http://www.seismicwarning.com/technology/waveseparation.php
[3] http://earthquake.usgs.gov/research/earlywarning/
[4] S. Faroughi and J. Lee, Analysis of tensegrity structures subject to dynamic loading using a Newmark approach, Journal of Building Engineering, vol. 2, (2015), pp. 1-8.
[5] A. S. Pisharady, A. D. Roshan and V. V. Muthekar, Seismic safety of nuclear power plants, Civil and Structural Engineering Division, Atomic Energy Regulatory Board, India.
[6] E. F. Cruz and S. Cominetti, Three dimensional buildings subjected to bi-directional earthquakes, validity of analysis considering uni-directional earthquakes, 12th World Conference on Earthquake Engineering, Auckland, New Zealand, (2000).
[7] M. Girish and M. Pranesh, Sliding Isolation Systems: State-of-the-Art Review, IOSR Journal of Mechanical and Civil Engineering, Second International Conference on Emerging Trends in Engineering and Technology, (2009), pp. 30-35.
[8] B. H. Stana, Linear and non-linear analysis of a high rise building excited by earthquake, Master Thesis, Norwegian University of Science and Technology, (2014).
[9] M. Miyazawa, Detection of seismic events triggered by P-waves from the 2011 Tohoku-Oki earthquake, Earth Planets Space, vol. 64, (2012), pp. 1223-1229.
[10] A. G. Hafez, M. Rabie and T. Kohda, Seismic noise study for accurate P-wave arrival detection via MODWT, Computers and Geosciences, vol. 54, no. 1, (2013), pp. 148-159.
[11] A. G. Hafez, M. Rabie and T. Kohda, Detection of precursory signals in front of impulsive P-waves, Digital Signal Processing, vol. 23, (2013), pp. 1032-1039.
[12] A. Ansari, A. Noorzad, H. Zafarani and H. Vahidifard, Correction of highly noisy strong motion records using a modified wavelet denoising method, Soil Dynamics and Earthquake Engineering, vol. 30, no. 4, (2010), pp. 1168-1181.
[13] www.strongmotioncentre.org
[14] D. M. Boore and J. J. Bommer, Processing of strong motion accelerograms: needs, options and consequences, Soil Dynamics and Earthquake Engineering, vol. 25, no. 1, (2005), pp. 93-115.
[15] F. Botella, J. Rosa-Herranz, J. J. Giner, S. Molina and J. J. Galiana-Merino, A real-time earthquake detector with pre-filtering by wavelets, Computers and Geosciences, vol. 29, no. 2, (2003), pp. 911-919.
[16] A. M. Converse and A. G. Brady, BAP: Basic strong motion processing software; version 1.0, United States Department of the Interior Geological Survey, Open File Report, (1992), pp. 92-296.
[17] A. Bagheri and S. Kourehli, Damage detection of structures under earthquake excitation using discrete wavelet analysis, Asian Journal of Civil Engineering (BHRC), vol. 14, no. 2, (2013), pp. 289-304.
[18] M. Kumar and S. Pandit, Wavelet transform and wavelet based numerical methods: an introduction, International Journal of Nonlinear Science, vol. 13, no. 3, (2012), pp. 325-345.
[19] A. Bron, Wavelet based denoising of speech, Ph.D. thesis, Israel Institute of Technology, Haifa, (2000).
[20] F. J. Dessing, A wavelet transform approach to seismic processing, Ph.D. thesis, Delft University of Technology, Netherlands, (1997).
[21] A. Heidari and E. Salajegheh, Wavelet analysis for processing of earthquake records, Asian Journal of Civil Engineering (Building and Housing), vol. 9, no. 5, (2008), pp. 513-524.
[22] Z. Chik, T. Islam, S. A. Rosyidi, H. Sanusi, M. R. Taha and M. M. Mustafa, Comparing the performance of Fourier decomposition and wavelet decomposition for seismic signal analysis, European Journal of Scientific Research, vol. 32, no. 3, (2009), pp. 314-328.
[23] M. Beena Mol, S. Prabavathy and J. Mohanalin, Wavelet based Seismic Signal Denoising using Shannon and Tsallis Entropy, Computers and Mathematics with Applications, vol. 64, no. 11, (2012), pp. 3580-3593.
[24] M. Beena Mol, S. Prabavathy and J. Mohanalin, A Robust Rule based Denoising Scheme using Wavelets, Journal of Medical Imaging and Health Informatics, vol. 4, no. 4, (2014), pp. 1-12.
