
NATIONAL CONFERENCE ON RECENT
TECHNOLOGICAL ADVANCES IN BASIC
AND APPLIED SCIENCES
NCRTABAS-2020

25.02.2020

ORGANIZED BY

DEPARTMENT OF ELECTRONICS AND COMMUNICATION SCIENCE

ALPHA ARTS AND SCIENCE COLLEGE

PORUR, CHENNAI – 116

Chief Editor
Dr. B. Benita Merlin
Assistant Professor and Head
Dept. of ECS,
Convener,
NCRTABAS-2020

Sub Editors
Ms. B. Rajani, Assistant Professor, Dept. of ECS.
Ms. A. Selvarasi, Assistant Professor, Dept. of ECS.
Ms. R. Kalaimathi, Assistant Professor, Dept. of ECS.

Year of Publication – 2020

Printed by – Today Printers, Chennai-5

Publishers – Today Publications, Chennai-62

ISBN – 978-81-945377-0-0

All rights are reserved. No part of this publication may be reproduced in any form or by any means, mechanical, photocopying or otherwise, without prior written permission of the author.
NATIONAL CONFERENCE ON RECENT TECHNOLOGICAL
ADVANCES IN BASIC AND APPLIED SCIENCES-
NCRTABAS-2020
25.02.2020

CONFERENCE PROCEEDINGS

Patron
Dr. (Mrs.) Grace George
Chairperson
Alpha Group of Institutions

Vice Chairperson
Mrs. Suja George
Alpha Group of Institutions

Principal
Dr. D. Ashalatha
Alpha Arts and Science College

Convener
Dr. B. Benita Merlin
Assistant Professor and Head
Department of ECS, Alpha Arts and Science College

Coordinators
Ms. B. Rajani,
Ms. A. Selvarasi
Ms. R. Kalaimathi
Assistant Professors
Department of ECS, Alpha Arts and Science College

Organized by
Department of Electronics and Communication Science
Alpha Arts and Science College
Porur, Chennai – 116
Dr. (Mrs.) Grace George
Founder & Chairperson
Alpha Group of Institutions

MESSAGE

Alpha Arts and Science College, founded in the year 1996, aims at transforming itself into a centre of excellence in the field of education by creating a strong cadre of professionals and good citizens of India. Over these 23 years, it has striven hard to fulfil its motto: SEEK, SHARE, and SERVE. The college was accredited by NAAC in the years 2012 and 2017.

The Department of Electronics and Communication Science came into existence in the same year. It makes every effort to uplift the quality of education and instils in students the skills necessary to excel amid the ever-changing trends of the current scenario.

The National Conference is yet another milestone in the history of the Department, and it will serve as a solid platform for young scholars to share innovations in their fields of interest and to obtain valuable inputs from erudite academicians and industry experts.

I would like to congratulate the Department of Electronics and Communication Science for taking the initiative of organizing a National Conference on ‘Recent Technological Advances in Basic and Applied Sciences (NCRTABAS-2020)’. The conference covers a wide range of topics so as to provide an opportunity for all aspiring scholars to present their work in a research forum.

I wish NCRTABAS-2020 a grand success, and I wish the Department continued growth from strength to strength in the years to come.
Mrs. Suja George
Vice Chairperson
Alpha Group of Institutions

MESSAGE

Alpha Arts and Science College provides quality education to foster young minds and to disseminate knowledge to society at large. Over the years, the college has grown from strength to strength, exhibiting intensification in all sectors and bridging the gap between industry and institution to match technological developments. It strives hard to build a knowledge-based society by upholding ethical and professional standards. In this era of rapidly changing technologies, new inventions manifest the involvement of budding scholars in research activities.

It is indeed an immense pleasure to know that the Department of Electronics and Communication Science is organizing a National Conference on Recent Technological Advances in Basic and Applied Sciences, NCRTABAS-2020. This conference provides a platform for scholars in the basic sciences to bring forth research findings from their respective fields. It will pave the way for scholars to exchange their ideas, develop new associations within the research community, discover new avenues in their areas of research, and eventually enrich and enhance their knowledge. We sincerely believe that the conference will benefit participants from different research forums.

I wish NCRTABAS-2020 a grand success, and I wish the Department even greater heights of academic excellence and outstanding professionalism in the years to come.
Dr. D. Ashalatha
Principal
Alpha Arts and Science College

MESSAGE

In a digital world ruled by change every minute, this is an era of technological advancement. The present education system has to concentrate on the dissemination of knowledge, the sharpening of basic skill sets, and the development of a competent attitude. Alpha Arts and Science College caters to the academic pursuits of its students and has been playing a great role in fostering holistic student progression.

The Department of Electronics and Communication Science has shown great foresight in planning a National Conference on “Recent Technological Advances in Basic and Applied Sciences”, a vital area for substantial discussion. The 21st century has witnessed the growth of artificial intelligence, e-management and e-commerce, nuclear power, space technology, digital media, genetic engineering, etc. It has become the order of the day that developing countries have to invest in quality education for youth and in continuous skills training. All of this has to lead to the improvement of the human condition and a positive transformation of the economy. The Conference aims at providing students a congenial platform to deliberate on the impact of technological advances on the global community.

I take this opportunity to congratulate the Faculty, Staff, and Students of the Department of Electronics and Communication Science for hosting the Conference on an apt topic, one that can further inspire students to create and innovate towards achieving the UN Sustainable Development Goals.
Dr. B. Benita Merlin
Assistant Professor and Head
Dept. of ECS
Convener – NCRTABAS-2020
Alpha Arts and Science College

MESSAGE

In this era of rapidly changing technologies, it is mandatory for every professional to keep abreast of the latest developments and emerging techniques in their respective field of research. The challenges faced in all fields require appropriate solutions, which emerge through the intensification of research.

The main focus of the conference is to bring together researchers who show keen interest in theoretical foundations with a practical approach in basic and applied sciences. The National Conference on Recent Technological Advances in Basic and Applied Sciences (NCRTABAS-2020) provides an ideal platform for them to share their knowledge and experiences. It mainly creates new avenues for knowledge exchange and networking among researchers to yield productive results.

The conference proceedings comprise abstracts of the plenary lectures, the detailed programme schedule, and papers contributed by participants, along with advertisements from sponsors. They cover a wide spectrum of fields, with articles in physics, electronics, and computer science.

First and foremost, I express my heartfelt thanks to the management for giving us the opportunity to organize NCRTABAS-2020. I gratefully acknowledge the contributions made by the plenary speakers. We also thank all the authors for presenting their research work. I am also indebted to the organizing committee for their tireless efforts towards the successful conduct of the National Conference. The enthusiastic participation of the delegates is highly appreciated.

CONTENT

INVITED LECTURES

S.NO   TITLE AND AUTHOR NAME   PAGE NO.
IL-1 FABRICATION OF DYE SENSITIZED TIO2 NANOCRYSTALLINE SOLAR 2
CELLS (DSSC) USING ORGANIC AND RU COMPLEX DYES AS
SENSITIZERS, IODIDE/TRIIODIDE (I-/I3-) ELECTROLYTE AND
TRANSITION METAL CARBIDE (TMC) BASED COUNTER ELECTRODE
Prof. Ramasamy
Dean, SSN Institutions, Chennai

IL-2 TUNING OF MACROSCOPIC PROPERTIES FROM MICROSCOPIC 3


PHENOMENA FOR NOVEL MATERIALS APPLICATIONS
Dr. Rita John
Professor and Head. Department of Theoretical Physics,
University of Madras, Chennai

IL-3 RECENT TRENDS IN SCIENTIFIC DATA PROCESSING 4


Dr D. Nedumaran
Professor and Head, Department of Central Instrumentation,
University of Madras, Chennai

IL-4 MICRO & NANO TECHNOLOGY – AN INTRODUCTION 5


MEMS & COMMERCIAL MICRO SENSORS IN NOVEL APPLICATIONS.
Dr.J.Jayapandian
Head & Scientific Officer – H (Rtd).
Electronics and Instrumentation Section, Surface & Nano Science Division.
Materials Science Group, Indira Gandhi Centre for Atomic Research.
Kalpakkam, India

IL- 5 AUTOMOTIVE RADARS 7


Commander Pannir Selvam Elamvazhuthi (Rtd.),
Chief Engineer & Chief Architect, Bengaluru, India

IL-6 METABOLOMICS IN CANCER DIAGNOSIS USING AUTOFLUORESCENCE 8


AND RAMAN
Dr. Aruna Prakasarao
Department of Medical Physics,
Anna University, Chennai, India
CONTRIBUTED PAPERS

S.NO   TITLE AND AUTHOR NAME   PAGE NO.
PHYSICS
P-1 CHARACTERIZATIONS OF UNIDIRECTIONAL GROWN BIS-THIOUREA 10
CADMIUM CHLORIDE SINGLE CRYSTAL FOR ELECTRO OPTIC
APPLICATIONS
D. Jayanthi Margret, S.Vinoth Kumar, R. Uthrakumar B

P-2 ACOUSTICAL AND SPECTRO STUDIES ON NANO TIO2 INTRODUCED 12


INTO PEG200 POLYMER MATRIX
S.Sarala. S.K.Geetha

P-3 SYNTHESIS, GROWTH AND CHARACTERIZATION OF 2-AMINO-5- 16


NITROPYRIDINE 4-CHLOROBENZOIC ACID (1:1): A NEW ORGANIC
SINGLE CRYSTAL FOR THIRD-ORDER NONLINEAR OPTICAL (NLO)
APPLICATIONS
V. Sivasubramani, Muthu Senthil Pandian, P. Ramasamy

P-4 STABILIZATION OF NANOFLUIDS WITH MECHANICAL DC 17


VIBRATOR
U. Rajeshwer, Rita John

P-5 SYNTHESIS OF CARBON DOTS SUPPORTED SILVER 17


NANOPARTICLES FOR THE SELECTIVE AND SENSITIVE
DETECTION OF GLUTATHIONE
M. Sandhiya Malathi, M. Chandhru, S. Kutti Rani, N. Vasimalai

P-6 STRUCTURAL AND ELECTRICAL PROPERTIES OF (1-x)Bi1/2Na1/2TiO3–xBaTiO3 18
SINGLE CRYSTALS ACROSS THE MORPHOTROPIC PHASE BOUNDARY BY TSSG METHOD
M. William Carry, Muthu Senthil Pandian, P. Ramasamy

P-7 LUMINESCENCE PROPERTIES AND ENERGY TRANSFERPROCESS 19


OF CE3+ANDMN2+CO-ACTIVATED SRB2SI2O8RED EMITTING
PHOSPHOR
V. Pushpamanjari, B. Sailaja, R.V.S.S.N. Ravikumar

P-8 FT-IR STUDIES OF ND3+ DOPED ALKALI ZINC BORATE GLASS 20


B. Sailaja , G. Tirumala Rao , R.V.S.S.N. Ravikumar

P-9 EFFICIENT SEGMENTATION & CLASSIFICATION METHODS FOR 21


MRI BRAIN IMAGES
E. Synthiya Judith Gnanaselvi, M. Mohamed Sathik
P-10 INVESTIGATIONS ON THE GROWTH AND CHARACTERIZATION OF 24
A CRYSTALLISEDSUPERACID, AMMONIUM TETROXALATE
DIHYDRATE
Eunice Jerusha, S. Shahilkirupavathy

P-11 INVESTIGATION OF TYPE-I AND TYPE-II PHASE MATCHING 26


ELEMENTS USING ORGANIC 2AP4N SINGLE CRYSTALS GROWN
BY POINT SEED ROTATION AND NOVEL RSR TECHNIQUE
P. Karuppasamy, T. Kamalesh, Muthu Senthil Pandian, P. Ramasamy,
Sunil Verma

P-12 PERFORMANCEOFADENINE DOPED POLYVINYLIDENEFLUORIDE 27


BASED SOLID POLYMER ELECTROLYTES FOR DYE-SENSITIZED
SOLAR CELL APPLICATIONS
S. Kannadhasan, Muthu Senthil Pandian, P. Ramasamy

P-13 STUDIES ON LINEARTY AND ASSAY USING UV-VISIBLE 28


SPECTROSCOPY FOR THE DRUGS PANTOPRAZOLE AND
PARACETAMOL BEFORE AND AFTER EXPIRY PERIOD
Manimegalai E, Bright A
P-14 SPRAY COATED NIOX THIN FILM AS HOLE TRANSPORT 29
MATERIAL FOR PEROVSKITE SOLAR CELL APPLICATIONS
R. Isaac Daniel, N. Santhosh, Muthu Senthil Pandian, P. Ramasamy

P-15 INFLUENCE OF TIO2 NANOFILLER ON THE THERMALAND 29


STRUCTURALPROPERTIES OF PEO BASED POLYMER
ELECTROLYTE
A.Christina Nancy, Agnes Robeena Quincy

P-16 HYDROTHERMAL SYNTHESIS AND CHARACTERIZATION OF HIGH 30


CRYSTALLINE ZNAL2O4 NANOPOWDER
G. Thirumala Rao, B. Sailaja, R.V.S.S.N. Ravikumar

P-17 GROWTH OF BULK SIZE ORGANIC TRIPHENYLAMINE (TPA) 31


SINGLE CRYSTAL BY BRIDGMAN – STOCKBARGER METHOD: A
POTENTIAL CANDIDATE FOR NONLINEAR OPTICAL (NLO)
APPLICATION
K. Ramachandran, A. Raja, Muthu Senthil Pandian, P. Ramasamy

P-18 SYNTHESIS AND CHARACTERIZATION OF DISC-SHAPED 31


THIOPHENE BASED ZN-PORPHYRIN FOR ORGANIC SOLAR CELLS
APPLICATION
M. Muthu, P. Pounraj, Muthu Senthil Pandian, P. Ramasamy
P-19 3 D STRUCTURE DETERMINATION, HIRSHFELD SURFACE 32
ANALYSIS, ENERGY FRAMEWORK AND CHARACTERIZATION
STUDIES OF (2E,4E)-1-(4-CHLOROPHENYL)-5-(4-
METHOXYPHENYL) PENTA-2,4-DIEN-1-ONE
K. Biruntha , G.Usha

P-20 CARBON BASED HOLE-COUNDUCTOR-FREE PEROVSKITE SOLAR 33


CELLS BASED ON 2D-3D ORGANIC INORGANIC HALIDE
ABSORBER
K. R. Acchutharaman, N. Santhosh, Muthu Senthil Pandian,
P. Ramasamy

P-21 COMPUTATIONAL MODELING ON MC-SILICON CRYSTAL 33


GROWTH PROCESS
M. Srinivasan, P. Ramasamy

P-22 ZN+CU CO-DOPED TIO2 SEMICONDUCTOR NANOSTRUCTURE-AN 34


EFFECTIVE CATALYST FOR METHYLENE BLUE DYE
B.Manikandan, K. R. Murali, Rita John

P-23 GROWTH AND CHARACTERIZATION OF 4-AMINOPYRIDINIUM 4- 35


NITROPHENOLATE 4-NITROPHENOL (4AP4NP) SINGLE CRYSTAL
FOR NONLINEAR OPTICAL (NLO) APPLICATIONS
T. Kamalesh, P. Karuppasamy, Muthu Senthil Pandian,
P. Ramasamy, Sunil Verma

P-24 FABRICATION OF HOLE-TRANSPORT-FREE PEROVSKITE SOLAR 36


CELLS (PSC) USING 5-AMMONIUM VALERIC ACID IODIDE (5-
AVAI) AS ADDITIVE AND CARBON AS COUNTER ELECTRODE
N. Santhosh, Muthu Senthil Pandian, P. Ramasamy

P-25 FIRST PRINCIPLES CALCULATIONS ON GOLD (AU) AND 37


PLATINUM (PT) TO INVESTIGATE TOPOLOGICAL SEMIMETAL
PHASES
Rita John, D. Vishali

P-26 IMPACT OF SPIN ORBIT COUPLING ON THE ELECTRONIC 38


PROPERTIES OF BINARY COMPOUNDS HGTE AND CDTE USING
GGA AND TB-MBJ
R. Anubama, Rita John

P-27 TITANIUM BASED INTERMETALLIC COMPOUNDS 38


Hannah Ruben, Ancy

P-28 EVOLUTION OF STARS – A REVIEW 41


S.Nivedha , T.S.Renuga Devi
P-29 COMPARITIVE STUDY BETWEEN HALL EFFECT SENSOR AND 43
TUNNEL MAGNETO RESISTIVE SENSOR (TMR MAGNETOMETER)
G. Yuvasri

P-30 TIME TRAVEL AND ITS POSSIBILITIES 45


Divyanshi Dubey

P-31 ANTIMATTER – THE FUTURE GAME CHANGER 48


Shreya Hembrom

P-32 BIO- DEGRADABLE MATERIALS FOR GREEN AND SAFETY FOOD 50


PACKAGING
B. Rajani, U. Manoj Kumar

P-33 INTERACTING ANTI-SYMMETRIC TENSOR FIELD THEORIES 52


K. Ekambram, A.S.Vytheeswaran

P-34 THE ELECTRONIC AND OPTICAL PROPERTIES OF AB STACKED 54


BILAYER SILICENE – A FIRST PRINCIPLES APPROACH
Benita Merlin, Rita John, Sarath Santhosh

P-35 POWER DOMINATOR CHROMATIC NUMBER FOR SOME TREE 55


GRAPHS
A. Uma Maheswari, Bala Samuvel J

P-36 TECHNOLOGY IN ACCELERATOR PHYSICS 58


Abiya Jose

P-37 FUTURE OF ENVIRONMENTAL MONITORING: ROLE OF FERRITE 60


NANOPARTICLES AS GAS SENSOR
S. Jayanthi, V. Saravanasundar, G. Rajarajan

P-38 RECENT TRENDS IN LASER TECHNOLOGY 61


G.Vinitha

ELECTRONICS
E-1 STUDY OF INTELLIGENT TESTBENCH FOR WORKLOAD 64
CHARACTERIZATION FOR VARIOUS MULTICORE EMBEDDED
PROCESSORS
A. Gopinath

E-2 INTELLIGENT CONTROL OF ELECTRICAL SYSTEM FOR ENERGY 65


CONSERVATION USING PIR SENSOR
R.Vajubunnisa Begum, H. Jasmin , H B Ayesha
E-3 WIRELESS CHARGING CIRCUIT USING INDUCTIVE COUPLING 68
METHOD
Daniel Dias, Ramanunni.O.R, R.Raj Mohan

E-4 ENERGY EFFICIENCY OPTIMIZATION FOR RF POWERED 69


WIRELESS SENSOR NETWORKS
S. Thiyagarajan, P. Gowthaman.

E-5 NEURAL NETWORK BASED IMAGE RESTORATION TECHNIQUE 73


D. Beula, T.V.Gayathri

E-6 ZIGBEE BASED DATA SECURED AND OPTIMAL CONDITION OF 79


“WIRELESS COMMUNICATION” USING ADVANCED E-D
STANDARDS.
T. Shantha Kumar

E-7 A REVIEW STUDY OF SMART HEALTH CARE SYSTEM IN IOT 79


R.Vajubunnisa Begum, K. Dharmarajan, H. Jasmin

E-8 REAL TIME FABRIC FLAW DETECTOR USING MICROCONTROLLER 81


A. Selvarasi, M. Jeevitha

E-9 NANOMETROLOGY FOR NANOPARTICLES 83


T. Angeline

E-10 A STUDY ON AWARENESS OF MATERIALS TO SAFE 83


ENVIRONMENT WITH REFERENCE TO HOUSEHOLD E WASTE
R. Selvi

E-11 E-WASTE MANAGEMENT 85


N. Muthulakshmi, N. Chandrakala, K. Divya

E-12 MOBILES AND OPTICAL COMMUNICATION 88


G. Keshav

E-13 IOT AND WIRELESS NETWORKS 89


S. Shantha , V. Savithri

E-14 ARTIFICIAL INTELLIGENCE 90


R.Vajubunnisa Begum, S. Divya, S. Tamilarasi

E-15 ARTIFICIAL INTELLIGENCE – FUTURE OF EVERYTHING 91


M.Karthick, D.S. Hemanth, J.Simon Ebinezer, J.Syedabuthahir
E-16 MEMS AND NANO TECHNOLOGY 91
V. Savithri, S. Shantha

E-17 SOLAR CELLS AND FUEL CELLS 92


Pravin S

E-18 INTERNET OF THINGS AND WIRELESS NETWORKS 93


S. Vasanth, P.V. Prasanth

COMPUTER SCIENCE
C-1 SMART GREENHOUSE SOLUTION BASED ON IOT AND 96
CLOUD COMPUTINGTECHNOLOGIES
H.J. Felcia Bel

C-2 A STUDY TO EXPOSURE THE COVARIANCE PLAUSIBILITY OF 97


DISEASE USING DATA MINING
Aneeshkumar A.S.

C-3 INGENIOUS LIGHTING SYSTEM (ILS) FOR SMART CITIES USING 97


IOT
R. Praveenkumar, M.Kamarajan, M.P.Prabakaran

C-4 IMPLEMENTATION OF DENSITY BASED TRAFFIC LIGHT 98


CONTROLLER USING ARDUINO
Shrinitha S, Devipriya.E, Suvalakshmi.V, R.Thiruvengadam

C-5 DESIGN AND IMPLEMENTATION OF GSM BASED BANK VAULT 99


SECURITY SYSTEM USING ARDUINO
K.Hemapriya, Richa Suman Sharma, M.Selva Kumar

C-6 DESIGN AND IMPLEMENTATION OF TV REMOTE CONTROLLED 100


ROBOTIC VEHICLE
G. Tony Santhosh, S. Akshhaya, M. Ishwarya

C-7 DESIGN AND IMPLEMENTATION OF FIRE EXTINGUISHING ROBOT 100


Andrews Juben Ratchanyaraj, N. Dharshinipriya, D. Manju, S.
Saseedharan

C-8 MINE WORKERS SAFETY SYSTEM USING ZIGBEE 101


E. Niranjana, C. Kiran Kumari

C-9 REMOTE SENSING APPLICATIONS USING IMAGE PROCESSING 101


S. Lakshmi
C-10 PERFORMANCE AND ANALYSIS OF VIDEO STREAMING OF 104
SIGNALS IN WIRELESS NETWORK TRANSMISSION
N.Praveen, R.Shyam Sundar, J.Syedabuthahir

C-11 NEURAL NETWORK TECHNIQUE IN COMBINATION WITH 105


TRAINING ALGORITHM AND WAVELETS
K. Indhu, R. Subashini

C-12 OPTIMIZED BINARIZATION TECHNIQUE FOR DENOISING 105


DOCUMENT IMAGES
N. Habibunnisha , D. Nedumaran

C-13 A LIGHTWEIGHT CRYPTOGRAPHIC ALGORITHM USING SHA - 3 106


Heerah D

C-14 REAL-TIME BIG DATA ADOPTATION AND ANALYTICS OF CLOUD 107


COMPUTING PLATFORMS.
M. Anoop

C-15 PROVIDING OPTIMAL PERFORMANCE AND SECURITY 107


GUARANTEES USING DROPS FOR THE CLOUD
S.Nanthini, C.Surya, A.Sumathi, M.Gomathi

C-16 IMPACT OF DATA MINING TECHNIQUES IN HEALTHCARE 111


RESEARCH
R. Bagavathi Lakshmi, S.Parthasarathy

C-17 DENSITY-BASED CLUSTERING: AN OVERVIEW 115


Vinolyn Vijaykumar, R.Kiruthika

C-18 PRIVACY-PRESERVING OF FILES AND SECURING THE DATA SETS 117


USING SYSTEM BASED CLOUD STORAGE
D. Suganthi

C-19 HEALTHCARE DATA PREDICTION SYSTEM USING 118


COLLABORATIVE FILTERING - MACHINE LEARNING TECHNIQUE
Heerah D

C-20 DATA COLLECTION IN WIRELESS SENSOR NETWORKS FOR IOT 119


USING PREDICTION
C.John Paul, Aparyay Kumar

C-21 MONITORING OF CLUSTER FOR HADOOP OPTIMIZED MACHINE 119


TRANSLATION
V. Prema
C-22 AMBIENT INTELLIGENCE (AMI) AND ERGONOMICS – A STUDY 124
Maharasan.K.S, Harinesenthil

C-23 ANALYSIS OF BIG DATA CHALLENGES AND TRENDS IN RECENT 125


ENVIRONMENT
A. Saranya

C-24 QUANTUM COMPUTING 126


R.Vajubunnisa Begum, H. Jasmin , M. Keerthana

C-25 IMAGE PROCESSING 127


A. Jemimah

C-26 ANALYSIS OF SCHEDULING ALGORITHMS FOR WIMAX 128


P. Sudha, A.Rengarajan

C-27 CLOUD COMPUTING 130


P. Sruthivennela

C-28 SOFT COMPUTING & INTELLIGENT SYSTEMS 131


S. Divya, G. Priyadharshini

C-29 SECURE INTERNET SERVICES IN ONLINE BANKING 132


M.Vimal Raj, C.Prabakaran, P.Akash, Praveen.N

C-30 SURVEY ON THREATS ATTACKS AND IMPLEMENTATION OF 132


SECURITY IN CLOUD INFRASTRUCTURE
D. Kowsalya, M. SubaSree

C-31 CLOUD COMPUTING 133


P Karthick, P Jeeva

C-32 AN OVERVIEW OF IoT APPLICATIONS 135


S.Vijayalakshmi, R. Rama, S. Karthika

C-33 IMPACT OF MICROCANTILEVERS BASED MICROSENSOR FOR THE 137


DETECTION OF TOXIC GAS MOLECULES IN THE ATMOSPHERE
J. Jayachandiran, N. Mahalakshmi and D. Nedumaran*
INVITED LECTURES
IL-1
FABRICATION OF DYE SENSITIZED TIO2 NANOCRYSTALLINE SOLAR CELLS
(DSSC) USING ORGANIC AND RU COMPLEX DYES AS SENSITIZERS,
IODIDE/TRIIODIDE (I-/I3-) ELECTROLYTE AND TRANSITION METAL
CARBIDE (TMC) BASED COUNTER ELECTRODE

Muthu Senthil Pandian, P. Ramasamy*


SSN Research Centre, SSN Institutions, Chennai-603110, Tamilnadu, India
Email: ramasamyp@ssn.edu.in

The dye sensitized solar cell (DSSC) has been investigated as one of the potential candidates for next-generation solar cells due to its advantages of low cost and high efficiency. Our work deals with the fabrication of solid-state DSSCs through modifications of the photoanode, dyes, electrolytes and counter electrodes. Pure and nitrogen-doped TiO2 nanostructures were synthesized by hydrothermal and sol-gel methods. The physical properties and crystallinity were characterized by PXRD, FESEM, TEM, pore volume and BET surface area analysis. The BET surface area and pore volume of the synthesized material are 84.83 m2/g and 0.1316 cc/g respectively. An efficient phenyl-conjugated organic oligoene dye was synthesized. The obtained dye was characterized using cyclic voltammetry, FTIR, UV-Vis absorption, 13C NMR and mass spectrometry. The commercially available Rhodamine-B and N3-ruthenium complex dyes were also used for cell fabrication. The solid-organic (T2/T-) and solid-polymer (I3-/I-) electrolytes were synthesized in a glove box under argon atmosphere and introduced in place of the liquid electrolyte to avoid leakage and obtain higher stability. Transition metal carbides embedded in Mesoporous
Carbon (TMC-MC) such as Tungsten Carbide (WC) and Vanadium Carbide (VC) have been
synthesized using Rotary Evaporator. The synthesized WC-NRs have large surface area of
401.0302 ± 1.3208 m2/g. The pore width distributions are narrow, centered at 7.249Å with
pore volume 0.2384 cm3/g. The morphological results show the prepared materials having
rod like structures, approximately 100 nm in length and 25 nm in width. The Brunauer-
Emmett-Teller (BET) results show the TiO2 nanorods having large surface area (84.83 m2/g).
Finally, the DSSC has been fabricated using TiO2 nanorods as photoanode, N719 (ruthenium complex) as dye, Iodolyte HI-30 as electrolyte and platinum (Pt) as counter electrode. The photovoltaic parameters, namely the short-circuit current density (JSC), open-circuit voltage (VOC), fill factor and overall conversion efficiency (η) of the DSSC, were found to be 22.98 mA/cm2, 0.715 V, 59% and 9.76%, respectively. Efficiency enhancement is being investigated by improving the bonding characteristics of materials in the various segments of the DSSC.
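As a quick consistency check on these figures, the conversion efficiency follows from the standard relation η = (JSC × VOC × FF) / Pin. The sketch below assumes the usual AM1.5G input power density of 100 mW/cm2, which is not stated in the abstract.

```python
# Minimal sketch: efficiency from the reported J-V parameters.
# Assumes AM1.5G illumination (P_in = 100 mW/cm^2), not stated in the abstract.
J_sc = 22.98      # short-circuit current density, mA/cm^2
V_oc = 0.715      # open-circuit voltage, V
FF = 0.59         # fill factor
P_in = 100.0      # incident power density, mW/cm^2 (assumed)

eta = (J_sc * V_oc * FF) / P_in * 100.0   # conversion efficiency in %
print(f"Estimated efficiency: {eta:.2f} %")  # ~9.7 %, close to the reported 9.76 %
```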
IL-2
TUNING OF MACROSCOPIC PROPERTIES FROM MICROSCOPIC
PHENOMENA FOR NOVEL MATERIALS APPLICATIONS

Dr. Rita John


Professor & Head,
Department of Theoretical Physics
University of Madras, Chennai
Email: ritajohn.r@gmail.com

ABSTRACT: Science and technology have been growing by leaps and bounds like never before in the present decade. Most of the innovations in technology, including Artificial Intelligence (AI), Machine Learning (ML), Big Data Analysis, Robotics, Space Science, Spintronics, Photonics, etc., are due to the dynamic exploration of research in the basic sciences. Scientific investigations and findings, especially in Physics, fall under three major areas: ab initio, empirical, and experimental.
The materials consisting of more than one element from the periodic table with varied
compositions can be predicted using quantum mechanical theoretical prescriptions. It is possible
to tailor make materials possessing desired properties for crucial applications including bio
materials for organ replacement. In such predictions, an idea of bonding in the compound plays a
vital role. Semiconductors, for example, have revolutionised electronic industry. It is possible to
engineer the energy gap of semiconductors to suit a particular area of application. When we move
from elemental to binary, ternary to quaternary semiconductors, the covalent nature reduces with
increasing ionic nature. This mixed bonding nature can cause direct to indirect band gap
transitions. It also alters the valence and conduction band positioning and enables one to predict
materials for photonic, electronic and spintronic applications with an aid of doping with other
transition elements wherever necessary. It is also possible to polarise the spins. The majority spin channels show metallic and the minority channels show semiconducting nature in some of the Heusler alloys. The hybridisation of various electronic states is responsible for some of this interesting behaviour of these alloys. Investigations on half-metallicity are promising for giant magnetoresistance applications in industry. Two-dimensional materials are also promising: 2D materials of the fourth group of the periodic table, Graphene, Silicene, Germanene, and Stanene, show
interesting correlations in their structural, optical, and mechanical properties due to the changing
of sp2 to sp3 hybridisation. The presentation will cover some of the interesting results of the
materials mentioned highlighting the salient role played by the nature of bonding in them. Thus,
the macroscopic properties are tuned by the microscopic phenomena for promising applications
in the current technological evolution.

Key words: Mixed bonding, semiconductors, engineering band gaps, hybridization in 2D materials, half metallicity, Heusler alloys.

IL-3
RECENT TRENDS IN SCIENTIFIC DATA PROCESSING
Dr. D. Nedumaran
Professor & Head,
Central Instrumentation & Service Laboratory,
University of Madras, Guindy Campus, Chennai 600 025
E-mail: dnmaran@gmail.com, dnmaran@yahoo.com, dnmaran@unom.ac.in

ABSTRACT: In the digital arena, most of the scientific information (both signal and image) are
readily available in the form of digital data. In order to extract useful information from the data, data scientists and technocrats process and analyse it using hardware and software tools and
techniques. In this paper, the fundamental concepts of scientific data processing with some
applications are presented. Fundamentally, the scientific data primarily define the results obtained
from experiments in the form of signals or images. Usually, signals are one-dimensional in
nature, which can be analysed to get the hidden information, using mathematical functions.
Conversely, images have a two-dimensional representation, which can be processed to understand the details they contain. Digital signal and image processing is a computer-based technology used to process, manipulate and interpret information that is not readily available in the raw signal and image data. Signal and image processing techniques are widely employed in many fields of science and technology, such as biomedical, biometric, robotic, photographic, industrial, remote-sensing and scientific instrumentation applications. Researchers in these fields have been developing several hardware and software tools for enhancing the clarity and details of the signals/images. This paper gives an overview of the digital signal and image processing techniques and tools presently employed for various signal/imaging applications. In the case of biomedical signals such as ECG, EEG, echocardiography and phonocardiography, various signal processing operations like denoising, baseline wander removal, high-frequency noise removal and frequency spectrum estimation are dealt with through suitable examples. For imaging applications,
various processing operations like enhancement, restoration, compression, segmentation, edge-
detection, representation and recognition are covered in detail with examples. Then, signal and
image processing hardware and software tools employed are briefly discussed. Additionally, the
role of signal and image processing techniques employed in ECG analysis and in feature extraction for vision-based applications is illustrated. Finally, the recent developments in the respective fields are discussed with applications. The concluding remarks of this paper cover the highlights and
potentialities of data processing techniques in improving the quality of the signals and images for
various applications.
Key words: Data processing, Signal and Image processing, hardware and software tools,
biomedical applications.
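The one-dimensional operations listed above can be illustrated with a short sketch. The example below is not from the lecture; it simply shows frequency-spectrum estimation and crude low-pass denoising of a synthetic noisy signal using NumPy.

```python
import numpy as np

# Minimal sketch (not from the lecture): spectrum estimation and low-pass
# denoising of a synthetic 1-D signal, illustrating the operations listed above.
fs = 500.0                                     # sampling rate, Hz
t = np.arange(0, 2.0, 1.0 / fs)                # 2 s of data
clean = np.sin(2 * np.pi * 5 * t)              # 5 Hz component (e.g. a slow physiological rhythm)
noisy = clean + 0.5 * np.random.randn(t.size)  # add broadband noise

# Frequency-spectrum estimation via the FFT
spectrum = np.fft.rfft(noisy)
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)

# Crude high-frequency noise removal: zero all components above 20 Hz
spectrum[freqs > 20.0] = 0.0
denoised = np.fft.irfft(spectrum, n=t.size)

print("dominant frequency (Hz):", freqs[np.argmax(np.abs(np.fft.rfft(noisy)[1:])) + 1])
print("residual RMS error:", np.sqrt(np.mean((denoised - clean) ** 2)))
```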

IL-4
MICRO & NANO TECHNOLOGY – AN INTRODUCTION
MEMS & COMMERCIAL MICRO SENSORS IN NOVEL APPLICATIONS.

Dr.J.Jayapandian
Head & Scientific Officer – H (Rtd).
Electronics and Instrumentation Section, Surface & Nano Science Division.
Materials Science Group, Indira Gandhi Centre for Atomic Research.
Kalpakkam – 603 102. TN. India.
E-mail: jjpandian@gmail.com

ABSTRACT: Micro and nano technology is a cutting-edge area of research and development, with a focus on a wide range of interests spanning the discovery themes of Health and Wellness, Food Production and Safety, and Energy & Environment.

In the micro & nano area, tiny devices offer giant opportunities for technology, consumer products, energy-saving systems, environmental, health care, biomedical and other applications. The concepts that seeded micro and nanotechnology were first discussed in 1959 by the renowned physicist Richard Feynman in his classic talk There's Plenty of Room at the Bottom, in which he described the possibility of synthesis via direct manipulation of atoms. Inspired by Feynman's concepts, scientists and engineers subsequently started using the term "nanotechnology".

First came the invention of the scanning tunneling microscope (STM) in 1981, which provided unprecedented visualization of individual atoms and bonds and was successfully used to manipulate individual atoms in 1989. Its inventors, Binnig and Rohrer at the IBM Zurich Research Laboratory, received the Nobel Prize in Physics in 1986. Binnig, Quate and Gerber also invented the analogous atomic force microscope (AFM) that year.
Second, fullerenes (the buckyball is a representative member of these carbon structures) were discovered in 1985 by Harry Kroto, Richard Smalley, and Robert Curl, who together won the 1996 Nobel Prize in Chemistry.

Nanotechnology is the engineering of functional systems at the molecular scale. One nanometer (nm) is one billionth, or 10^-9, of a meter. Typical carbon-carbon bond lengths, or the spacing between these atoms in a molecule, are in the range 0.12–0.15 nm, and a DNA double helix has a diameter of around 2 nm.
Until the late 1980s, the main fundamental transduction modes used in sensors could be categorized as (a) thermal, (b) mass, (c) electrochemical, and (d) optical. Each of these detection modes is associated with features that are complementary rather than competitive with respect to the others, and the search for an ‘ideal transducer’ has continued.

During the last two decades, advances in microelectromechanical systems (MEMS) have
facilitated development of sensors that involve transduction of mechanical energy and rely
heavily on mechanical phenomena. Development of microfabricated cantilevers for atomic
force microscopy (AFM) signified an important milestone in establishing efficient
technological approaches to MEMS sensors.

The general idea behind MEMS sensors is that physical, chemical, or biological stimuli can
affect mechanical characteristics of the micromechanical transducer in such a way that the
resulting change can be measured using electronic, optical, or other means. In particular,
microfabricated cantilevers, together with read-out means that are capable of measuring 10^-12 to 10^-6 m displacements, can operate as detectors of surface stresses, extremely small
mechanical forces, charges, heat fluxes, and IR photons.

As device sizes approach the nano scale, their mechanical behavior starts resembling
vibrational modes of molecules and atoms. Dimensional scaling of cantilevers is associated
with respective scaling of their mass, frequency, and energy content. In the nanomechanical
regime, it is possible to attain extremely high fundamental frequencies approaching those of
vibrational molecular modes.

Ultimately, very small nanomechanical transducers can be envisioned as human-tailored molecules that interact controllably with both their molecular environment and readout components. Nanomechanical resonators with a mass of 2.34 x 10^-18 g and a resonance frequency of 115 MHz have been fabricated, and a displacement sensitivity of 2 x 10^-15 m Hz^-1/2 has been measured. Mass sensitivity of only a few femtograms has been reported recently using nanoscale resonators.

This talk will cover interdisciplinary activities and indigenous design effort/experience on micro-sensors such as microcantilever (MCL) sensors, their fabrication techniques and read-out mechanisms, and societal applications of micro sensors, in particular biomedical and sustainable agricultural applications. It will also discuss indigenous efforts in exploiting commercial MEMS sensors in novel applications.
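To put the quoted resonator figures in perspective, a simple harmonic-oscillator estimate (an assumption on our part, not a calculation from the talk) relates the effective stiffness to the quoted mass and resonance frequency through k = m(2πf)^2.

```python
import math

# Minimal sketch (simple-harmonic-oscillator assumption, not from the talk):
# effective stiffness of a nanomechanical resonator from its quoted mass and frequency.
m = 2.34e-18 * 1e-3        # quoted mass: 2.34 x 10^-18 g, converted to kg
f = 115e6                  # quoted resonance frequency, Hz

k = m * (2 * math.pi * f) ** 2   # effective spring constant, N/m
print(f"effective stiffness ~ {k:.2e} N/m")   # ~1.2e-3 N/m under this idealized model
```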
IL-5
AUTOMOTIVE RADARS

Commander Pannir Selvam Elamvazhuthi (Rtd.)


Chief Engineer & Chief Architect
Bengaluru

ABSTRACT
The automotive radar is used as a sensor and supports advanced driver assistance systems (ADAS). Specifically, it is used for adaptive cruise control (ACC) and features such as the lane change assist (LCA) system. It is also used to find the free space around a vehicle, which is needed in autonomous vehicle navigation. The final outcome expected from the development of autonomous vehicles is for cars to become driverless. Research and development have been going on since the early part of the last century and have become intensive during the last two decades. Society of Automotive Engineers (SAE) International, an automotive standardization body, classifies automated driving into various levels. These levels range from level 0, wherein the automated system issues warnings and may momentarily intervene, to level 5, where no human intervention is required.

The technology used in radars has been maturing from the time radars were used to detect bombers taking off during the world wars, to its commercial use in automotive radars. Automotive radars use millimetre-wavelength electromagnetic waves and can achieve resolutions down to centimetres, which is needed in autonomous vehicles. Radar, being a sensor that is more effective than other sensors, particularly when there is fog or rain or during the night, is expected to be used extensively in autonomous vehicles to reach Level 5.
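The centimetre-level resolution mentioned above follows from the standard range-resolution relation ΔR = c/(2B). The sketch below assumes a frequency-modulated automotive radar with 4 GHz of sweep bandwidth, a typical value that is not stated in the abstract.

```python
# Minimal sketch (assumed parameters, not from the abstract): range resolution
# of an FMCW automotive radar, Delta_R = c / (2 * B).
c = 3.0e8          # speed of light, m/s
B = 4.0e9          # assumed sweep bandwidth, Hz (typical for 77-81 GHz radars)

delta_R = c / (2 * B)
print(f"range resolution ~ {delta_R * 100:.2f} cm")   # ~3.75 cm, i.e. centimetre level
```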
IL-6
METABOLOMICS IN CANCER DIAGNOSIS USING AUTOFLUORESCENCE AND
RAMAN
Dr. Aruna Prakasarao
Department of Medical Physics, Anna University, Chennai, India
Email: *aruna@annauniv.edu; phone 00914422358721
ABSTRACT
Laser induced fluorescence spectroscopy (LIF) and Raman spectroscopy are sensitive to
molecular level changes. In particular, these methods are fast, non-invasive and quantitative.
Furthermore, they can also be used to elucidate key features, such as cellular metabolic rate, and
various alterations in cellular metabolic pathway. These features can be interpreted to shed light
on a variety of clinical problems, such as pre-malignant and cancerous growth. If applied successfully, optical spectroscopy has the potential to represent an important step towards advances in medical applications.

Biofluids contain various metabolites which reflect the diseased condition of a person and could potentially be used to diagnose cancer. The fluorescence emission from biofluids exhibits peaks due to various intrinsic fluorophores: tryptophan, collagen, NADH, FAD and porphyrin. Tryptophan is an amino acid, a fundamental building block of protein, and the alteration of this amino acid level may reflect the conformational state and function of the protein. The structural protein collagen is primarily present in the extracellular matrix of tissue, and its fluorescence emission may elucidate changes in the structural integrity of cancer tissues. Fluorescence emission of the endogenous fluorophores of the tissue is also sensitive to their microenvironment. The coenzymes FAD and NADH are fluorescent and act as biomarkers for monitoring the oxidation-reduction state in cells. Since cancer cells have an increased metabolic demand compared to normal cells, the concentration of these fluorophores varies significantly. Further, fluorescence spectroscopy also enables the study of the microenvironmental conditions of molecules in cells, such as pH, viscosity, temperature, etc.

Raman scattering is an inelastic scattering process which arises from perturbations of the molecules that induce vibrational or rotational transitions, and infrared lasers are used for excitation to avoid overlap with fluorescence. Raman spectroscopy has been used to analyse various biomolecules such as nucleic acids, amino acids, proteins, antioxidants, carbohydrates and lipids. These spectroscopic techniques were used not only to diagnose cancer but also to monitor the treatment efficacy of cancer patients.

Keywords: Metabolite, autofluorescence, Raman, blood, urine, saliva
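As a small numerical aside (not part of the abstract), the Raman shift in wavenumbers relates the excitation and scattered wavelengths through Δν = 1/λexc − 1/λscat. The sketch below assumes a 785 nm near-infrared excitation line, a common but here hypothetical choice.

```python
# Minimal sketch (assumed values, not from the abstract): converting a Raman
# shift in wavenumbers to the scattered (Stokes) wavelength.
lambda_exc_nm = 785.0          # assumed NIR excitation wavelength, nm
shift_cm1 = 1000.0             # example Raman shift, cm^-1

nu_exc_cm1 = 1.0e7 / lambda_exc_nm          # excitation wavenumber, cm^-1 (1e7 nm per cm)
nu_scat_cm1 = nu_exc_cm1 - shift_cm1        # Stokes-shifted wavenumber
lambda_scat_nm = 1.0e7 / nu_scat_cm1
print(f"scattered wavelength ~ {lambda_scat_nm:.1f} nm")   # ~851.9 nm
```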


CONTRIBUTED PAPERS FROM PHYSICS

P-1
CHARACTERIZATION OF UNIDIRECTIONAL GROWN BIS-THIOUREA CADMIUM
CHLORIDE SINGLE CRYSTAL FOR ELECTRO OPTIC APPLICATIONS

D. Jayanthi Margret (a), S. Vinoth Kumar (a), R. Uthrakumar (b)*

(a) Assistant Professor, Hindustan College of Arts & Science, Chennai, TN, India
(b) Department of Physics, Govt. Arts College (Autonomous), Salem, TN, India
*Corresponding author: uthraloyola@yahoo.com

ABSTRACT
An optically clear single crystal of Bis-thiourea cadmium chloride (BTCC) has been successfully grown from aqueous solution by the unidirectional growth technique. Single crystal X-ray diffraction analysis confirms that the crystal is a complex that crystallizes in the orthorhombic system with space group P212121. The dielectric and optical transmittance studies on this sample reveal the minimum absorption region, which makes it well suited for optical applications.

1. INTRODUCTION
Crystal growth from solution, in particular the unidirectional solution technique, has been widely used to grow nonlinear optical quality single crystals [1]. A new approach to high-efficiency, low angular sensitivity organic-based nonlinear optical materials is to consider compounds in which a highly polarizable organic molecule can be attached to an inorganic host to form semi-organic crystals, as they have large nonlinearity, high resistance to laser-induced damage, low angular sensitivity and good mechanical hardness [2-4]. This paper reports the synthesis, growth aspects and characterization studies of Bis-thiourea cadmium chloride. The grown crystals were subjected to X-ray, optical, hardness and thermal studies, which are presented and discussed.

2. MATERIAL SYNTHESIS
The slow solvent evaporation technique was used for the development of the title compound. Transparent colourless seed crystals were obtained after a time span of 7 days. The ampoule was then rested in the unidirectional growth setup. Due to the rapid evaporation at the top of the ampoule, the overall concentration increased at the bottom and promoted crystal growth along the ampoule along the specified axis. The growth rate of the crystal was found to be around 2 mm per day. A crystal of 50 mm length has been grown successfully within a period of 25 days. The grown crystal shows a cylindrical morphology, and a photograph of the grown crystal is shown in Figure 1.

Fig. 1 Photograph of the BTCC single crystal


3 RESULTS AND DISCUSSION
3.1 X-ray diffraction analysis
Single crystal X-ray diffraction analysis of the grown BTCC crystal has been carried out to confirm the crystallinity and to identify the unit cell parameters, using a Bruker Kappa APEX-2 diffractometer with MoKα (λ = 0.71073 Å) radiation. The crystal belongs to the orthorhombic system with space group P212121. The unit cell dimensions are a = 9.952 Å, b = 13.484 Å, c = 7.255 Å, α = β = γ = 90°.

3.2 Dielectric studies


Dielectric measurements were accomplished on L-proline lithium bromide (LPLB) single crystals employing a HIOKI HITESTER model 3532-50 LCR meter and a traditional two-terminal sample holder. A sample of dimension 2 × 2 × 1 mm3 was placed inside a dielectric cell whose capacitance was measured at various temperatures for different frequencies. The material is characterized by loading a resonant cavity, and the sample permittivity is evaluated from the shift of the resonant frequency value compared to that of the empty (unloaded) cavity. The dielectric constant and dielectric loss have been calculated using equations (1) and (2):

ε′ = Cd / (A ε0)      (1)
ε″ = ε′ tan δ         (2)

where C is the measured capacitance, d is the thickness of the sample and A is the area of the sample. The observations
are made in the frequency range 100 Hz to 5 MHz at different temperatures. The dielectric measurements of the optical-quality L-proline lithium bromide (LPLB) crystal are shown in figures 3 and 4. From the spectra, it is observed that the dielectric constant and dielectric loss decrease slowly with increasing frequency and attain saturation at higher frequencies. The high dielectric constant of the crystal at low frequency is attributed to space charge polarization. In accordance with Miller’s rule, the lower value of the dielectric constant at higher frequencies is a suitable parameter for the enhancement of the SHG coefficient. The variation of the dielectric constant is due to the incorporation of metal ions inside the L-proline lithium bromide crystal lattice; also, the characteristic low dielectric loss at high frequency for the sample suggests that the crystal possesses enhanced optical quality with fewer defects, and this parameter plays a vital role in the construction of devices from nonlinear optical materials.
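Equation (1) can be evaluated directly from a measured capacitance. The snippet below is a minimal sketch with purely illustrative values, since the measured capacitances themselves are not reported in the paper.

```python
# Minimal sketch (illustrative values only; measured capacitances are not reported here):
# dielectric constant and loss from equations (1) and (2).
eps0 = 8.854e-12        # permittivity of free space, F/m
d = 1.0e-3              # sample thickness, m (1 mm, as in the 2 x 2 x 1 mm^3 sample)
A = 2.0e-3 * 2.0e-3     # electrode/sample area, m^2
C = 15.0e-12            # hypothetical measured capacitance, F
tan_delta = 0.01        # hypothetical loss tangent

eps_r = C * d / (A * eps0)       # equation (1)
eps_loss = eps_r * tan_delta     # equation (2)
print(f"dielectric constant ~ {eps_r:.1f}, dielectric loss ~ {eps_loss:.2f}")
```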

Fig. 2 (a) Dielectric constant versus log frequency (f); (b) Dielectric loss versus log frequency (f), measured at 323 K and 353 K.
3.3. Optical transmittance studies
The transmittance spectrum of the grown single crystal was recorded using a Varian Cary 5E UV-vis-NIR spectrometer in the range of 200–1200 nm with a crystal of 2 mm thickness, as shown in Figure 3. From the transmission spectrum, a maximum transparency of 85% and a UV cut-off wavelength of 330 nm were observed for the BTCC single crystal. The very high transmission in the entire visible and near-IR region and the short cut-off wavelength make it a potential NLO material for second harmonic frequency doubling.
Fig. 3 UV-transmittance spectrum of BTCC

4. CONCLUSION
Good quality single crystals of Bis-thiourea cadmium chloride (BTCC) were grown by the unidirectional growth technique. Single crystal X-ray analysis confirmed that BTCC belongs to the orthorhombic system with space group P212121. The dielectric study shows that the dielectric constant decreases with increasing frequency. Optical transmittance studies on these samples reveal minimum absorption in the region of 230 nm.

REFERENCES
[1] H. G. Gallazher, R. M. Vercelj, J. N. Sherwood, J. Cryst. Growth 250 (2003) 486.
[2] M. H Jiang, D. Xu, G. C.Xing, Z. S.Zhao, Synth. Cryst.3 (1985)1.
[3] W. B.Hoi, M. H.Jiang, N. Zhang, M. G.Liu, X. T.Tao, Mater. Res. Bult. 28 (1993) 645.
[4] R.Uthrakumar, C. Vesta, G. Bhagavannarayana, R. Robert, S. Jerome Das, J. Alloys and
Comp. 509(2011) 2343-2347

P-2
ACOUSTICAL AND SPECTRO STUDIES ON NANO TiO2 INTRODUCED INTO
PEG200 POLYMER MATRIX

S. Sarala (1,2,*), Dr. S. K. Geetha (2)

(1) Department of Physics, Kanchi Shri Krishna College of Arts & Science, Kilambi, Kancheepuram, TN.
(2) Department of Physics, Government Arts College for Men, Chennai, TN.
*Corresponding author: saralaaswanth@gmail.com

ABSTRACT
Ultrasonic studies were performed before and after dispersing TiO2 into PEG 200. From these studies various acoustical parameters, namely adiabatic compressibility (β), free volume (Vf), intermolecular free length (Lf), internal pressure (πi), ultrasonic velocity and specific acoustical impedance (Z), were calculated. FTIR and UV-vis spectroscopic studies were also carried out. The measured ultrasonic velocity and free volume confirmed the dispersion of TiO2 into the PEG matrix.

INTRODUCTION
The aim of this paper is to evaluate the effect of the polymer PEG used as an additive for colloidal suspensions of TiO2 particles. The TiO2 colloidal suspensions were prepared in the presence/absence of PEG 200, which was chosen because it has terminal hydroxyl groups that may form hydrogen bonds with the surface OH groups in order to form C-O-Ti covalent bonds with the oxide network.

EXPERIMENTAL STUDIES
In the present study, 1.5 g of PEG 200 was weighed, dissolved in 75 ml of toluene and taken in a clean bottle. The solution was stirred using a magnetic stirrer so that the PEG 200 dissolved completely. The viscosity, density and velocity of 2 MHz ultrasonic waves in the solution were then determined. Next, 0.130 g of nano TiO2 was weighed on an electronic balance and dispersed in that solution (75 ml of toluene, 1.5 g of PEG). Using the same methods, the viscosity, density and velocity of 2 MHz ultrasonic waves in the solution with TiO2 were determined. FTIR analyses were carried out on the toluene-PEG 200 solution in the presence/absence of TiO2, and UV-Vis spectroscopic studies were similarly performed on the same solutions.

TABLE 1: EXPERIMENTAL STUDY OF (PEG 200 + TOLUENE) & (PEG 200 + TOLUENE + TiO2) AT 306 K

Parameter                                   PEG 200 + Toluene (306 K)   PEG 200 + Toluene + TiO2 (306 K)
Concentration (%)                           2                           2
Effective molecular weight Meff (g/mol)     93.272                      93.240
Density (kg/m3)                             856.5895                    855.389
Relative viscosity (cP)                     0.001008                    0.001009
Ultrasonic velocity U (m/s)                 1243.081                    1240.951

TABLE 2: PARAMETERS CALCULATED FROM ULTRASONIC STUDIES ON (PEG 200 + TOLUENE) & (PEG 200 + TOLUENE + TiO2) AT 306 K

Parameter                                      PEG 200 + Toluene (306 K)   PEG 200 + Toluene + TiO2 (306 K)
Adiabatic compressibility β (m2/N)             7.55489E-10                 7.59149E-10
Intermolecular free length Lf (m)              5.73911E-11                 5.75575E-11
Free volume Vf (m3/mol)                        1.39322E-07                 1.38681E-07
Internal pressure πi (atm)                     384249713.7                 380641564.1
Specific acoustical impedance Z (kg m-2 s-1)   1064810.132                 1061495.835
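The compressibility and impedance values in Table 2 follow from the standard ultrasonic relations β = 1/(ρU²) and Z = ρU. The short sketch below simply recomputes them from the Table 1 data as a cross-check.

```python
# Minimal sketch: recomputing two of the Table 2 parameters from Table 1 data
# using the standard ultrasonic relations beta = 1/(rho*U^2) and Z = rho*U.
samples = {
    "PEG 200 + Toluene":        (856.5895, 1243.081),   # (density kg/m^3, velocity m/s)
    "PEG 200 + Toluene + TiO2": (855.389, 1240.951),
}

for name, (rho, U) in samples.items():
    beta = 1.0 / (rho * U * U)   # adiabatic compressibility, m^2/N
    Z = rho * U                  # specific acoustical impedance, kg m^-2 s^-1
    print(f"{name}: beta = {beta:.5e} m^2/N, Z = {Z:.3f} kg m^-2 s^-1")
# Output matches Table 2: ~7.555e-10 and ~7.591e-10 m^2/N; ~1.0648e6 and ~1.0615e6 kg m^-2 s^-1.
```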
TABLE 3: PERCENTAGE CHANGE OF ULTRASONIC PARAMETERS AFTER INTRODUCING TiO2

Density (kg/m3)                        0.140
Relative viscosity (cP)                0.099
Ultrasonic velocity U (m/s)            0.171
Adiabatic compressibility β (m2/N)     0.4845

TABLE 4

Intermolecular free length Lf (m)              0.289
Free volume Vf (m3/mol)                        0.459
Internal pressure πi (atm)                     0.939
Specific acoustical impedance Z (kg m-2 s-1)   0.311
[FTIR spectra: FTIR graph for PEG200 and FTIR graph for PEG200 with TiO2 (transmittance % versus wavenumber, 3500–500 cm-1).]

(a) UV-Vis absorbance spectrum of PEG200; (b) UV-Vis absorbance spectrum of PEG200 + TiO2.

TABLE 5: COMPARISON OF WAVENUMBERS AND TRANSMISSION (FTIR DATA WITHOUT AND WITH TiO2)

Compound        Wavenumber (cm-1)   Transmission (%)   Absorption + reflection (%)
PEG200          3390.83             68                 32
                2868.18             46                 54
                1247.21             40                 60
TiO2            3441.90             62                 38
                2853.20             90                 10
                1239.89             92                 8
PEG200 + TiO2   3384.74             65                 35
                2871.60             59                 41
                1247.55             53                 47

TABLE 6

TiO2 concentration (%)   % change in free volume
2                        0.461
4                        0.232
6                        0.153
8                        0.115
10                       0.092

Graph 7: Percentage change in free volume versus TiO2 concentration (%).

DISCUSSION
From Tables 1 and 2, the density changed from 856.5895 kg/m3 to 855.389 kg/m3, showing a decrease in density along with an increase in viscosity, and the ultrasonic velocity decreased from 1243.081 m/s to 1240.951 m/s, confirming that TiO2 is dispersed into the PEG 200 matrix. The free volume decreased from 1.39322E-07 m3/mol to 1.38681E-07 m3/mol after adding TiO2. The decrease in free volume again confirms that TiO2 has been successfully introduced into the PEG 200 matrix. There is a decrease of specific acoustical impedance from 1064810.132 kg m-2 s-1 to 1061495.835 kg m-2 s-1. From the density measurements, the calculation shows that 30 mg of nano TiO2 has dispersed into 1.5 g of the PEG 200 matrix. Hence the weight percentage of TiO2 in PEG 200 is

(30 mg / 1500 mg) × 100 = 2%

So all the above calculations from the ultrasonic studies confirm that TiO2 is successfully introduced into the PEG matrix. For further increased concentrations of TiO2 in PEG 200, the variation can be seen in Graph 7.

CONCLUSION
The results of these experiments allowed the choice of a PEG which gives a far better dispersion and adhesion of TiO2 particles on substrates. The ultrasonic studies confirm that TiO2 is successfully introduced into the matrix of PEG 200. Although our experiment introduced, by choice, 2% of TiO2, higher concentrations can also be introduced by taking more TiO2. Further such studies of TiO2 in polymer matrices can be carried out so that they can be used for water purification, catalysis, solar cell applications, conducting paints, pollution control, surface detection, etc.

REFERENCES
[1] A. Fujishima and K. Honda, NATURE,1972,vol 238, pp(37-38).
[2] J. Yu, X. Zhao and Q. Zhao,MATER. CHEM. PHYS.,2001,vol 69,pp( 25-29).
[3] B. O’Regan and M. Gratzel, Nature, 1991, 353,p 737-740.
[4] R. D. McConnell, Renew. Sust.Energ. Rev., 2002, 6,p 271- 293.
[5] W. Göpel and G. Reinhardt In: H. Baltes, W. Göpeland J. Hesse, “Sensors Update”, Editors,
Wiley, New York, 1996, p. 47.

P-3
SYNTHESIS, GROWTH AND CHARACTERIZATION OF 2-AMINO-5-
NITROPYRIDINE 4-CHLOROBENZOIC ACID (1:1): A NEW ORGANIC SINGLE
CRYSTAL FOR THIRD-ORDER NONLINEAR OPTICAL (NLO) APPLICATIONS

V. Sivasubramani, Muthu Senthil Pandian*, P. Ramasamy


SSN Research Centre, SSN College of Engineering, Kalavakkam, Chennai, TN
Corresponding author: sivasubramaniv1989@gmail.com; *senthilpandianm@ssn.edu.in

ABSTRACT
Organic nonlinear optical (NLO) 2-amino-5-nitropyridine 4-chlorobenzoic acid (1:1)
(2A5NP4CB) single crystals have been grown by the slow evaporation solution technique (SEST) for the first time in the literature. The structural properties of the grown crystal have been determined by single crystal X-ray diffraction (SXRD) analysis. The results reveal that the grown crystal belongs to the monoclinic crystal system with the centrosymmetric space group P21/n. The molecular weight and the density of the 2A5NP4CB crystal have been found to be 295.68 g/mol
and 1.538 Mg/m3, respectively. The presence of functional groups and its molecular structure
have been confirmed by FTIR and NMR spectrum analysis, respectively. By employing
unidirectional Sankaranarayanan-Ramasamy (SR) method, optically transparent 2A5NP4CB
single crystal has been grown with the size of about 30 mm length and 10 mm diameter over a
period of 60 days. The comparative investigations of 2A5NP4CB crystals grown by SEST and
unidirectional SR method have been carried out by HRXRD, UV-Vis NIR, chemical etching,
Vickers microhardness and laser damage threshold analyses. The overall results show that the
crystal grown by unidirectional SR method possesses high quality compared to conventional
SEST grown crystals. The third-order nonlinear behavior of the grown crystal was studied using the Z-scan technique by employing a He-Ne laser of wavelength 532 nm, and it reveals that the grown 2A5NP4CB crystal can serve as a promising candidate for NLO device applications. The optical limiting behavior was studied under the He-Ne laser with a wavelength of 532 nm, and the limiting threshold of the grown crystal was found to be 7.4 mW/cm2. The optimized molecular structure, frontier molecular orbitals (FMOs), linear polarizability, first-order hyperpolarizability and natural bond orbitals (NBO) of the 2A5NP4CB molecule were obtained by density functional theory (DFT).
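For readers unfamiliar with the Z-scan analysis mentioned above, the open-aperture normalized transmittance is commonly modelled, for small nonlinear absorption, as T(z) ≈ 1 − q0/(2√2 (1 + z²/z0²)) with q0 = βI0Leff. The sketch below uses purely illustrative parameter values, not values reported in this work.

```python
import numpy as np

# Minimal sketch (illustrative values, not from this paper): first-order model of an
# open-aperture Z-scan trace, T(z) ~ 1 - q0 / (2*sqrt(2)*(1 + (z/z0)^2)),
# with q0 = beta * I0 * L_eff (valid for small nonlinear absorption).
beta = 1.0e-11        # hypothetical nonlinear absorption coefficient, m/W
I0 = 5.0e12           # hypothetical on-axis peak intensity at focus, W/m^2
L_eff = 1.0e-3        # hypothetical effective sample thickness, m
z0 = 5.0e-3           # hypothetical Rayleigh range, m

z = np.linspace(-30e-3, 30e-3, 7)             # sample positions along the beam axis, m
q0 = beta * I0 * L_eff / (1 + (z / z0) ** 2)
T = 1 - q0 / (2 * np.sqrt(2))                 # normalized transmittance
for zi, Ti in zip(z, T):
    print(f"z = {zi * 1e3:+6.1f} mm  ->  T = {Ti:.4f}")   # dip at z = 0 marks the focus
```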

Fig 1. (a) ORTEP view of 2A5NP4CB with the atom numbering scheme, (b) SEST grown
2A5NP4CB single crystals and (c) SR method grown
2A5NP4CB single crystal

P-4
STABILIZATION OF NANOFLUIDS WITH MECHANICAL DC VIBRATOR

U. Rajeshwer1, Rita John1*.


1
Department of Theoretical physics, University of Madras University, Chennai, TN.
*Corresponding author: ritajohn.r@gmail.com

ABSTRACT
Nanoparticle agglomeration is an unavoidable challenge in nanofluid applications. Many researchers are attempting to bring about maximum stability in nanofluids through different approaches. Capping agents, surfactants and ultrasonication are some of the methods widely practised to reduce particle agglomeration. In this paper, a mechanical DC vibrator is used to enhance the stability factor of a bamboo-based activated carbon nanofluid, and its viability is ensured through the visual inspection method. The settling time is compared for samples with respect to the time elapsed, before and after introducing the DC vibrator. This paper also deals with the construction of the DC vibrator through relevant schematic diagrams and circuits.
Keywords: Nanofluids; Nanoparticles; Stability factor; Capping agents; Surfactants; Ultrasonication; DC vibrator; Activated carbon

P-5
SYNTHESIS OF CARBON DOTS SUPPORTED SILVER NANOPARTICLES FOR THE
SELECTIVE AND SENSITIVE DETECTION OF GLUTATHIONE

M. Sandhiyamalathi, M. Chandhru, S. Kutti Rani* and N. Vasimalai*


Department of Chemistry, B.S. Abdur Rahman Crescent Institute of Science and Technology,
Vandalur, Chennai, TN.
*Corresponding author: skrani@crescent.education, vasimalai@crescent.education

ABSTRACT
Glutathione is an important antioxidant and the main non-protein thiol in mammalian cells; it participates in many critical cellular functions including antioxidant defense and cell growth [1]. An increased glutathione level occurs early during liver regeneration and in drug- and radiation-resistant tumours [1]. Therefore, the detection of glutathione using a novel nanomaterial as probe is important today. For example, carbon dots (CDs) are a new class of fluorescent carbon material with a particle size of less than 10 nm [2]. They are extensively used in several fields due to their fascinating properties such as good biocompatibility, excellent photostability, low toxicity, high water solubility, simple preparation, high sensitivity and excellent selectivity towards target analytes [2,3]. On the other hand, silver nanoparticles have attracted substantial interest from the scientific community for over a century; because of their unique size and shape, silver nanoparticles have been extensively used in several applications [4]. Herein, we report the facile synthesis of carbon dots supported silver nanoparticles (CDs@AgNPs) by a wet chemical method. The synthesized CDs@AgNPs were well characterized by several techniques. The CDs@AgNPs show a surface plasmon resonance (SPR) band at 408 nm. Interestingly, after the addition of glutathione, the SPR band of CDs@AgNPs decreased and the fluorescence of CDs@AgNPs was enhanced. Based on the fluorescence and SPR band changes, we have calculated the concentration of glutathione (Fig. 1). This method was successfully applied for the detection of glutathione in different biological samples.

Fig. 1. (A) UV-vis spectrum of CDs@AgNPs. Inset: photograph of CDs@AgNPs. (B) UV-vis spectra of CDs@AgNPs in the presence of different concentrations of glutathione.
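The concentration read-out described above is typically obtained from a calibration curve. The snippet below is a generic sketch with made-up absorbance values, not data from this work, showing a linear calibration of the SPR absorbance decrease against glutathione concentration and its inversion for an unknown sample.

```python
import numpy as np

# Minimal sketch (made-up numbers, not data from this work): linear calibration of
# SPR absorbance change versus glutathione concentration, then inversion for an unknown.
conc_uM = np.array([0.0, 5.0, 10.0, 20.0, 40.0])        # standard concentrations, micromolar
delta_A = np.array([0.00, 0.06, 0.11, 0.23, 0.45])      # hypothetical decrease of the 408 nm band

slope, intercept = np.polyfit(conc_uM, delta_A, 1)       # least-squares straight line
unknown_delta_A = 0.17                                   # hypothetical response of a sample
estimated_conc = (unknown_delta_A - intercept) / slope
print(f"calibration: dA = {slope:.4f}*C + {intercept:.4f}; estimated C ~ {estimated_conc:.1f} uM")
```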
REFERENCES
[1] D. Scibior, M.Skrzycki, M. Podsiad, H. Czeczot, Glutathione level and glutathione-
dependent enzyme activities in blood serum of patients with gastrointestinal tract tumours,
Clinical Biochemistry 10 (2008) 852.
[2] N. Vasimalai, V. Vilas-Boasa, J. Gallo, M.T. Fernández-Argüelles, Green synthesis of
fluorescent carbon dots from spices for in-vitro imaging and tumour cell growth
inhibition, Beilstein Journal of Nanotechnology, 9 (2018) 530.
[3] J. Gallo, N. Vasimalai, M.T. Fernandez-Arguelles, M. Bañobre-López, Green synthesis of
multimodal ‘OFF-ON’ switchable MRI/optical probes, Dalton Transactions, 45 (2016)
17672.
[4] M. Chandhru, R. Logesh, S.K. Rani, N. Ahmed, N. Vasimalai, One-pot green route synthesis
of silver nanoparticles from jack fruit seeds and their antibacterial activities with escherichia
coli and salmonella bacteria, Biocatalysis and Agricultural Biotechnology 20 (2019) 101241.

P-6
STRUCTURAL AND ELECTRICAL PROPERTIES OF (1-x)Bi1/2Na1/2TiO3–xBaTiO3 SINGLE CRYSTALS ACROSS THE MORPHOTROPIC PHASE BOUNDARY BY TSSG METHOD

M. William Carry*, Muthu Senthil Pandian*, P. Ramasamy


SSN Research Centre, SSN College of Engineering, Chennai, TN
*Corresponding author: carrywilliam5234@gmail.com

ABSTRACT
The piezoelectric solid solution 0.94(Na0.5Bi0.5)TiO3-0.06BaTiO3 (NBT-xBT) is a promising material to substitute for the environmentally undesirable Pb-based piezoelectrics. Lead-free piezoelectric single crystals with compositions near the morphotropic phase boundary (MPB) have been grown by the top-seeded solution growth (TSSG) method. NBT-xBT (x = 0.04, 0.05, 0.06 and 0.07) single crystals with different compositions, covering the rhombohedral to predominantly monoclinic phase and encompassing the morphotropic phase boundary (MPB), were grown by the TSSG method. The Laue diffraction confirms the grown crystal as a single crystal with a mosaic structure. Rietveld refinement studies revealed the presence of a phase
boundary between the monoclinic (Cc) and rhombohedral (R3c) phases near the MPB, where the dielectric and piezoelectric properties are enhanced. The dielectric, ferroelectric and piezoelectric properties along the <001> direction have been measured.
FIGURE: Rietveld plot of PXRD for NBBT 94/6, fitted with rhombohedral (R3c) and monoclinic (Cc) structural models (intensity in cps vs. 2 theta in degrees).

P-7
LUMINESCENCE PROPERTIES AND ENERGY TRANSFER PROCESS OF Ce3+ AND Mn2+ CO-ACTIVATED SrB2Si2O8 RED EMITTING PHOSPHOR

Dr. V. Pushpamanjari1, Dr. B. Sailaja2,* and Dr. R.V.S.S.N. Ravikumar3

1Assistant Professor, Bapatla Women's Engineering College, Bapatla, AP
2Head of General Section, Govt. Polytechnic, Addanki, AP
3Department of Physics, Acharya Nagarjuna University, Nagarjuna Nagar, AP
*Corresponding author: drsailajaballa@gmail.com

ABSTRACT
Phosphor-converted white-light-emitting diodes (pc-WLEDs) are becoming the next-generation solid-state lighting source in both indoor and outdoor lighting areas. Traditional WLEDs are composed of a blue LED chip and yellow YAG:Ce3+ phosphor; however, they emit cold white light because the emission spectra do not cover the red region. Therefore, a red phosphor is very important to produce warm white luminescence and to improve the luminous efficiency. Ce3+ and Mn2+ co-activated SrB2Si2O8 red emitting phosphor was synthesized via the conventional high temperature solid state reaction method. In this study the photoluminescence (PL) of Ce3+ and Mn2+ co-activated SrB2Si2O8 as well as the energy transfer (ET) from Ce3+ to Mn2+ were examined. Phase purity of the prepared samples was studied by X-ray powder diffraction. To elucidate the photoluminescent properties of Ce3+ and Mn2+ co-activated SrB2Si2O8, emission and excitation spectra along with diffuse reflectance spectra were recorded. Additionally, fluorescence lifetime measurements were performed. Furthermore, the correlated colour temperature (CCT) and chromaticity coordinates were calculated. It was found that Ce3+ and Mn2+ co-activated SrB2Si2O8 possesses two emission bands located in the violet/blue and deep red range of the electromagnetic spectrum. The energy transfer from Ce3+ to Mn2+ in the co-activated SrB2Si2O8 phosphor takes place via an exchange interaction mechanism. The Ce3+ and Mn2+ co-activated SrB2Si2O8 phosphor will be a promising red emitting phosphor for achieving warm white light in high power LEDs.

Figure 1. (a) Symbolic representation, (b) packaged conventional blue LED chip, (c) white light generation from red and green phosphors, (d) white light generation from yellow phosphor, (e) white light emission from phosphor in glass (PIG) operated with a blue chip, and (f) excitation and emission process initiated from the two thermally coupled (TC) energy levels.

P-8
FT-IR STUDIES OF Nd3+ DOPED ALKALI ZINC BORATE GLASS

B. Sailaja1*, G. Tirumala Rao2, R.V.S.S.N. Ravikumar3

1Head of General Section, Govt. Polytechnic, Addanki, A.P.
2Physics Division, GMR Institute of Technology, Rajam, AP.
3Department of Physics, Acharya Nagarjuna University, Nagarjuna Nagar, AP.
*Corresponding author: drsailajaballa@gmail.com

ABSTRACT
Advancement in the development of a variety of novel solid-state laser active media and infrared-to-visible converters is a major interest of research in science and technology. Green light sources based on frequency upconversion, used as photonic devices for optoelectronics, high density optical storage and medical instrumentation for diagnostic purposes, are at present extensively explored. The structural study of such novel rare earth doped materials, especially green-light emitting Nd3+ doped into alkali zinc borate glass, is worthwhile owing to its probable luminescent nature, which makes it suitable for optical device applications. In the present study, 0.1 mol% Nd2O3 doped into 20ZnO-15Li2O-15Na2O-49.9B2O3 glass was formed by the melt quench technique, after heating up to 1243 K and annealing at 673 K. The FT-IR spectrum indicates four prominent bands. The band at 492 cm-1 is due to Li ions and the band at 717 cm-1 is due to B-O-B bonding. The absence of the band near 806 cm-1 indicates that boroxol rings are not present in the glass network. The band at 1015 cm-1 is caused by the significant and active B-O stretching modes of BO4 units, and the band at 1381 cm-1 is attributed to B-O stretching of BO3 units. Hydrogen bond bands are missing beyond 1600 cm-1, indicating a closely packed structure. The absence of hydroxyl groups around 3200 cm-1 reveals that the material under investigation is without optical loss; in turn, the quantum efficiency increases with decrease of OH content, confirming that the Nd3+ doped ZLNB glass could be a good luminescent material.

Figure: FT-IR spectrum of 0.1 mol % of Nd3+ doped ZLNB glass



P-9
EFFICIENT SEGMENTATION & CLASSIFICATION METHODS FOR MRI BRAIN
IMAGES

E. Synthiya Judith Gnanaselvi1*, Dr. M. Mohamed Sathik2

1Research Scholar, Bharathiar University, Coimbatore, TN
2Principal, Sadakathullah Appa College, Tirunelveli, TN
*Corresponding author: judithsynthiya@gmail.com

ABSTRACT
In the field of medical imaging, the segmentation and classification of brain tumours is a complex and important area of study because it is essential for early tumour diagnosis and for the treatment of brain tumours and other neurological complaints. This paper is an in-depth analysis of diverse methods used in segmenting and classifying brain tumour images and measures the performance of such methods. Generally in medical imaging, segmentation of brain tumour images is performed manually. It requires immense skill combined with expertise and experience. Apart from being time consuming, manual brain tumour delineation is complicated. Hence more and more research is undertaken in all corners of the world to derive a method which will surpass all present disadvantages. Among many methods, three are considered promising and a comparative study of them has been developed here. Performance metrics such as accuracy, PSNR and MSE are used to scrutinize the performance of these methods. From the experimental outcomes, the classification accuracy was found to be very high using the Chan-Vese segmentation method with the Random Forest (RF) classifier.
Keywords: Classification, Segmentation, Random Forest, Radial Basis Function, Chan-Vese

1. INTRODUCTION
The brain is an imperative component in the structure of the human body and has a highly complex structure. A brain tumour is the abnormal development of uncontrolled cancerous tissue in the brain. Some of the primary brain tumours commonly found include glioma, meningioma, pituitary adenomas, and nerve sheath tumours. This paper considers only glioma and meningioma in the analysis [1]. In recent years, research work on automatic segmentation and classification of brain tumours has risen, resulting in demand for this area of research, and it is still in progress. The region based Active Contour Method (ACM) for segmentation by A. Shenbagarajan et al. [2] presents high accuracy, sensitivity and specificity measures. Xiuming Li et al. [3] establish that it is very hard to segment MR images with intensity inhomogeneity using the Chan-Vese (CV) model.

The rest of the paper is organized as follows: an overview of segmentation methods is offered in Section 2, classification methods are described in Section 3, the performance metrics and result analysis are presented in Section 4, and finally conclusions are made in Section 5.

2. SEGMENTATION METHODS
2.1. Geodesic ACM
The geodesic ACM follows a technique based on active contours evolving in time according to intrinsic geometric measures of the image. The evolving contours
naturally split and merge, allowing the simultaneous detection of several objects and of both interior and exterior boundaries. The relation between active contours and the computation of geodesics or minimal distance curves is maintained. This geodesic approach to object segmentation makes it possible to connect classical "snakes" based on energy minimization with geometric active contours based on the theory of curve evolution [4].

2.2. Region Based ACM


The region based ACM for segmentation aims to drive the curves to reach the boundaries of objects in the input MRI brain images. It deals well with noisy images, blurred images, and images that have multiple holes, disconnected regions, etc. It is suited to MRI brain image analysis since it considers global properties of images, such as contour lengths and MRI image pixel regions, rather than local properties such as gradients [2].

2.3. Chan Vese Model


The Chan-Vese model is a prevailing and flexible method which is able to segment many types of images that are difficult to handle with thresholding or gradient based methods. The model is based on an energy minimization problem, which can be reformulated in the level set formulation, leading to an easier way to solve the problem [5].
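As a concrete illustration of this type of level-set segmentation (not the code used in the study), the short Python sketch below applies scikit-image's Chan-Vese implementation to a grayscale MR slice; the file name brain_slice.png and the parameter values are assumptions for the example.

```python
# Minimal sketch: Chan-Vese segmentation of a grayscale MR slice (illustrative).
from skimage import io, img_as_float
from skimage.segmentation import chan_vese

# Load the image as a float array in [0, 1] (hypothetical file name).
image = img_as_float(io.imread("brain_slice.png", as_gray=True))

# Chan-Vese "active contour without edges": mu weights the contour-length
# regularization, lambda1/lambda2 weight the inside/outside intensity-fit
# terms of the energy that is minimized in the level-set formulation.
mask = chan_vese(image, mu=0.25, lambda1=1.0, lambda2=1.0,
                 tol=1e-3, init_level_set="checkerboard")

# `mask` is a boolean segmentation; the foreground region could then be
# passed on to feature extraction and a classifier such as Random Forest.
print("Segmented foreground fraction:", mask.mean())
```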

3. CLASSIFICATION
3.1 Random Forest
A Random Forest (RF) is among the most prevalent classifiers and is an ensemble of multiple decision trees. The construction of an RF is based on a bootstrap set and an OOB (Out-Of-Bag) set. The bootstrap set contains the instances used for building a tree and the OOB set contains test instances which are not included in the bootstrap set. The splitting criterion applied at every node is the maximization of information gain. By applying this criterion each node splits the incoming instances into two sets. In order to evaluate the information gain in this RF method, only a small number of variables (mtries) are used at every node among all existing variables (M).
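For illustration only (the feature matrix, labels and hyperparameters below are assumptions, not the study's data or settings), a Random Forest of this kind can be trained with scikit-learn, where max_features plays the role of the mtries parameter described above:

```python
# Minimal sketch: Random Forest classification on assumed tumour features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical feature matrix (one row per segmented image) and labels
# (0 = normal, 1 = glioma, 2 = meningioma) -- placeholders only.
rng = np.random.default_rng(0)
X = rng.random((300, 16))
y = rng.integers(0, 3, size=300)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# criterion="entropy" corresponds to splitting on information gain;
# max_features limits the candidate variables (mtries) tried at each node.
clf = RandomForestClassifier(n_estimators=100, criterion="entropy",
                             max_features="sqrt", oob_score=True,
                             random_state=0)
clf.fit(X_train, y_train)

print("Out-of-bag score:", clf.oob_score_)
print("Test accuracy   :", accuracy_score(y_test, clf.predict(X_test)))
```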

3.2 Radial Basis Function


A Radial Basis Function (RBF) network exploits radial basis functions as activation functions. The output of the network is a linear combination of radial basis functions of the inputs and neuron parameters. An RBF network achieves classification by measuring the input's similarity to instances from the training set. Each RBF neuron stores a "prototype", which is just one of the patterns from the training set. When we want to classify a new input, each neuron calculates the Euclidean distance between the input and its prototype. When the input more closely resembles class A prototypes than class B prototypes, it is classified as class A.
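A minimal NumPy sketch of this prototype-based scheme is given below (an illustrative assumption, not the authors' implementation): Gaussian radial basis activations between the new input and every stored prototype are summed per class, and the class with the strongest total response is predicted.

```python
# Minimal sketch of prototype-based RBF classification (illustrative only).
import numpy as np

def rbf_classify(x_new, prototypes, labels, sigma=1.0):
    """Classify x_new by Gaussian similarity to stored training prototypes."""
    # Euclidean distance from the input to every prototype.
    dists = np.linalg.norm(prototypes - x_new, axis=1)
    # Gaussian radial basis activation of each RBF neuron.
    activations = np.exp(-(dists ** 2) / (2.0 * sigma ** 2))
    # Sum the activations class-wise and pick the strongest total response.
    classes = np.unique(labels)
    scores = np.array([activations[labels == c].sum() for c in classes])
    return classes[np.argmax(scores)]

# Hypothetical 2-D prototypes (training patterns) and their class labels.
prototypes = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]])
labels = np.array([0, 0, 1, 1])
print(rbf_classify(np.array([0.85, 0.75]), prototypes, labels))  # expected: 1
```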

4. RESULTS
4.1. Performance Metrics
The performance of all the methods is assessed by using performance metrics such as accuracy, Mean Square Error (MSE) and PSNR. The formulas are given below.
4.1.1. Accuracy
Accuracy is the percentage of instances correctly classified.

Accuracy = (#TP + #TN) / (#TP + #TN + #FP + #FN)

4.1.2. Mean Square Error

The mean square error (MSE) measures the average of the square of the difference between the segmented and the original image. The MSE can be calculated by
MSE = (1/n) Σ_{i=1..n} (I_seg(i) − I_orig(i))²
4.1.3. Peak Signal-to-Noise Ratio (PSNR)
The peak signal-to-noise ratio (PSNR) is a measure of reconstruction quality. The PSNR formula is defined as follows:
PSNR = 10 log10 (maximum intensity² / MSE)
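As a brief worked example (the arrays below are invented toy data, not the experimental images or labels), the three metrics defined above can be computed directly with NumPy:

```python
# Minimal sketch: accuracy, MSE and PSNR computed on assumed toy data.
import numpy as np

def accuracy(y_true, y_pred):
    """Fraction of correctly classified instances ((TP + TN) / all cases)."""
    return np.mean(np.asarray(y_true) == np.asarray(y_pred))

def mse(seg, orig):
    """Mean squared pixel-wise difference between two images."""
    seg, orig = np.asarray(seg, float), np.asarray(orig, float)
    return np.mean((seg - orig) ** 2)

def psnr(seg, orig, max_intensity=255.0):
    """Peak signal-to-noise ratio in dB for the given peak intensity."""
    return 10.0 * np.log10(max_intensity ** 2 / mse(seg, orig))

# Hypothetical labels and a pair of 8-bit images (placeholders only).
y_true, y_pred = [0, 1, 2, 1], [0, 1, 1, 1]
rng = np.random.default_rng(0)
orig = rng.integers(0, 256, (64, 64))
seg = np.clip(orig + rng.integers(-5, 6, (64, 64)), 0, 255)

print("Accuracy :", accuracy(y_true, y_pred))
print("MSE      :", mse(seg, orig))
print("PSNR (dB):", psnr(seg, orig))
```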
4.2. Result Analysis
Table 1. Performance evaluation for type of tumour based on different algorithms with classifiers

Algorithm    Classifier   Accuracy %                  PSNR                        MSE
                          Glioma  Menin.  Normal      Glioma  Menin.  Normal      Glioma  Menin.  Normal
Geo_Desic    RBF          66.95   66.89   66.95       51.79   51.90   51.43       0.431   0.420   0.468
             RF           74.49   78.70   86.46       51.96   52.02   52.56       0.414   0.408   0.361
Chan_Vese    RBF          66.85   66.95   66.95       51.53   51.84   52.05       0.457   0.425   0.405
             RF           76.12   81.24   86.43       55.87   54.14   52.56       0.168   0.251   0.361
Region       RBF          66.85   66.95   66.95       51.51   51.86   52.05       0.460   0.423   0.406
             RF           75.91   81.60   82.92       55.87   54.14   52.56       0.168   0.251   0.361
Table 1 shows the performance for the tumour types glioma, meningioma and normal, based on the different algorithms (Geodesic, Chan-Vese and region based ACM) with the RF and RBF classifiers. From the scores in Table 1, it can be seen that the Chan-Vese algorithm performs better than the other methods. It clearly shows that, compared with the other methods, the Chan-Vese method achieves the highest accuracy and PSNR values when using the RF classifier. The Chan-Vese method also yields the lowest MSE value with the RF classifier compared with the RBF classifier.

Figure 1: Algorithm based RBF & RF classification Accuracy



CONCLUSION
In this paper, various methods for segmenting and classifying brain tumour images are analysed and their performance is measured using the performance metrics accuracy, PSNR and MSE. An important observation in this work is that the Random Forest (RF) classification method combined with the Chan-Vese segmentation method boosts the performance (85.87% accuracy) significantly. The results conclude that the Chan-Vese method outperforms the existing methods and achieves the highest accuracy and PSNR values when using the RF classification method.

REFERENCES
[1] Synthiya Judith Gnanaselvi E, & Mohamed Sathik M. (2018). “A novel approach for self
organisation based segmentation of MRI brain images”. International Journal of Pure and
Applied Mathematics, 119(12), 2481-2500. Retrieved from http://www.ijpam.eu.
[2] Shenbagarajan A, Ramalingam V, Balasubramanian C and Palanivel S, “Tumour Diagnosis
in MRI Brain Image using ACM Segmentation and ANN-LM Classification Techniques”,
Indian Journal of Science and Technology, Vol 9(1), DOI:10.17485/ijst/2016/v9i1/78766,
January 2016.
[3] Xiuming Li, Dongsheng Jiang, Yonghong Shi, and Wensheng Li, "Segmentation of MR image using local and global region based geodesic model", Biomed Eng Online. 2015; 14: 8. Published online 2015 Feb 19. doi: 10.1186/1475-925X-14-8.
[4] Vicent Caselles, Ron Kimmel, Guillermo Sapiro, Geodesic Active Contours, International Journal of Computer Vision 22(1), 61–79 (1997), ©1997 Kluwer Academic Publishers. Manufactured in The Netherlands.
[5] Payal Gupta, Er. Satinderjeet Singh, Review Paper On Brain Image Segmentation
Using Chan-Vese Algorithm And Active Contours, International Journal of
Advanced Research in Computer Engineering & Technology (IJARCET) Volume 3
Issue 11, November 2014.

P-10
INVESTIGATIONS ON THE GROWTH AND CHARACTERIZATION OF A CRYSTALLISED SUPERACID, AMMONIUM TETROXALATE DIHYDRATE

Eunice Jerushaa*, S. Shahil Kirupavathyb

aDepartment of Physics, R.M.D. Engineering College, Kavaripettai, TN.
bDepartment of Physics, Velammal Engineering College, Chennai, TN.
Corresponding author: *eunjer@gmail.com

ABSTRACT
Single crystals of ammonium tetroxalate dihydrate were grown by the slow evaporation solution growth method using deionised water as solvent. The characteristic properties were studied. From the single crystal XRD analysis the grown crystals' structure was confirmed. The presence of the functional groups was analysed from the recorded FT-IR and FT-Raman spectra. The TGA and DTA studies were carried out to explore information about the thermal behaviour. Optical studies revealed the crystal's transparency.

1. INTRODUCTION
Oxalic acid and oxalates are known to exist in different crystalline forms and are isomorphous, and interest has been shown in the study of tetroxalates. These crystals are also highly elastically anisotropic in nature. Most of the anisotropic physical, optical and dielectric properties of single crystals deteriorate or vanish when the samples are not single crystals or have defects. In the present paper, we report the results of the structure redetermination and the various characterisation studies to which the grown crystals have been subjected.

2. EXPERIMENTAL - CRYSTAL GROWTH


The starting reagents were L-asparagine monohydrate (L-Asp) with molecular formula C4H10N2O4 and commercial oxalic acid H2C2O4.2H2O. The precursors were used after recrystallisation. Colourless crystals of ATOXAL were obtained by slow evaporation at room temperature from the aqueous solution of L-Asp:H2C2O4 in the 1:2 molar ratio. The crystals, with perfect shape and free from macro defects, were formed by spontaneous nucleation in the saturated solution at room temperature.

3. CHARACTERIZATION STUDIES
Confirmation of the chemical composition of the synthesized compound was carried out using a Perkin-Elmer 2400 Series CHNS/O analyser. The single crystal X-ray diffraction analysis of the ATOXAL crystal was carried out using an ENRAF NONIUS CAD4 X-ray diffractometer. The FT-IR spectrum was recorded by the KBr pellet technique using a BRUKER 66V FT-IR spectrometer. A Perkin Elmer GX2000 FT-Raman spectrometer in the wavenumber range 100–3500 cm-1 was employed to carry out the vibrational studies of the grown crystal. The thermal behaviour was studied by thermogravimetric analysis using a NETZSCH STA 409 PC/PG thermal analyser in nitrogen atmosphere, and a NETZSCH DSC 200F3 was used to study the phase transition that occurred in the crystal. The UV–vis spectrum was recorded in the region of 200–800 nm using a VARIAN CARY 5E spectrometer.

4. RESULTS AND DISCUSSION


4.1. CHN analysis
The CHN analysis carried out confirms quantitatively the elemental composition of the synthesized salt. The empirical formula for the title compound is C4H11NO10. The experimental and calculated values of C, H and N agree with each other, confirming the formation of ATOXAL.

4.2 Single Crystal XRD


From the single crystal XRD analysis it is confirmed that the grown ATOXAL
crystallizes in the triclinic system with space group P1̅. The obtained crystallographic data are in
good agreement with the literature.

4.3 Vibrational spectral analysis


The FT-IR and FT-Raman spectra of ATOXAL were recorded in the range 400–4000 cm-1. The highest-frequency absorption band in the IR spectrum, at 3426 cm-1, is due to the stretching vibrations of the water molecule; it disappears on dehydration. The position of the band at 1682 cm-1 in the FT-IR spectrum and the bands at 1689 cm-1 and 1736 cm-1 in the FT-Raman
spectrum are characteristic of the ν(C=O) vibration of the carboxyl group in the structure of NH4.C2HO4.C2H2O4.2H2O. The ammonium ion has its group frequency at 1404 cm-1 and 1489 cm-1 in the IR and Raman spectra, respectively.

4.4 Thermal Studies
The thermal behaviour of ATOXAL was recorded in the temperature range of 25–500 °C as TGA and DTA curves. A major weight loss at 106.4 °C in the TGA trace is assigned to dehydration of ATOXAL. The total degradation occurred in four stages and extended up to 500 °C. The DTA results indicate the absence of any endotherm/exotherm up to 106.4 °C, indicating no solvent inclusion in the crystal lattice, which is also supported by the TGA results. A sharp endothermic peak was observed at 115.5 °C, which corresponds to the melting point of the title compound. Hence, the material can be exploited for any suitable application up to its melting point (115.5 °C).

4.5 Optical Studies


UV-Vis transmission studies show that the crystal is transparent, with the maximum transmittance being 13-14% in the range 300-800 nm, which indicates that this crystal may be employed for applications in the entire visible and IR region. The band gap was determined to be 4.03 eV from the Tauc plot.
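For illustration only (the wavelength/absorbance values and the crystal thickness below are assumed, not the reported spectrum), a direct band gap of this kind can be estimated from UV-Vis data by plotting (αhν)² against photon energy and extrapolating the linear region to zero:

```python
# Minimal sketch: Tauc-plot band gap estimate from assumed UV-Vis data.
import numpy as np

# Hypothetical wavelengths (nm) and absorbances near the absorption edge.
wavelength_nm = np.array([290.0, 295.0, 300.0, 305.0, 310.0, 315.0])
absorbance = np.array([1.20, 1.05, 0.88, 0.66, 0.42, 0.20])
thickness_cm = 0.1  # assumed optical path length through the crystal

hv = 1239.84 / wavelength_nm               # photon energy in eV
alpha = 2.303 * absorbance / thickness_cm  # absorption coefficient (cm^-1)
tauc_y = (alpha * hv) ** 2                 # direct allowed transition: (alpha*h*nu)^2

# Fit the (approximately linear) edge region and extrapolate to (alpha*h*nu)^2 = 0.
slope, intercept = np.polyfit(hv, tauc_y, 1)
band_gap_eV = -intercept / slope
print(f"Estimated band gap: {band_gap_eV:.2f} eV")
```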

5. CONCLUSION
Bulk single crystals of ATOXAL have been successfully grown by the slow evaporation technique from aqueous solution. The FT-IR and FT-Raman traces reveal the presence of the ammonium group and the functional groups. It is evident from the thermal studies that the crystal is thermally stable up to 115.5 °C. The crystal's transparency and band gap were determined from the UV-vis studies.

P-11
INVESTIGATION OF TYPE-I AND TYPE-II PHASE MATCHING ELEMENTS USING
ORGANIC 2AP4N SINGLE CRYSTALS GROWN BY POINT SEED ROTATION AND
NOVEL RSR TECHNIQUE

P. Karuppasamy1*, T. Kamalesh1, Muthu Senthil Pandian1, P. Ramasamy1, Sunil Verma2,3

1SSN Research Centre, SSN College of Engineering, Chennai, TN.
2Laser Materials Development and Devices Division, RRCAT, Indore, MP.
3Homi Bhabha National Institute (HBNI), Anushakti Nagar, Mumbai, Maharashtra.
*Corresponding author: karuppasamyp75@gmail.com, karuppasamyp@ssn.edu.in

ABSTRACT
Nonlinear optical (NLO) 2-aminopyridinium 4-nitrophenolate 4-nitrophenol (2AP4N) organic
single crystals were successfully grown by point seed rotation technique and novel Rotational
Sankaranarayanan-Ramasamy (RSR) method [1]. The unit cell parameters of the 2AP4N single
crystals were confirmed by single crystal X-ray diffraction (SXRD) analysis. The various
crystalline planes and direction of RSR grown crystal were confirmed by powder XRD (PXRD)
measurement. The optical quality of the 2AP4N crystal and optical band gap were analysed by
UV-Vis NIR spectrum analysis. The Kurtz–Perry powder technique was used to find the second
harmonic generation (SHG) efficiency of the 2AP4N crystal using Nd:YAG laser at the
wavelength of 1064 nm. The SHG efficiency was found to be 4.5 times that of KDP. Refractive
index measurement was carried out at different wavelengths using the prism coupling method. The phase matching angle of 2AP4N was found by fitting the Sellmeier equation. Based on the phase matching angle, the grown 2AP4N crystal was cut and the type-I and type-II SHG elements were fabricated.

Figure. 2AP4N crystal grown by (a) Point seed rotation technique and (b) RSR method

ACKNOWLEDGEMENT
The authors gratefully acknowledge DAE-BRNS, Government of India for the financial support
(Ref. no. 34/14/06/2016-BRNS/34032).

REFERENCES
[1] P. Karuppasamy, T. Kamalesh, Muthu Senthil Pandian, P. Ramasamy, Sunil Verma, J. Cryst.
Growth, 518 (2019) 59-72.

P-12
PERFORMANCE OF ADENINE DOPED POLYVINYLIDENE FLUORIDE BASED SOLID POLYMER ELECTROLYTES FOR DYE-SENSITIZED SOLAR CELL APPLICATIONS

S. Kannadhasan, Muthu Senthil Pandian*, P. Ramasamy


SSN Research Centre, SSN College of Engineering, Chennai, TN, India
*Corresponding author: senthilpandianm@ssn.edu.in

ABSTRACT
Adenine-doped (0%, 10%, 20%, 30%, 40% and 50%) PVDF/KI/I2 based solid state polymer electrolytes of different weight ratios were prepared by the casting technique using N,N-dimethyl formamide (DMF) as solvent. The synthesized solid state polymer electrolytes were characterized by various techniques such as powder X-ray diffraction (PXRD), AC-impedance analysis and scanning electron microscopy analysis. The PXRD studies confirmed the crystalline and amorphous phases of the solid state polymer electrolytes. From the AC-impedance analysis, the ionic conductivity of the solid state polymer electrolytes was calculated. The adenine-doped (0%, 10%, 20%, 30%, 40% and 50%) PVDF/KI/I2 based solid state polymer electrolytes
showed ionic conductivities of 3.86×10-6 Scm-1, 1.05×10-5 Scm-1, 1.58×10-5 Scm-1, 2.41×10-5 Scm-1, 4.44×10-5 Scm-1 and 1.84×10-5 Scm-1, respectively. The surface morphology of the adenine-doped (0%, 10%, 20%, 30%, 40% and 50%) PVDF/KI/I2 based solid state polymer electrolytes was confirmed. DSSCs were fabricated using the adenine-doped (0%, 10%, 20%, 30%, 40% and 50%) PVDF/KI/I2 based solid state polymer electrolytes and achieved power conversion efficiencies of 1.3%, 1.6%, 1.8%, 2.4%, 3.1% and 2.0%, respectively, under an illumination of 100 mW cm−2.

P-13
STUDIES ON LINEARITY AND ASSAY USING UV-VISIBLE SPECTROSCOPY FOR THE DRUGS PANTOPRAZOLE AND PARACETAMOL BEFORE AND AFTER EXPIRY PERIOD

1Manimegalai E, 2,*Bright A
1,2Department of Physics, Bharathi Women's College (A), Chennai, TN
*Corresponding author: bright_04wcc@yahoo.co.in

ABSTRACT
Drugs must comply with the accepted specifications to be effective when used. Studying drug stability is important in order to understand the behaviour of the drug under various conditions. The stability and quality of drug products can only be assured by continual testing and systematic evaluation. The aim of this study is to emphasize the quantitative deterioration a drug undergoes after the stipulated expiry period. The UV-Visible spectroscopic technique is employed to estimate two drugs, namely Pantoprazole and Paracetamol, in tablet dosage form in their current formulation and 10-12 months after their expiry period. Pantoprazole is a proton pump inhibitor that is used to treat erosive esophagitis and Paracetamol is a drug used to treat pain and to reduce fever. In order to carry out the UV-Visible spectroscopy studies, the stock solution of the drug Pantoprazole was prepared by diluting 25 mg of the pure sample with 0.1 N NaOH, and water was used for all dilutions in the case of Paracetamol. The stock solutions thus prepared were further diluted, filtered and sonicated to obtain solutions of the required drug concentration. The maximum UV absorbances of the current and expired samples were noted at their corresponding λmax with a double beam UV-spectrophotometer. For both drugs the absorbance values for various concentrations of the standard drug solutions were noted and the linearity range was found. The drug content of the samples was calculated from their respective regression curves obtained from standard drug solutions by using the measured absorbance values. The calibration curves obtained using UV-Visible spectroscopy were linear in the range 15-25 µg ml-1 for Pantoprazole and 7.5-17.5 µg ml-1 for Paracetamol. The drug content of Pantoprazole reduces to 93.32% after the expiry period from the actual value of 99.68% present in the current formulation. In the case of Paracetamol the drug content reduces to 97.53% after the expiry period from the actual value of 100.16%. Thus the drugs tend to deteriorate and lose their efficiency after the stipulated shelf life.
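As an illustrative sketch of this calibration-curve assay (the concentrations, absorbances and nominal strength below are invented values, not the measured data), the linear regression and percentage drug content can be computed as follows:

```python
# Minimal sketch: UV-Vis calibration curve and % drug content (assumed data).
import numpy as np

# Hypothetical standard concentrations (ug/ml) and their absorbances.
conc_std = np.array([15.0, 17.5, 20.0, 22.5, 25.0])
abs_std = np.array([0.301, 0.352, 0.401, 0.452, 0.499])

# Least-squares fit of the calibration line: A = slope * C + intercept.
slope, intercept = np.polyfit(conc_std, abs_std, 1)
r = np.corrcoef(conc_std, abs_std)[0, 1]
print(f"Calibration: A = {slope:.4f}*C + {intercept:.4f}  (r = {r:.4f})")

# Absorbance measured for a sample solution of nominal 20 ug/ml strength
# (both values are assumptions for the example).
abs_sample = 0.374
conc_found = (abs_sample - intercept) / slope
label_claim = 20.0
print(f"Drug content: {100.0 * conc_found / label_claim:.2f}% of label claim")
```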

P-14
SPRAY COATED NiOx THIN FILM AS HOLE TRANSPORT MATERIAL FOR PEROVSKITE SOLAR CELL APPLICATIONS

R. Isaac Daniel, N. Santhosh, Muthu Senthil Pandian*, P. Ramasamy


SSN Research Centre, SSN College of Engineering, Kalavakkam, Chennai, TN
*Corresponding author:senthilpandianm@ssn.edu.in

ABSTRACT
Lead halide perovskite solar cells have recently attracted tremendous attention due to their outstanding photovoltaic performance. The inorganic p-type semiconductor NiOx is largely used as a hole transport layer for the realization of stable and hysteresis-free solar cells as a result of its good electronic properties, facile fabrication, and excellent chemical endurance. Here, we develop a simple spray deposition of the NiOx hole transport layer. The prepared NiOx thin film is characterized by XRD (X-ray diffraction), UV-vis-NIR spectrophotometry, photoluminescence spectroscopy and thickness profilometry. From the XRD analysis, the prepared NiOx thin film is well matched with JCPDS card No. 89-3080. The transmittance spectra of the NiOx thin film show ~62% transmittance in the visible region. The photoluminescence spectra show that effective charge carrier collection occurs between the NiOx/perovskite films. The thickness of the NiOx thin films was measured by a thickness profilometer.

P-15
INFLUENCE OF TiO2 NANOFILLER ON THE THERMAL AND STRUCTURAL PROPERTIES OF PEO BASED POLYMER ELECTROLYTE

*A. Christina Nancya, Agnes Robeena Quincyb

aDepartment of Physics, Women's Christian College, Chennai, TN
bPost Graduate Department of Physics, Women's Christian College, Chennai, TN
*Corresponding author: a.christinanancy@gmail.com

ABSTRACT
In the recent past, zinc-ion based polymer electrolytes have been receiving considerable attention in place of conventional lithium-ion storage systems due to various advantages such as the low toxicity of zinc, low cost, ready availability, high stability and fairly high specific and volumetric density. Hence, this paper aims to discuss the thermal and structural properties of one such zinc-ion conducting composite polymer electrolyte using differential scanning calorimetry (DSC), thermogravimetric analysis (TGA), X-ray diffraction (XRD) and Fourier transform infrared spectroscopy (FTIR) analyses. The influence of doping TiO2 nanofiller (5 wt%) into the polymer electrolyte 85 wt% PEO - 15 wt% Zn(CF3SO3)2, prepared by the solution casting technique, has been reported. The melting temperature (Tm) of PEO decreased with the addition of both the ZnTf salt and the nano TiO2 filler due to the increased amorphous state of the polymer electrolyte, which results in enhanced ionic conductivity. The XRD analysis revealed the presence of two prominent PEO peaks at 19° and 23°. From the TGA results, it is observed that the addition of salt and nanofiller reduced the thermal stability of pure PEO from 414 °C to 315 °C; however, the
stability window is appreciable for commercial application of the polymer electrolyte. FTIR analysis confirmed the structural changes occurring due to the coordination of ether oxygen and end groups and also the dispersion of filler nanoparticles into the triflate salt.
Key words: PEO polymer electrolyte, Zinc triflate, TiO2 nanofiller

P-16
HYDROTHERMAL SYNTHESIS AND CHARACTERIZATION OF HIGHLY CRYSTALLINE ZnAl2O4 NANOPOWDER

G. Thirumala Rao1*, B. Sailaja2, R.V.S.S.N. Ravikumar3

1Physics Division, GMR Institute of Technology, Rajam, AP.
2Department of Physics, Govt. Polytechnic, Addanki, AP.
3Department of Physics, Acharya Nagarjuna University, Nagarjuna Nagar, AP.
*Corresponding author: thirumalaphy@gmail.com

ABSTRACT
Nanophosphors have potential applications in many fields such as solid state lighting, medicine, security, display devices, remote thermometry and thermoluminescence. ZnAl2O4 nanophosphor for lighting applications has proved to be an emerging research area. The method of preparation of a nanophosphor plays a vital role in controlling its properties, including particle size, shape and morphology. Nanophosphors are preferred over micron size phosphors in a number of applications not only due to their particle size but also their enhanced optical properties. ZnAl2O4, also named gahnite, is a spinel type oxide which has high chemical and thermal stability, high mechanical resistance and low surface acidity, and is suitable for a wide range of applications such as optical coatings and high temperature ceramic materials. In the present investigation, ZnAl2O4 nanophosphor has been synthesized by the hydrothermal method. The obtained nanophosphor was annealed at 1000 ℃ for 5 h for better crystalline properties. The as-prepared nanopowder was characterized by X-ray diffraction (XRD), scanning electron microscopy (SEM), energy dispersive X-ray spectroscopy (EDS) and photoluminescence (PL). The XRD pattern shows the face-centered cubic spinel structure of ZnAl2O4. SEM images show non-uniformly distributed spherical-like structures. The EDS pattern exhibited only the target elements Zn, Al and O, which indicates the purity of the prepared sample. The photoluminescence spectrum exhibited emission bands in the visible region.
Keywords: Hydrothermal synthesis, ZnAl2O4 nanophosphor, X-ray diffraction and
Photoluminescence

P-17
GROWTH OF BULK SIZE ORGANIC TRIPHENYLAMINE (TPA) SINGLE CRYSTAL
BY BRIDGMAN – STOCKBARGER METHOD: A POTENTIAL CANDIDATE FOR
NONLINEAR OPTICAL (NLO) APPLICATION

K. Ramachandran, A. Raja, Muthu Senthil Pandian*, P. Ramasamy


SSN Research Centre, SSN Institutions, Chennai, TN
*Corresponding author: ramphy18@gmail.com

ABSTRACT
Optically transparent triphenylamine (TPA) single crystal has been grown by Bridgman-
Stockbarger method with the optimized temperature gradient. The grown organic TPA single
crystal belongs to monoclinic crystal system with non-centrosymmetric space group of Cc,
which is obtained from the single crystal X-ray diffraction (SXRD) analysis. The chemical
bonding structure of TPA crystal has been obtained from the Fourier transform infrared (FTIR)
spectral study. The optical transmittance spectrum of the grown TPA crystal was obtained from
the UV-Visible NIR spectrum analysis and the high optical transmittance (82 %) is observed in
the UV-visible to near infrared (NIR) region. The optical cut-off wavelength is observed at 368
nm. The optical band gap value is found to be 3.35 eV, which is obtained from the optical data.
Thermal stability and melting point of the TPA single crystal were identified by using
thermogravimetric and differential thermal analysis (TG-DTA). The luminescence property of the TPA single crystal was studied using photoluminescence (PL) spectral analysis. The second harmonic generation (SHG) efficiency of the title crystal was measured by the Kurtz-Perry powder technique.

P-18
SYNTHESIS AND CHARACTERIZATION OF DISC-SHAPED THIOPHENE BASED
ZN-PORPHYRIN FOR ORGANIC SOLAR CELLS APPLICATION

M. Muthu, P. Pounraj, Muthu Senthil Pandian*, P. Ramasamy*


SSN Research Centre, SSN College of Engineering, Kalavakkam, Chennai, TN.
* Corresponding author: senthilpandianm@ssn.edu.in, ramasamyp@ssn.edu.in

ABSTRACT
Disc-shaped thiophene donor-based Zn-porphyrin complex is designed and synthesized by
Alder’s two step method for organic solar cells application. The theoretical analysis is used to
study the HOMO-LUMO and band gap energies. The synthesized porphyrin (THPY) and Zn-
porphyrin complex (THPY-Zn) were characterized by Ultra Violet-Visible (UV-Visible),
photoluminescence (PL), Fourier Transform Infra-Red (FTIR) spectroscopic methods. The
experimental Highest Occupied Molecular Orbital (HOMO) and Lowest Unoccupied Molecular
Orbital (LUMO) energies are analysed by Cyclic Voltammetry (CV) and compared with the theoretical HOMO and LUMO energies. THPY-Zn has a red-shifted absorption spectrum compared with the corresponding THPY compound because of metallation. The synthesized THPY and THPY-Zn complexes have strong absorption in the visible region. The electronic and optical properties of the
synthesized disc shaped porphyrin and Zn-porphyrin complex are investigated for organic solar
cells application.

P-19
3 D STRUCTURE DETERMINATION, HIRSHFELD SURFACE ANALYSIS, ENERGY
FRAMEWORK AND CHARACTERIZATION STUDIES OF (2E,4E)-1-(4-
CHLOROPHENYL)-5-(4-METHOXYPHENYL) PENTA-2,4-DIEN-1-ONE

K. Biruntha*a, G. Ushab

aDepartment of Physics, Bharathi Women's College, Chennai, TN
bPG and Research Department of Physics, Queen Mary's College, Chennai, TN
*Corresponding author: birunthabalachandar@gmail.com

ABSTRACT
Chalcones are open chain flavonoids having a variety of biological activities, including
antioxidant, anti-inflammation, antimicrobial, antiprotozoal and antiulcer properties. More
importantly, chalcones have also shown anticancer activity as inhibitors of cancer cell
multiplication. The Claisen–Schmidt condensation reaction between substituted acetophenones
and aryl aldehydes under basic conditions has been broadly used to synthesize chalcone
derivatives. As part of our studies in this area, the title compound (2E,4E)-1-(4-chlorophenyl)-5-(4-methoxyphenyl)penta-2,4-dien-1-one with chemical formula C18H15ClO2 was synthesized, and a new organic crystal was grown by the slow evaporation technique using chloroform as the solvent; its crystal structure was determined and characterized by the X-ray diffraction method. XRD reveals that the crystal has a monoclinic system with the P21 space group. The unit cell parameters a, b, c and β are 9.2456(9) Å, 4.0178(4) Å, 19.905(2) Å and 92.567(3)º, respectively. The structure was solved by the direct method using SHELXT-2014/7 and refined by the full matrix least squares procedure on F2 using the SHELXL-2014/7 software program. Crystal Explorer (17.5) was used to perform Hirshfeld surface computational analysis and quantify the intermolecular interactions in terms of surface contributions; the 2D fingerprint plots provide graphical representations, which indicate that the most important contributions to the crystal packing are from H…H (40%), H…C/C…H (23.8%) and H…O/O…H & H…Cl (12.9%) interactions. The intermolecular interaction energies were calculated for the title compound and their distribution over the crystal structure was visualized graphically by constructing energy frameworks; the dispersion component is found to be significant. In the crystal packing, the structure is stabilized by C-H…O hydrogen bonds in addition to π-π interactions. The physical and chemical behaviour of the synthesized compound was studied with the help of NMR, UV-VIS-NIR, FT-IR and PL spectra and TG/DTA thermal curves.

P-20
CARBON BASED HOLE-CONDUCTOR-FREE PEROVSKITE SOLAR CELLS BASED ON 2D-3D ORGANIC INORGANIC HALIDE ABSORBER

K. R. Acchutharaman, N. Santhosh, Muthu Senthil Pandian*, P. Ramasamy


SSN Research Centre, SSN College of Engineering, Chennai, TN.
*Corresponding author: santhosh.10409@gmail.com

ABSTRACT
Carbon based hole-conductor-free mesoscopic perovskite solar cells based on the TiO2/ZrO2/Carbon architecture are promising future technologies due to their simple fabrication process and long-term stability. However, the commonly used pristine 3D MAPbI3 perovskite absorber is unstable in ambient-air conditions. Another branch of hybrid 2D perovskite materials is applicable to solar cells. We attempt to use the 2D-3D hybrid EA2MA6Pb7I22 perovskite in a triple layer mesoscopic architecture. We find that the EA2MA6Pb7I22 perovskite solar cells exhibit an excellent shelf-life of 20 days in ambient conditions. The power conversion efficiency (PCE) of the EA2MA6Pb7I22 perovskite based solar cell was found to be 2.4%. The device performance remains at 75% over 20 days. The lifetime of electron injection is found to be 6.3 ms, as obtained from the Bode plot. The incident-photon to current conversion efficiency (IPCE) spectrum shows that an external quantum efficiency (EQE) value of 44% is reached in the visible region. From the electrochemical impedance spectroscopy analysis, a lower charge transfer resistance was found at the interface of the perovskite/carbon counter electrode. This triple mesoscopic architecture of the 2D-3D hybrid EA2MA6Pb7I22 perovskite will pave the way towards commercialization with a cheaper process.
Figure 1. I-V curves (current density vs. voltage) of the 2D-3D hybrid EA2MA6Pb7I22 perovskite solar cell: as-prepared and after the 12th, 14th and 20th day.

P-21
COMPUTATIONAL MODELING ON MC-SILICON CRYSTAL GROWTH PROCESS

M. Srinivasan*, P. Ramasamy
SSN Research centre, SSN College of engineering,Chennai, TN.
*Corresponding author: srinisastri@gmail.com

ABSTRACT
Numerical simulation is a comprehensive tool in modern process development which is widely used for the improvement of crystal growth processes. Multi-crystalline silicon is an important material
with the advantages of low production cost and moderate conversion efficiency of PV solar cells [1]. The control of grains as well as grain boundaries is predominantly important to the crystal quality and thus the solar cell efficiency. Flow in the molten phase is crucial for the transport of heat and mass by convection in bulk crystal growth systems. Understanding the transport of heat, mass and momentum is especially essential in bulk crystal growth processes. To grow high quality bulk crystals, i.e. crystals with acceptable defect density and good dopant uniformity, an understanding of the transport processes coupled with the melt and gas chemistry is crucial [2]. Direct experimental investigation and in-situ observation of species transport are quite difficult due to the high-temperature environment. Therefore, crystal growth modelling attracts much attention in developing the technology and in finding an effective way to control mass transport during crystal growth. The work is broadly categorized into the following:
• To study the melt flow properties of small-scale molten silicon based on dimensionless numbers such as the Marangoni, Peclet and Reynolds numbers.
• To investigate the melt flow properties of large-scale molten silicon based on dimensionless numbers such as the Rayleigh, Reynolds and Prandtl numbers.
• To study non-metallic impurities such as carbon, oxygen and nitrogen and their inclusions, based on the Schmidt number, during directional solidification and their effect on solar cell efficiencies.
• To analyse the generation of stress and dislocation densities in grown mc-silicon at various growth stages in an industrial scale DS system.
• To introduce the bottom groove DS furnace and investigate some of the thermo-mechanical properties of the grown mc-silicon ingot.
Also, many modifications are made on the DS system for developing high performance mc-silicon ingots, such as heater modification, varying insulation movement, crucible rotation, crucible vibration and magnetic field application.

REFERENCES
[1] B. K. Sahu, Renewable and Sustainable Energy Reviews,59, 927-939 (2015).
[2] J. Friedrich et al. Journal of Crystal Growth,447, 18–26 (2016).

P-22
Zn+Cu CO-DOPED TiO2 SEMICONDUCTOR NANOSTRUCTURE - AN EFFECTIVE CATALYST FOR METHYLENE BLUE DYE

B.Manikandan1, K. R. Murali2, and Rita John1*.


1
Department of Theoretical Physics, University of Madras, Chennai-25, India
2
Electrochemical Material Science Division, CSIR- CECRI, Karaikudi, India
*Corresponding author: ritajohn.r@gmail.com

ABSTRACT
The present study describes the structural, optical and photocatalytic activity of a Zn+Cu co-doped TiO2 nanostructure. The synthesized undoped and co-doped materials are characterized by X-ray diffraction, FTIR, Raman and UV-Vis techniques. In addition to this, the photocatalytic activity of the prepared nanostructure is analysed using methylene blue dye. The XRD study confirms a mixed phase of anatase and brookite for Zn+Cu co-doped TiO2. The crystallite size was estimated using the
Scherrer formula and the calculated size is found to increase for co-doped TiO2. The obtained crystallite size clearly shows that the doping elements cause dislocation density and strain in the material. The observed peaks of the FTIR spectra authenticate the presence of TiO2 metal oxide in the synthesized material. The presence of Ti-O stretching modes is further confirmed from the Raman spectrum. The absorption edge and optical band gaps are studied using UV-Visible analysis. The obtained absorption edges are at 318 and 349 nm for undoped and Zn+Cu co-doped TiO2, respectively. The Tauc relation is used to calculate the band gap and the calculated values are 3.02 and 2.89 eV for undoped and co-doped TiO2, respectively. The prepared Zn+Cu co-doped TiO2 nanostructure shows better photocatalytic activity for methylene blue dye.
Keywords: Sol-Gel; XRD; co-dopant; UV-Vis;FTIR; Raman; Photocatalysis
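For illustration (the peak position, width and instrument wavelength below are assumed values, not the reported diffraction data), the Scherrer crystallite-size estimate mentioned in the abstract, D = Kλ/(β cos θ), can be evaluated as:

```python
# Minimal sketch: Scherrer crystallite-size estimate from an assumed XRD peak.
import numpy as np

K = 0.9                  # shape factor (typical value)
wavelength_nm = 0.15406  # Cu K-alpha wavelength, assumed X-ray source
two_theta_deg = 25.3     # hypothetical anatase (101) peak position
fwhm_deg = 0.45          # hypothetical full width at half maximum

theta = np.radians(two_theta_deg / 2.0)
beta = np.radians(fwhm_deg)  # the FWHM must be expressed in radians

crystallite_size_nm = K * wavelength_nm / (beta * np.cos(theta))
print(f"Scherrer crystallite size ~ {crystallite_size_nm:.1f} nm")
```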

P-23
GROWTH AND CHARACTERIZATION OF 4-AMINOPYRIDINIUM 4-
NITROPHENOLATE 4-NITROPHENOL (4AP4NP) SINGLE CRYSTAL FOR
NONLINEAR OPTICAL (NLO) APPLICATIONS

T. Kamalesh1*, P. Karuppasamy1, Muthu Senthil Pandian1, P. Ramasamy1, Sunil Verma2,3

1SSN Research Centre, SSN College of Engineering, Kalavakkam, Tamil Nadu.
2Laser Materials Development and Devices Division, Raja Ramanna Centre for Advanced Technology (RRCAT), Indore-452013, Madhya Pradesh, India
3Homi Bhabha National Institute, Anushakti Nagar, Mumbai-400094, Maharashtra
*Corresponding author: kamaleshkamal918@gmail.com, senthilpandianm@ssn.edu.in

ABSTRACT
Organic nonlinear optical (NLO) single crystals of 4-aminopyridinium 4-nitrophenolate 4-
nitrophenol (4AP4NP) were developed by the slow evaporation solution growth technique
(SEST) at room temperature. The unit cell parameters of 4AP4NP single crystals were confirmed
by single crystal XRD measurement. The crystallinity and phase purity were analyzed by the
Powder X-ray diffraction (PXRD). Optical quality and optical band gap energy were analyzed by
UV-Vis-NIR spectral analysis. Thermal and mechanical stability of the grown crystal were
analyzed using TG/DTA. The second harmonic generation (SHG) of the grown crystal was
analyzed by Kurtz-Perry technique.

Figure. 1 Grown 4AP4NP single crystals

ACKNOWLEDGEMENTS
This work was supported by the BRNS project (Ref. 34/14/06/2016-BRNS/34032),
Government of India
P-24
FABRICATION OF HOLE-TRANSPORT-FREE PEROVSKITE SOLAR CELLS (PSC)
USING 5-AMMONIUM VALERIC ACID IODIDE (5-AVAI) AS ADDITIVE AND
CARBON AS COUNTER ELECTRODE

N. Santhosh*, Muthu Senthil Pandian, P. Ramasamy


SSN Research Centre, SSN College of Engineering, Chennai-603110, TN
*Corresponding author: santhosh.10409@gmail.com

ABSTRACT
Perovskite solar cell (PSC) was fabricated in ambient atmospheric conditions with carbon as
counter electrode (CE). 5-ammonium valeric acid iodide (5-AVAI) cation was synthesized and
used as an additive within the perovskite precursor. The light absorber, perovskite precursor was
infiltrated on to the TiO2/ZrO2/Carbon layer and annealed at 60 °C for 30 min. The perovskite
layer was characterized by Ultraviolet-visible spectroscopy, Photoluminescence (PL)
spectroscopy, and optical microscope analysis. The carbon perovskite solar cell was
characterized by Photocurrent-Voltage (J-V) measurement and Incident Photon to Current
Conversion Efficiency (IPCE) measurement. The device power conversion efficiency of the
perovskite solar cell was found to be 6.6% with an active area of 0.25 cm2. The lifetime of
electron injection is found to be 2.5 ms, which is noticed from the Bode plot. From the electrochemical impedance spectrum, a lower charge transport resistance (Rct) is observed at the interface of the perovskite/carbon CE. The shelf-life of the device was investigated under ambient conditions and its performance remains at over ~90% even after 75 days. This triple layer device architecture shows a promising photovoltaic technology towards commercialization.
Figure 1. J-V curve (current density vs. voltage) of the carbon based perovskite solar cell (PSC): fresh cell and after the 75th day.

ACKNOWLEDGMENTS
The authors are grateful to DST-SERI (DST/TMD/SERI/S76(G)), Government of India,
for the financial support.

P-25
FIRST PRINCIPLES CALCULATIONS ON GOLD (AU) AND PLATINUM (PT) TO
INVESTIGATE TOPOLOGICAL SEMIMETAL PHASES

Rita John1 and D. Vishali2,*

1,2Department of Theoretical Physics, University of Madras, Chennai, TN, India.
*Corresponding author: vishalideenadayalan@gmail.com

ABSTRACT
Topological semimetals are the new class of quantum materials which exhibit exotic
properties like quantum anomalous Hall effect, Fermi arc surface states, negative
magnetoresistance due to the chiral anomaly. In topological semimetals, non-trivial features arise
in bulk states, different from those of topological insulators and superconductors. Dirac semimetals, Weyl semimetals, and nodal-line semimetals are the three major classifications of topological semimetals. Dirac semimetals are characterized by four-fold degenerate Dirac points near the Fermi energy, whereas a Weyl semimetal possesses two-fold degenerate Dirac points (Weyl nodes or Weyl points). These degenerate energy bands are created by band crossings near the Fermi energy. Band crossings can occur in two ways: accidental band crossings and symmetry-enforced band crossings. However, additional crystal symmetry requirements are needed to protect these band crossings. In the present work, we study the effect of crystal symmetry to obtain the topological semimetal phase in gold and platinum using Quantum ESPRESSO. Gold and platinum possess the Fm3̅m space group with three mirror planes and three C4 rotation axes. A topological Weyl semimetal phase arises when either inversion symmetry or time reversal symmetry is broken, and a Dirac semimetal arises if both symmetries are present. In the case of the Weyl semimetal, time reversal symmetry is broken to obtain Weyl nodes. In the absence of spin orbit coupling, Au and Pt feature nodal rings above the Fermi energy. The inclusion of spin orbit coupling (SOC) gives rise to Dirac points with the disappearance of the nodal rings. The resultant Dirac points can be brought near the Fermi energy by doping. Our results suggest the possibility of a topological semimetal phase in gold and platinum with SOC, protected by the crystal symmetry.
REFERENCES
[1] Zihao Gao, Meng Hua, Haijun Zhang, Xiao Zhang, Physical Review B 93, 205109 (2016).
[2] A.A.Burkov, Nature Materials, 15, 1145-1148 (2015).
[3] Claudia Felser et al., Nature Communications, DOI: 10.1038/ncomms10167(2015).

P-26
IMPACT OF SPIN ORBIT COUPLING ON THE ELECTRONIC PROPERTIES OF BINARY COMPOUNDS HgTe AND CdTe USING GGA AND TB-mBJ

*R. Anubama and Rita John


Department of Theoretical physics, University of Madras, Chennai, TN
*Corresponding author:anuraj1954@gmail.com

ABSTRACT
The electronic properties of binary compounds HgTe and CdTe are investigated using Full
Potential Linearized Augmented Plane Wave (FP-LAPW) within Density Functional Theory
(DFT). The exchange and correlation effects are treated using Generalized Gradient
Approximation (GGA) and Tran–Blaha modified Becke–Johnson (TB-mBJ) potentials. Spin-
Orbit Coupling (SOC) is a relativistic effect which links the spin and angular momentum of an
electron. It plays a significant role in correlated materials, such as those containing atoms with a large number of protons. Due to the presence of heavy elements in these binary compounds, it is necessary to include the relativistic effects. When SOC is not included, HgTe shows a topologically non-trivial state while CdTe is a normal semiconductor using GGA. The inclusion of SOC leads to an inverted band order between the valence and conduction bands in HgTe. This is called band inversion (i.e., the s-like Γ6 band lies below the p-like Γ8 band). On the other hand, CdTe has a normal band ordering like the common semiconductor GaAs. The band gaps of the electronic structures of these compounds are improved by using TB-mBJ. HgTe preserves the topologically non-trivial state using TB-mBJ in both the non-SOC and SOC cases. There is no band inversion in CdTe, so there is no topological phase transition. The present work is compared with other reported results and found to be in good agreement [1,2].
Key words: Density Functional Theory, binary compounds, spin orbit interaction, topological
nontrivial phase.
REFERENCES
[1] Wanxiang Feng et al., Half-Heusler topological insulators: A first-principles study with the Tran-Blaha modified Becke-Johnson density functional, Physical Review B 82, 235121, 2010.
[2] Binghai Yan and Anne de Visser, Half-Heusler topological insulators, Materials Research
Society, 39, 2014.

P-27
TITANIUM BASED INTERMETALLIC COMPOUNDS

1,*Hannah Ruben and 2Ancy


1 Department of Physics, Women’s Christian College, Chennai, TN
2Post Graduate Department of Physics, Women’s Christian College, Chennai, TN

*Corresponding author:hancath79@gmail.com

1. Introduction
Aerospace applications demand the increased use of alloys which are strong, stiff, ductile and
light weight. The ordered Ti-based intermetallic compounds having cubic L12 structure are found
to be ductile and are well suited for aerospace applications [1]. Hence an attempt has been made to study the electronic and structural properties of Ti3X (X = Al, Ga, In) in the cubic (L12) structure using first principles calculations.

2. Method of Calculation

The total energy and band structure studies are made within the atomic-sphere approximation by means of the Tight Binding-Linear Muffin Tin Orbital (TB-LMTO) method [2]. The potential is calculated within the density-functional prescription under the local-density approximation (LDA) using the parameterization scheme of von Barth and Hedin (von Barth et al 1972) [3]. The tetrahedron method for the Brillouin zone (i.e., k-space) integration has been used. The self-consistent iterations were carried out with an accuracy of 10⁻⁴ Ry for the eigenvalues, using 72 k points in the IBZ of the cubic structures.

3. Total energy calculations

The optimized lattice parameters of Ti3X (X = Al, Ga, In) in the cubic (L12) structure are determined using an energy minimization procedure. The total energy curves of the Ti3X (X = Al, Ga, In) alloys in the cubic (L12) structure are plotted against V/V0 and shown in Fig. 1. The optimised lattice parameter corresponding to the minimum total energy is extracted from the energy curve and tabulated in Table 1.
Table 1: Experimental and theoretical lattice constants (Å), N(EF) in states/eV and cohesive energy in eV/atom of Ti3X (X = Al, Ga, In) alloys in the cubic (L12) structure

Alloy    Theoretical    Experimental   Present     Total Energy   N(EF)         Cohesive Energy
         values (Å)     values (Å)     work (Å)    (Ryd/f.u.)     (states/eV)   Ecoh (eV/atom)
Ti3Al    3.997 [4]      -              3.99340     -5598.67       3.837         1.2806
Ti3Ga    3.945 [4]      -              3.94335     -8997.95       3.746         2.1842
Ti3In    4.078 [4]      4.22 [4]       4.06941     -16869.10      2.763         3.8215

Fig. 1: Total energy as a function of volume for (a) Ti3Al, (b) Ti3Ga and (c) Ti3In in the cubic (L12) structure
From Table.1 it is observed that the theoretically obtained equilibrium lattice constants are
underestimated compared to other theoretical and experimental values. This is partly ascribed to
the local density approximation (LDA) used in the calculations. The cohesive energies of these
compounds have been calculated and tabulated in Table.1. The cohesive energy of Ti3X (X=Al,
Ga, In) shows a linear increase as X goes from Al to Ga, In.
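As an illustrative sketch of the energy-minimization step described in this section (the volume and energy points below are invented placeholders, not the TB-LMTO results), the equilibrium volume and hence the lattice constant can be extracted by fitting the computed E(V) points and locating the minimum:

```python
# Minimal sketch: equilibrium lattice constant from an E(V) fit (assumed data).
import numpy as np

# Hypothetical scaled volumes V/V0 and total energies (Ryd/f.u., placeholders).
v_over_v0 = np.array([0.90, 0.95, 1.00, 1.05, 1.10])
energy = np.array([-5598.55, -5598.63, -5598.67, -5598.64, -5598.58])

# Fit a parabola E(V) = a*V^2 + b*V + c around the minimum of the curve.
a, b, c = np.polyfit(v_over_v0, energy, 2)
v_eq = -b / (2.0 * a)  # volume ratio at the energy minimum

# For a cubic cell V is proportional to a^3, so the equilibrium lattice
# constant scales as the cube root of the equilibrium volume ratio.
a0_reference = 3.99340  # example lattice constant in angstroms
a_eq = a0_reference * v_eq ** (1.0 / 3.0)
print(f"V/V0 at minimum: {v_eq:.4f}; equilibrium lattice constant ~ {a_eq:.4f} A")
```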

4. BAND STRUCTURE
The correlation of structural stability with electronic structure is examined using band structure calculations, plotted along high symmetry lines. Fig. 2 shows the band structures of Ti3X (X = Al, Ga, In) in the cubic (L12) structure for several symmetry directions in k-space. The band structures of Ti3X (X = Al, Ga, In) in the cubic (L12) structure have the same generic nature in the B2 and B19 phases, respectively.

Fig. 2: Band structure of (a) Ti3Al, (b) Ti3Ga and (c) Ti3In compounds

5. DENSITY OF STATES (DOS)


In order to have an insight into phase stability at the microscopic level, the DOS curves are calculated for Ti3X (X = Al, Ga, In) in the cubic (L12) structure at their equilibrium volumes and are plotted in Fig. 3. The overall topology of the DOS curves of Ti3X (X = Al, Ga, In) in the cubic (L12) structure is found to be in good agreement with the previous results reported by Hong et al [5]. The Fermi level is observed to fall on a peak for the Ti3X (X = Al, Ga) compounds, as shown in Fig. 3(a) and Fig. 3(b), resulting in the metastability of the system [6]. It is interesting to note from Table 1 that the Ti3In compound possesses a low N(EF) value due to the Fermi level falling on the pseudogap in the DOS curve shown in Fig. 3(c). From these observations it is concluded that the Ti3In compound is expected to be more stable in the L12 structure compared to the Ti3X (X = Al, Ga) compounds.

Fig. 3: Density of states curves of (a) Ti3Al, (b) Ti3Ga and (c) Ti3In compounds

CONCLUSION
• The electronic and structural properties of Ti3X (X = Al, Ga, In) compounds in the cubic (L12) structure are studied.
• The lattice parameter value of Ti3X (X = Al, Ga, In) in the cubic (L12) structure is observed to be underestimated compared to the available experimental and theoretical values.
• Studies on the band structure and density of states curves show that the Ti3In compound is expected to be more stable in the L12 structure compared to the Ti3X (X = Al, Ga) compounds.

REFERENCES
[1] Yu.V. Milman, D.B. Miracle, et al. (2001), ‘Mechanical behaviour of Al3Ti intermetallic and L12 phases on its basis’, Intermetallics, Vol. 9, pp. 839-845.
[2] Anderson O.K. and Jepsen O. (1984), ‘Explicit, first-principles tight-binding theory’, Phys. Rev. Lett., Vol. 53, pp. 2571-2574.
[3] U. von Barth and L. Hedin (1972), ‘A local exchange-correlation potential for the spin polarized case’, Journal of Physics C: Solid State Physics, Vol. 5, Number 13.
[4] Ravindran P, Subramoniam G and Asokamani R (1995), ‘Phase stability studies of Ti3X (X = Al, Ga, In) and Ni3(Al,Nb) systems from electronic structure calculations’, Proc. of International Conference on Advances in Physical Metallurgy, ICPM-94.
[5] Hong N.M., Holubar T., Hilscher G. (1994), ‘Magnetic properties of RNi4B (R = rare earth metal)’, Proc. of Joint MMM & Intermag. Conf., Paper PQ-20.
[6] Xu J.H. and Freeman A.J. (1990), ‘Phase stability and electronic structure of ScAl3 and ZrAl3 and of Sc-stabilised cubic ZrAl3 precipitates’, Phys. Rev. B, Vol. 41, pp. 12553-12561.

P-28
EVOLUTION OF STARS – A REVIEW

S.Nivedha and T.S.Renuga Devi*


Department of Physics, Women’s Christian College, Chennai, TN.
*Corresponding author: drrenugadeviwcc@gmail.com

INTRODUCTION
Stars take millions to billions of years to evolve. We cannot observe how an individual star changes
with time, as our life span is very short. Yet we can observe many stars in our Galaxy in all stages
of their evolution, and from these we can figure out the order in which they evolve. Today we even have
theoretical models for the evolution of stars, which help us understand the observations even
better. This paper is a review on the 'Evolution of Stars'.

OBSERVABLE FACTORS
Telescopic observations give the luminosity, mass, chemical composition and surface
temperature of a star. The temperature, which ranges from about 3,000 °C to 50,000 °C, can be
determined from the colour of the star (the spectrum emitted by the photosphere); the hottest stars
are blue and the coolest stars are red.

BIRTH OF A STAR
A star is born in a cloud of gas called a nebula, which extends up to many light-years across space and
contains enough mass to make several thousand stars. These nebulae consist mainly of molecular
hydrogen together with helium, i.e., they are molecular clouds. Irregularities in the density of the gas cause a net
gravitational force that pulls the gas molecules closer together. As the collapse continues, the molecules
come closer and their potential energy is converted into kinetic energy; collisions with
other molecules and atoms convert this kinetic energy into thermal energy,
which raises the temperature. Eventually, the cloud fragments into many smaller clouds,
each of which may become a star. The core of the gas collapses much faster than the outer part, and in order
to conserve angular momentum the cloud begins to rotate faster and faster. When the core
reaches a temperature of about 2,000 K the hydrogen molecules break into atoms, and by about
10,000 K the gas becomes ionized. The collapsing object at this stage is called a 'PROTOSTAR',
the early stage of a star, and this stage could last about 500,000 years.

STABILIZATION OF A STAR
When the pressure and temperature in the core are high enough to sustain nuclear fusion, the
outward pressure generated by fusion acts against the gravitational pull. The properties and the fate of the star are
determined by the mass of gas it started with. The star becomes a main sequence star and continues to shine as long as the
gravitational pull is balanced by the outward pressure produced by nuclear fusion.
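This balance can be made quantitative through the standard condition of hydrostatic equilibrium (quoted here for illustration, not taken from the paper): dP/dr = −G M(r) ρ(r) / r², where P is the pressure, ρ the density and M(r) the mass enclosed within radius r; fusion supplies the energy that maintains this pressure gradient against gravity.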

LOW MASS STARS


Stars of about 1 solar mass tend to remain on the main sequence for about 10 billion years, until
the hydrogen in the core has fused into helium. The helium core then starts to contract further and further
until it is hot enough for helium to fuse into carbon. The outer layer expands,
cools and shines less brightly; this expanding star is called a RED GIANT.
The red giant phase is common to stars of both low and high mass. When a low
mass star runs out of fuel (when there is no more helium to fuse into heavier elements),
the outer layer drifts away from the core as a gaseous shell and becomes a planetary nebula.

FORMATION OF ELEMENTS IN STARS WITH HIGH MASS


When high mass stars (about 3 times the solar mass or more) undergo nuclear fusion, the hydrogen
is fused into helium and the star becomes a red giant. The core becomes much hotter than in
low mass stars, which allows helium to fuse into carbon, with each layer of the core going
inwards being hotter and able to fuse heavier elements. Carbon is fused into oxygen; if the star
still has a sufficient amount of material to fuse, it is converted into neon and then silicon,
eventually fusing into iron, the heaviest element that can be fused in the core of any
star. As fusion proceeds, the star expands due to the release of energy, with each of these layers
performing a particular type of fusion until the fuel is exhausted. The result is an explosion that
ejects most of the heavy nuclei back into space; this massive explosion is called a
SUPERNOVA, a vibrant and energetic phenomenon so bright that it outshines the stars
of an entire galaxy. The heaviest elements are synthesized during such events: elements with an
atomic number greater than 26 are produced only during a supernova, or in the rare
collision of neutron stars or of a black hole with a neutron star.

FINAL PHASE OF A STAR


For low mass stars, the core remaining after the planetary nebula phase becomes a WHITE
DWARF, which eventually cools and dims. As the years go by, it loses energy, stops shining
and becomes a BLACK DWARF. For massive stars, with masses of about 3 to
50 times that of our Sun, the fate depends on the mass of the core that survives the supernova explosion:
if the remaining core is about 1.5 to 3 solar masses it contracts to become a tiny, dense
NEUTRON STAR, and if the core is more massive than about 3 solar masses it collapses
into a BLACK HOLE.

P-29
COMPARATIVE STUDY BETWEEN HALL EFFECT SENSOR AND TUNNEL
MAGNETO RESISTIVE SENSOR (TMR MAGNETOMETER)

G. Yuvasri
Department of Physics, Women’s Christian College, Chennai,TN
*Corresponding author:gopiyuva166@gmail.com

INTRODUCTION
A magnetic sensor is a device that responds to changes in a magnetic field. It has applications in
diverse fields such as factories, automobiles, airplanes, etc. In the present work, a comparative
study between the Hall Effect sensor (HES) and the Tunnel Magneto Resistive (TMR) sensor has been
made to compare their performance in terms of sensitivity, temperature dependence and noise–frequency
measurements. The Hall Effect sensor works on the principle of the Hall Effect: if a magnetic field
is applied perpendicular to the flow of current through a conductor, a voltage called the Hall
voltage is set up in the direction perpendicular to both the magnetic field and the current. This voltage is
created by the Lorentz force [2]. The Magneto Resistive sensor works on the principle of
magnetoresistance: when a magnetic field is applied to a magnetoresistor, it experiences a
change in resistance, and the change in resistance increases with increasing magnetic field [3].
COMPARISON BETWEEN HALL EFFECT SENSOR AND TMR SENSOR
The major difference between the two sensors is that the Hall Effect sensor responds to a perpendicular
magnetic field, whereas the TMR sensor responds to a parallel magnetic field.
SENSITIVITY
Inexpensive Hall Effect sensors are usually made of silicon. Fig. 1(a) shows the response curve of
a Hall Effect sensor made of silicon. The sensitivity of the Hall sensor is obtained from the
variation of the Hall voltage with magnetic field. The output Hall voltage increases with
the increase in the applied perpendicular magnetic field. As the magnetic field is further increased, the response
reaches a saturation region in which the sensitivity decreases, so the response curve becomes non-linear.

Fig. 1(b) shows the response curve of the TMR sensor, which is non-linear with magnetic field. The
sensitivity of the TMR sensor is obtained from the variation of the magnetoresistance with magnetic field.
Since the hysteresis is very small, the curve is almost perfectly linear in its central region, i.e., near
the zero field region, and very high sensitivity is observed there. The
TMR sensor has three layers placed over one another: a free layer, a barrier layer and a pinned layer.
When the free layer magnetization is antiparallel to the pinned layer magnetization the
resistance is high (RH), and when it is parallel the resistance is low (RL). When their
magnetizations are perpendicular the resistance is halfway between RH and RL.
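The figure of merit usually quoted for such a junction is the tunnel magnetoresistance ratio, commonly defined (a standard definition, not given explicitly here) as TMR = (RH − RL) / RL; the larger the spread between RH and RL, the steeper the transfer curve near zero field and hence the higher the sensitivity.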

The slope of the response curve gives the sensitivity of the sensor. The slope of the response
curve of the TMR sensor is observed to be larger than that of the Hall Effect sensor [5]. Hence it is concluded
that the sensitivity of the TMR sensor is larger than that of the Hall Effect sensor.

TEMPERATURE DEPENDENCE
Fig. 2(a) shows the variation of the Hall voltage with temperature. It is
observed that the Hall voltage does not vary much with temperature; hence
the sensitivity of the Hall sensor remains nearly constant. Fig. 2(b) shows the variation of the TMR
sensor output with temperature. The output of the TMR sensor tends to decrease
with increasing temperature; hence its sensitivity tends to decrease [6].

NOISE – FREQUENCY MEASUREMENTS


Fig. 3 shows the spectral noise measurements of the Hall and TMR sensors. The noise level of the Hall
Effect sensor is greater than that of the TMR sensor, so the output of the TMR sensor will be
cleaner than that of the Hall Effect sensor [7].

APPLICATIONS AND ADVANTAGES


Hall Effect sensors are used for proximity detection, speed detection, current sensing and power
sensing. Magneto Resistive sensors are used for wheel speed detection, metal detection, earth
magnetic field detection and biosensors. Both Magneto Resistive and Hall Effect sensors can
operate without any contact with the physical element, but both are sensitive to interfering magnetic
fields, and the Magneto Resistive sensor has poorer temperature characteristics than the Hall Effect sensor.

CONCLUSION
From the above comparative study, it is concluded that the TMR sensor is better than the Hall Effect
sensor due to its higher sensitivity and lower noise, whereas the temperature characteristics of the Hall
Effect sensor are better than those of the TMR sensor.

REFERENCES
[1] https://www.physics-and-radio-electronics.com/electronic-devices-and-circuits/passive-components/resistors/magnetoresistor.html
[2] http://sensors-actuators-info.blogspot.com/2009/08/hall-effect-sensor.html
[3] http://sensors-actuators-info.blogspot.com/2009/08/magnetoresistive-sensor.html
[4] https://ieeexplore.ieee.org/document/1634415
[5] http://www.dowaytech.com/en/1776.html
[6] https://www.researchgate.net/publication/221844028_Hall_Sensors_for_Extreme_Temperatures
[7] Characteristics of TMR Angle Sensors – AMA Science.
[8] Wanxiang Feng et al., Half-Heusler topological insulators: A first-principles study with the
Tran-Blaha modified Becke-Johnson density functional, Physical Review B 82, 235121, 2010.
[9] Binghai Yan and Anne de Visser, Half-Heusler topological insulators, Materials Research
Society, 39, 2014.

P-30
TIME TRAVEL AND ITS POSSIBILITIES

Divyanshi Dubey
Department of Physics, Women’s Christian College, Chennai, TN
*Corresponding author:hancath79@gmail.com

ABSTRACT
Scientists have varied opinions on the possibility of time travel. Time travel, in simple words, can be
defined as the phenomenon where one can jump or travel from one point in time to another.
Time travel can be considered in two directions: to the past and to the future. Mathematically
speaking, time travel is indeed possible within the principles of quantum mechanics. Hence, in the
present paper an attempt has been made to explore whether time travel is feasible or not in the
real world.

SPECIAL RELATIVITY AND TIME TRAVEL TO THE FUTURE


In 1887, Albert Michelson and Edward Morley devised a sophisticated experiment, famously known as
the Michelson–Morley experiment, and found an astonishing result: the velocity of a
light beam passing us in the direction of the Earth's motion, as well as in the direction opposite to the
Earth's motion, was the same. The outcome indicated that the velocity of light is the same
for all observers, even if they are moving relative to each other. With this remarkable
information in hand, Einstein came up with his two famous postulates of special relativity
in 1905: first, the laws of physics look the same to every observer in uniform motion (constant
speed and constant direction); and second, the velocity of light through empty space is the
same as witnessed by every observer in uniform motion. According to Einstein's special
relativity, universal time does not exist. Time is therefore different for observers moving
with different speeds, and this opens the way to time travel.

THE TWIN PARADOX AND THE TIME TRAVELERS


The twin paradox says that if one of two twins (say twin 1) heads towards a distant star at a
velocity very close to that of light and returns, she will age less than her twin (say
twin 2) on the Earth. This is because the two twins have different experiences, and that is the key
to resolving the paradox. Twin 2 stays on the Earth, so her motion is uniform and satisfies
Einstein's first postulate. Twin 1, however, accelerates and changes direction while travelling to the distant
star and back, so she does not satisfy Einstein's first postulate; her clock's light beams
travel a larger distance and hence she ages less. The conclusion of the paradox is that
accelerating clocks tick more slowly, so an astronaut would age more slowly than us because she is
accelerating.
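To make the age difference concrete (an illustrative calculation, not from the paper): in the idealised version of the paradox the travelling twin's elapsed time is Δt' = Δt √(1 − v²/c²), so a journey that lasts 10 years for the stay-at-home twin, made at v = 0.9c, lasts only 10 × √(1 − 0.81) ≈ 4.4 years for the traveller.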

TIME TRAVEL TO THE PAST AND THE GRANDMOTHER PARADOX


The speed limit of the universe is the speed of light, so one cannot travel to the past by simply
outrunning a light beam. We could, however, travel to the past if we take shortcuts and thereby beat the light beam.
Einstein's theory of gravity shows that spacetime may be curved, even into a closed geometry like a cylinder,
and such curvature opens up paths to the past. The possibility of time travel to the past
raises an interesting question: what if you go back in time and kill your own
grandparents? Your parents would never be born, implying you would not exist. This is the
famous grandmother paradox. Two proposed resolutions of this paradox are the possibility of
parallel universes and the self-consistency of timelines.

COSMIC STRINGS
Cosmic strings are thin strands of high-density material left over from the early universe. A cosmic
string would be narrower than an atomic nucleus yet have a mass of about 10 million tons per
centimetre. They are under constant tension and have no ends, so
they are either infinite in length or exist in the form of closed loops. Since they are so massive, they
warp the spacetime around them. One solution of Einstein's equations for the
spacetime geometry around a cosmic string is a cone
(practically, it looks like a pizza with a slice missing). The circumference of a circle drawn around the string is
then less than that expected from Euclidean geometry. If one travels from the Earth to a nearby quasar,
one can take two paths (the two sides of the missing slice). The two paths are not equal, so one can
take the shorter path, i.e. a shortcut, and beat the light beam; and the moment you travel faster than
light, you are travelling to the past.
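The size of the missing slice can be quantified (a standard result for an idealised straight string, quoted here only for illustration): the deficit angle is Δθ = 8πGμ/c², where μ is the mass per unit length of the string, so heavier strings remove a larger wedge and offer a bigger shortcut.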

BLACK HOLES - THE NATURAL TIME MACHINES


Two infinitely long cosmic strings passing by each other at the requisite high speed create a
region of time travel around the point where they cross. Another way to travel to
the past is to manipulate a cosmic string loop so that it collapses, forming two straight
sections of loop that pass by each other at a speed high enough to create the requisite
conditions for time travel. But a huge problem exists: such a massive string loop would
become so compact as it collapses that it would be in danger of forming a black hole. When
the loop collapses, the string segments pass each other in such a way that the loop has some
angular momentum, so it must form a rotating black hole; the possible region of time travel
might be trapped inside such a black hole. If a person enters the black hole, three things can happen:
(a) he can be ripped apart by tidal forces; (b) he can travel to the past, only to be ripped apart by the
highly curved spacetime afterwards; or (c) he can travel back in time and emerge in a different universe,
but he can never return to the original universe. We do not know what happens within the singularity,
so perhaps one could return after an expedition to the past, but this remains a mystery. Time travel to the
past therefore appears difficult at best; if we want to search for places where the required extreme
conditions exist, we can look to the interiors of black holes or to the beginning of the universe. Black
holes are indeed one of the places to look for a naturally occurring time machine!

CONCLUSION
1. NASA's Parker Solar Probe, the fastest human-made object, has reached a maximum speed
of 153,545 mph (about 68.6 kilometres per second). That is still very much less than the speed of
light (300,000 km/s), so at present we cannot travel appreciably into the future.
2. Travel to the past is even more demanding. Options such as black holes, wormholes, warp
drives or flying around an infinitely long cylinder rotating nearly at the speed of light are
still discussed, but to keep a wormhole or warp drive open, negative energy density matter
(matter which weighs less than nothing) is required. "Also humans may not be able to
tolerate time travel at all. Travelling nearly at the speed of light would only take a
centrifuge, but that would be harmful," said Jeff Tollaksen, a professor of physics at
Chapman University, in 2012.


P-31
ANTIMATTER – THE FUTURE GAME CHANGER

Shreya Hembrom
Department of Physics, Women’s Christian College, Chennai,TN
*Corresponding author:hancath79@gmail.com

ABSTRACT
Antimatter has been a subject of scientific investigation since its inception in 1896, when it was
first postulated by Arthur Schuster. The idea received theoretical backing in 1928, when Paul Dirac
predicted its existence; he won a Nobel Prize along with Erwin Schrödinger in 1933. Later, in
1936, the American physicist Carl D. Anderson won a Nobel Prize for the discovery of the positron.
Many scientists have thus devoted their careers to the study of antimatter. Had there
ever been an equal amount of antimatter along with matter, everything in the universe would have
been annihilated. In the present work, an attempt has been made to study various forms of antimatter
and their applications.
TABLE: Relationship Between Particles and Antiparticles
S.No.  Parameter                        Relationship
1.     Mass                             same
2.     Spin                             same
3.     Charge                           opposite sign but same magnitude
4.     Magnetic moment                  opposite sign but same magnitude
5.     Mean life in free decay          same
6.     Creation                         in pairs
7.     Annihilation                     in pairs
8.     Total isotopic spin (I)          same
9.     3rd component of isospin (Iz)    same magnitude but opposite sign
10.    Intrinsic parity                 same for bosons, opposite for fermions
11.    Strangeness (S)                  opposite sign but same magnitude

ANTIMATTER PRODUCTION
Antimatter is hardly found naturally and must be produced artificially. Due
to technological limitations, even at CERN the production of antimatter amounts to only about 1 to
10 nanograms. In 2008, the cost of producing picogram quantities of antiprotons was valued at about 20 million US$.
Moreover, storage of antimatter is typically done by trapping electrically charged antiparticles, or frozen
antihydrogen pellets, in Penning or Paul traps. Thus the production of antimatter is expensive, and
containing it is very difficult because it cannot be stored in a container made of ordinary matter.

ANTIMATTER EXAMPLE
In 1995, physicists at CERN announced that they had successfully created the first atoms of
antihydrogen at the Low Energy Antiproton Ring (LEAR), the atoms travelling at speeds close to c (c = 3×10^8 m/s).
ANTIMATTER ROCKETS
An antimatter rocket is a proposed class of rocket that uses antimatter as its power source.
Its advantage is that a large fraction of the rest mass of a matter/antimatter mixture
may be converted into energy, allowing antimatter rockets to have a far higher energy
density and specific impulse than any other proposed class of rocket.
TYPES OF ANTIMATTER ROCKETS
Antimatter rockets can be of varying types, including ones that:
1. Directly use the products of antimatter annihilation for propulsion.
2. Heat a working fluid or an intermediate material with the antiproton annihilation products, which is
then used for propulsion.
3. Heat a working fluid or an intermediate material to generate electricity for some form of
electric spacecraft propulsion system.

THEORIZED ANTIMATTER WEAPONS


A gram of antimatter could produce a detonation equivalent to a nuclear bomb. However, humans
have produced only an infinitesimal amount of antimatter: all of the antiprotons created at Fermilab's
Tevatron particle accelerator add up to only 15 nanograms, and those made at CERN amount
to about 1 nanogram. The annihilation of 1 g of antimatter with 1 g of matter can release about 1.8×10^14 J, as per
Einstein's mass–energy equivalence E = mc², i.e. about 43 kilotons of TNT equivalent, roughly
twice the yield of a 20 kiloton bomb. As of now, a weapon of this kind is greatly debated in
regard to its potential as a military weapon.
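For completeness, the arithmetic behind these figures (an illustrative check using only E = mc² and 1 kiloton of TNT ≈ 4.184×10^12 J): annihilating 1 g of antimatter with 1 g of matter gives E = 0.002 kg × (3×10^8 m/s)² = 1.8×10^14 J, and 1.8×10^14 / 4.184×10^12 ≈ 43 kilotons of TNT equivalent.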
ANTIMATTER APPLICATION
1. PET (positron emission tomography) uses positrons to produce high-resolution images
of the body.


P-32
BIO- DEGRADABLE MATERIALS FOR GREEN AND SAFETY FOOD PACKAGING

B. Rajani1,*, U. Manoj Kumar2


1 Assistant Professor, Alpha Arts and Science College, Chennai, TN.
2 III Yr B.Sc. Electronics & Communication Science, Alpha Arts & Science College, Chennai.
*Corresponding author:rajini1212@gmail.com

ABSTRACT
Today, environmental safety is an alarming concern due to increasing population and pollution. Meeting
the needs of a growing population makes food packaging a challenging problem, because plastic
packaging materials use petroleum-based synthetic polymers which are not biodegradable and cause
environmental pollution. These polymers also harm wildlife. In the future
there will be a shortage of petrochemical materials, and the demand for eco-friendly biodegradable
materials to replace plastics will increase. Biopolymers can be degraded by microorganisms, and
the degradation products are non-toxic. However, the use of biopolymers is limited by their poor physical and
barrier properties, which can be improved by adding strengthening nanoparticles or
fillers to form composites. This article analyses some biopolymers and nanoparticles used to
form bio-nanocomposite materials.
Key Words – Food packaging, biopolymers, nanocomposites.

1. INTRODUCTION
Millions of tons of food are wasted annually around the globe, and food waste may rise to over 200
million tons by 2050. Packaging has recently been identified as a key challenge of food
consumption. It plays a key role in maintaining food quality during storage by
controlling gas and vapour exchange with the exterior atmosphere and by preventing chemical
contamination of the food. Packaging is often wrongly considered an additional economic and
environmental cost rather than an added value for waste reduction. When a food product is
consumed or wasted, the packaging is discarded, leading to environmental problems. Plastic-based
packaging materials are mostly oil-based, and world plastic production is increasing tremendously
every year. After use, about 40% of food packaging ends up in landfill and accumulates in
soils, while about 32% leaks out of collection and sorting systems and ends up in the soil and the ocean. This marine
and soil litter degrades into micro- and nano-sized particles which easily enter living
organisms such as fish, which are then eaten by human beings. If production and use continue in this way,
there may eventually be more plastic than fish in the ocean.
2. FUNDAMENTAL ROLE OF FOOD PACKAGING
The key functions of food packaging are to preserve food quality, to protect the product from dirt,
insects, humidity and breakage, and to reduce food-borne diseases.
3. CLASSIFICATION OF BIO POLYMERS

3.1 Polymers extracted directly from bio resources

Ex: Proteins such as soya protein isolate (SPI), wheat protein isolate (WPI), corn protein and gelatin;
carbohydrates such as starch and cellulose.
Starch: Starch is a renewable raw material, owing to its recurring availability from many
plants and its low cost. In the production of thermoplastic starches, plasticizers are used to provide
stability to the product. Corn is the major source of starch for bio-plastics; starch is also available
from potato, soya, wheat, rice, barley and oats. Compared with oil-based packaging, bio-sourced
bio-plastics (Bio-PE, PLA and others) use food resources such as corn or cane sugar, which increases
pressure on food security and on agricultural land. Moreover, some of these are not readily biodegradable
and are fit only for industrial composting (PLA), so waste management is still required.

3.2 Polymers produced by chemical synthesis from biomass

These are polymeric materials synthesized by polymerization, such as aliphatic–aromatic
copolymers and aliphatic polyesters, using renewable bio-based monomers.
Ex: Polymers produced from biomass such as polylactic acid (PLA), polycaprolactone (PCL),
polyvinyl alcohol (PVA) and polyglycolic acid (PGA).

Poly lactic acid (PLA)


Polylactic acid is a thermoplastic aliphatic polyester largely used in the production of renewable
packaging materials. It is one of the most important biodegradable polymers for disposable
packaging owing to its good mechanical properties, but its use is limited by its poor durability and slow
degradation. When PLA is mixed with organophilic clay at the nanoscale, nanocomposites
are formed with improved gas barrier, mechanical properties such as Young's modulus, and heat
resistance.

Polymers produced by chemical synthesis from petroleum-based products

Ex: Polymers produced from petroleum, such as polycaprolactone (PCL), polyvinyl alcohol (PVA)
and polyglycolic acid (PGA). Polycaprolactone is made by ring-opening
polymerization of ε-caprolactone using anionic or cationic catalysts, or by free-radical ring-
opening polymerization of 2-methylene-1,3-dioxepane. It is a semi-crystalline polymer and
exhibits high elongation at break. Owing to its physical properties and commercial availability, it is
widely used in packaging. Because of its low melting point, it is blended with other polymers
and with nanofillers, for example nanocomposites based on polycaprolactone/organically modified
silicate.

3.3 Polymers produced by microorganisms or bacteria

Ex: Polyhydroxybutyrate (PHB), bacterial cellulose, curdlan and pullulan.
Polyhydroxybutyrate (PHB)
Polyhydroxybutyrate is an eco-friendly polymeric material of the polyhydroxyalkanoate (PHA)
family used in food packaging. PHB is a biodegradable plastic produced by
microorganisms (such as Alcaligenes eutrophus or Bacillus species). PHB has
many potential advantages over petrochemically derived plastics in packaging applications, since it is non-
toxic and well matched with many foods, such as dairy products, fresh meat products and ready
meals. The problems posed by its high melting point and crystalline nature are addressed by the addition of
hydroxyvalerate monomers to produce polyhydroxybutyrate-co-valerate (PHBV). PHBV degrades
into carbon dioxide and water under aerobic conditions. Research is ongoing to improve the
mechanical properties of PHB by the addition of eco-friendly benign materials.

4. CONCLUSION
We require more than 230 million tonnes of packaging every year, and industry demands
efficient and cost-effective packaging. Systematic research on the side-effects of
biodegradable packaging can make it more ecological in the long run. There is also a need to
educate people about the proper disposal of biodegradable packaging, so that it does not end
up in landfill.


P-33
INTERACTING ANTI-SYMMETRIC TENSOR FIELD THEORIES

K. Ekambaram1,*, A. S. Vytheeswaran2
1 Kanchi Shri Krishna College of Arts and Science, Kanchipuram, Tamil Nadu
2 Department of Theoretical Physics, University of Madras, Tamil Nadu
*Corresponding author: kekam115@gmail.com

ABSTRACT
Antisymmetric tensor fields interacting with vector fields in Abelian and non-Abelian
forms are discussed. Relevant gauge invariant quantities, including the Hamiltonian, are
obtained. Some remarks are also made about two-form gauge theories.

INTRODUCTION
Antisymmetric tensor fields are present in various areas of physics, for example as mediating fields in
string theories and in black hole theory. These fields have also been considered as an alternative
mechanism for generating masses for gauge fields. Antisymmetric tensor fields are also the
subject of duality mechanisms, which explore their equivalences with other theories. Here we discuss
the gauge invariances of antisymmetric tensor fields interacting with vector fields in
Abelian and non-Abelian forms.

THE ABELIAN INTERACTION THEORY


The Abelian antisymmetric tensor field [2] B_μν interacting with a vector field A_μ is described by
the Lagrangian density given in Eq. (1), with

   F^μν = ∂^μ A^ν − ∂^ν A^μ   and   H^μνλ = ∂^μ B^νλ + ∂^ν B^λμ + ∂^λ B^μν.

This Lagrangian is invariant under separate A_μ and B_μν gauge transformations. Using the Lagrangian
above, the canonical momentum fields are π^μ and π^μν, conjugate to A_μ and B_μν respectively, and
the canonical Hamiltonian is given in Eq. (2). The constraints of this system are

   π^0 ≈ 0;   π^0i ≈ 0;   −∂_i π^i ≈ 0.   (3)

The above constraints are all of the first class; they are the generators of the gauge
transformations mentioned above. The theory can be handled further by standard methods such as
gauge fixing, followed by canonical quantisation or path integral methods.

THE NON-ABELIAN INTERACTION THEORY

The non-Abelian antisymmetric tensor field [3] (B^a)_μν interacting with a vector field (A^a)_μ is
described by the Lagrangian density given in Eq. (4), where, using the covariant derivative D_μ, we have

   (F^a)^μν = (D^μ A^ν)^a − (D^ν A^μ)^a   and   (H^a)^μνλ = (D^μ B^νλ)^a + (D^ν B^λμ)^a + (D^λ B^μν)^a.

Here a, b, c are group indices, and μ, ν, λ = 0, 1, 2, 3 are Lorentz indices. The above Lagrangian is
invariant under the usual gauge transformations of A^a_μ; however, it is not invariant under the
B^a_μν gauge transformations. This is unlike the Abelian case, and the non-invariance is due to
the non-Abelian nature of the fields. In phase space, the canonical momentum fields conjugate to
A^a_μ and B^a_μν respectively are obtained from the Lagrangian in the standard way, and the canonical
Hamiltonian, obtained directly from the Lagrangian, is given in Eq. (5).

Among the full set of constraints is

   (ψ^a)^i(x⃗) = (1/2) g f^abc (F^b)_jk (H^c)^ijk − g f^abc (π^b)^ij [ (π^c)_j − (m/2) ∈_jkl (B^c)^kl ] (x⃗) ≈ 0.

This set of constraints defines a constraint surface Γ in the phase space. Using Dirac's
procedure [1], all these constraints can be classified: some of them, together with Λ^a, are of the first
class, while Λ^a_i and ψ^a_i are of the second class. Further, the ψ^a_i constraints have non-zero Poisson
brackets among themselves.

The above non-Abelian interaction theory is much more complicated in structure than the
Abelian case. The second class constraints are eliminated by the use of Dirac brackets. Some of the Dirac
brackets are the same as the Poisson brackets, while some of the expressions differ owing to the non-Abelian
nature of the fields [5]. These Dirac brackets replace the corresponding Poisson brackets in various
calculations, so that the second class constraints need no longer be considered. For quantisation, the
Dirac brackets are taken over to commutators, so that the quantised theory
also has no second class constraint operators.
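For reference, the Dirac bracket used here is the standard one (its general form is not written out in the paper): if χα denote the second class constraints and Cαβ = {χα, χβ} is their Poisson bracket matrix, then {A, B}_D = {A, B} − {A, χα}(C⁻¹)^αβ{χβ, B}, with summation over α and β; by construction the Dirac bracket of any quantity with a second class constraint vanishes.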

In the literature there is a method called gauge unfixing [4], in which the second class
constraints can be converted into first class constraints by working within the original phase space; this can
be done here in two ways [6]. In the first case, the Λ^a_i have zero Poisson brackets among themselves; to obtain
a new gauge theory we retain the Λ^a_i, redefined as χ^a_i, and discard the ψ^a_i as constraints. A gauge invariant
Hamiltonian is then obtained, and the corresponding Lagrangian follows by an inverse Legendre transformation;
this Lagrangian has additional terms compared with the old Lagrangian. In the second case the ψ^a_i are not
redefined as in the first case, since the ψ^a_i have non-zero Poisson brackets among themselves, so an iterative
procedure using a projection operator is employed to obtain a new gauge theory. This is much more complicated
because of the non-zero Poisson brackets of the ψ^a_i. The gauge invariant Hamiltonian obtained in this way is
an infinite series; at every step of the iterative procedure the application of the projection operator yields a
gauge invariant Hamiltonian together with the corresponding gauge invariant fields. Needless to say, the result
is also in infinite series form.

REFERENCES
[1] P. A. M. Dirac, Lectures on Quantum Mechanics, Yeshiva University (1964).
[2] A. Lahiri, Modern Physics Letters A, Vol. 8, No. 25 (1993) 2403.
[3] E. Harikumar, Amitabha Lahiri and M. Sivakumar, Phys. Rev. D 63 (2001) 105020.
[4] A. S. Vytheeswaran, IJMP A 13 (1998) 765, and references therein.
[5] K. Ekambaram and A. S. Vytheeswaran (2017), Int. J. of Mat. Sc. 12, 166.
[6] K. Ekambaram and A. S. Vytheeswaran, Journal of Physics: Conference Series 1000 (2018) 012141.

P-34

THE ELECTRONIC AND OPTICAL PROPERTIES OF AB STACKED BILAYER SILICENE – A FIRST PRINCIPLES APPROACH

Benita Merlinb, Rita Johna,*, Sarath Santhoshc

a Department of Theoretical Physics, University of Madras, Guindy Campus, Chennai, TN.
b,c Department of ECS, Alpha Arts and Science College, Chennai, TN.
*Corresponding author: ritajohn.r@gmail.com

ABSTRACT
Silicene, a 2D material finds potential application in electronic and optical industry due to its
electronic band structure similar to graphene. This study offers an analysis of optical properties
of AB stacked bi-layersilicene based on Density Functional Theory. The complex dielectric
function and complex refractive index are calculated in both parallel (||) and perpendicular (⊥)
polarization directions of the electromagnetic field. The optical observables like absorption,
reflection, and electron loss function have been studied. The static dielectric function is smaller
than that of graphene when light is polarized along the || polarization direction, and it is also smaller than
that of its host material. The oscillatory behaviour of silicene is found to lie more in the UV than in the IR and visible
regions. Several peaks are obtained in the absorption spectra due to the presence of parallel
bands, critical points, bands extrema and Van Hove singularities. The real part of dielectric
function reveals the existence of plasma frequency in both polarization directions indicating the
metallic nature of silicene throughout the optical spectrum. Study on refractive index clearly
displays the birefringence characteristics. Refractive index ‘n’ is greatest along the direction
when light is polarized || to the basal plane as in the case of single layer silicene. Reflectivity and
electron loss function are studied. The value of reflectivity is less than one and is greatest along
the ⊥ polarization direction. The electron loss function reveals that the peaks of silicene exhibit a red
shift from the ⊥ to the || polarization direction. The peak strength indicates that the energy of plasmons
in bilayer silicene is very high compared to the single layer. The collective excitations of π plasmons in the AB
stacked bilayer are weakened, while the π + σ plasmons are found to be enhanced. The in-depth
investigations arrive at fine results which would enable the prediction of its potential applications
in the optical and optoelectronic industries.
Keywords: AB stacked bilayer silicene, band structure, DOS, optical properties.

P-35
POWER DOMINATOR CHROMATIC NUMBER FOR SOME TREE GRAPHS
Dr. A. Uma Maheswari1, Bala Samuvel J2,*
1 Associate Professor, Dept. of Mathematics, Quaid-E-Millath Government College for Women, Chennai
2 Research Scholar, Dept. of Mathematics, Quaid-E-Millath Government College for Women, Chennai
*Corresponding author:bsjmaths@gmail.com

ABSTRACT
Let G = (V, E) be a finite, undirected and connected graph without loops and multiple
edges. A power dominator coloring of G is a proper coloring of G such that each vertex of G power
dominates every vertex of some color class. The minimum number of color classes in a power
dominator coloring of the graph is the power dominator chromatic number χpd(G). The main
purpose of this article is to investigate the power dominator chromatic number for some tree
graphs, namely the coconut tree and the bamboo tree.

Keywords: Coconut tree, Bamboo tree, Coloring, Power dominator coloring

AMS Mathematics Subject Classification (2010): 05C15, 05C69

INTRODUCTION
We consider graphs G(V, E) that are finite, connected and undirected, with no loops or multiple edges.
Domination and coloring are two main areas of study in graph theory, and these areas have been
comprehensively explored through different variants (see [1], [2], [3], [4]).
While formulating a problem related to electric power systems in graph-theoretic terms, Haynes et
al. [5] introduced the concept of power domination. A vertex set S of a graph G is defined as a power
dominating set of the graph if every vertex and every edge of the graph is monitored by S, following a
set of rules for the power monitoring system. Based on the concepts of coloring and power domination, K. S.
Kumar and N. G. David [6] presented a new coloring variant called the power dominator coloring of a graph.

I. PRELIMINARIES
Definition 1: Proper Coloring
A proper coloring [4] of a graph G is an assignment of colors to the vertices of the graph such that no two
adjacent vertices have the same color. The chromatic number χ(G) of the graph G is the minimum
number of colors needed in a proper coloring of G.

Definition 2: Dominator Coloring


A dominator coloring [1], [4], [7] of a graph is a proper coloring such that each vertex dominates every
vertex of at least one color class (a set of vertices with the same color). The dominator chromatic number χd(G)
of a graph G is the minimum number of colors needed in a dominator coloring of G.

Definition 4: Power Dominator Coloring


The power dominator coloring ([6],[8]) of G is a proper coloring of G, such that every single vertex of G
power dominates all vertices of some color class. The minimum number of color classes in a power
dominator coloring of the graph, is the power dominator chromatic number. It is denoted by 𝝌𝒑𝒅 (𝑮).

Definition 5:Coconut Tree


A coconut tree CT(m, n) is obtained by joining the path Pm to the star K1,n at one of the pendant
vertices of K1,n.

Definition 6: Bamboo Tree


Consider i copies of the path Pm of length m − 1 and of the star K1,n with n pendant vertices. Identify one of the
two pendant vertices of the jth path with the centre of the jth star, and identify the other pendant vertex of
every path with a single vertex u0 (u0 is not in any of the stars or paths). The graph obtained is a regular
bamboo tree.
That is, a bamboo tree is a rooted tree consisting of branches of equal length, the end points of which are
identified with the end points of stars of different size. A bamboo tree is called regular if the sizes of the
stars considered are equal.
Here we study the power dominator chromatic numbers of some tree graphs, namely the coconut
tree and the bamboo tree.
II. MAIN RESULTS
Theorem 1

For any m ≥ 2, n ≥ 3, the power dominator chromatic number of the coconut tree CT(m, n) is χpd(CT) = 3.
Proof
Let the graph CT(m, n) be the coconut tree obtained by joining the path Pm to the star K1,n at one of its pendant
vertices vj. The vertices of the star K1,n are v0, the apex vertex, and v1, v2, v3, …, vn, the
pendant vertices, and the path Pm has the vertices u1, u2, u3, …, um. The following assignment gives
a proper power dominator coloring of CT(m, n). Assign the color C1 to all the pendant vertices of
the star K1,n and the color C2 to the apex vertex. All the odd-indexed vertices of the path Pm (u1, u3, u5, …) are
colored with C3, and all the even-indexed vertices (u2, u4, …) are colored with
C1. Then each pendant vertex v1, v2, …, vn power dominates v0, i.e. the color class C2, and any vertex of
the path Pm also power dominates the apex vertex v0 and hence the class C2.

Thus every vertex of CT(m, n) power dominates some color class, and the power dominator coloring of the
coconut tree CT(m, n) uses 3 colors. That is, χpd(CT) = 3.
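As an independent illustration of Theorem 1 (not part of the paper), the following Python sketch builds a small coconut tree following Definition 5, applies the colouring described in the proof, and checks the power dominator condition using the usual single-vertex monitoring rule (a vertex monitors its closed neighbourhood, and a monitored vertex with exactly one unmonitored neighbour monitors that neighbour); the graph construction and helper names are illustrative assumptions.

def coconut_tree(m, n):
    # Adjacency sets for CT(m, n): star K(1, n) with apex 'a' and pendants v1..vn,
    # joined to the path P_m (u1..um) at the pendant vertex v1 (Definition 5).
    g = {}
    def add(x, y):
        g.setdefault(x, set()).add(y)
        g.setdefault(y, set()).add(x)
    for i in range(1, n + 1):
        add("a", "v%d" % i)
    path = ["u%d" % j for j in range(1, m + 1)]
    add("v1", path[0])
    for x, y in zip(path, path[1:]):
        add(x, y)
    return g

def monitored_by(g, v):
    # Vertices power dominated by the single vertex v: its closed neighbourhood,
    # then repeated propagation to the unique unmonitored neighbour, if any.
    mon = {v} | g[v]
    changed = True
    while changed:
        changed = False
        for u in list(mon):
            rest = g[u] - mon
            if len(rest) == 1:
                mon |= rest
                changed = True
    return mon

def is_power_dominator_coloring(g, colour):
    classes = {}
    for v, c in colour.items():
        classes.setdefault(c, set()).add(v)
    proper = all(colour[u] != colour[w] for u in g for w in g[u])
    return proper and all(
        any(cls <= monitored_by(g, v) for cls in classes.values()) for v in g)

g = coconut_tree(4, 5)
colour = {"v%d" % i: 1 for i in range(1, 6)}          # pendant star vertices -> C1
colour["a"] = 2                                        # apex vertex -> C2
colour.update({"u1": 3, "u2": 1, "u3": 3, "u4": 1})    # path: odd -> C3, even -> C1
print(is_power_dominator_coloring(g, colour))          # expected output: True

Plain adjacency sets are used so the sketch has no external dependencies; the same helper can be pointed at other colourings or other trees to experiment with the bounds.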
Theorem 2

For any m ≥ 2, n ≥ 3, the power dominator chromatic number of the bamboo tree BT(m, n) is χpd(BT) = m + 2.
Proof
Let the graph BT(m, n) be the bamboo tree, a rooted tree consisting of branches of equal length, the end
points of which are identified with the end points of stars of different size; a bamboo tree is called regular
if the sizes of the stars are equal. The vertices of the i-th copy of the star K1,n
are v0^i, the apex vertex, and v1^i, v2^i, v3^i, …, vm^i, the pendant vertices, and the path Pm^i has the
vertices u1^i, u2^i, u3^i, …, um^i, where i refers to the i-th copy of the path Pm; the root vertex of all i copies
of the path Pm is u0. The following procedure gives a proper power dominator coloring of
BT(m, n). Assign the colors C1, C2, …, Cm to the corresponding apex vertices v0^i of the stars in the bamboo
tree. All the pendant vertices of the i-th copy of the star K1,n are colored with Cm+1. All the odd-indexed
vertices of the path Pm^i (u1^i, u3^i, …) are colored with Cm+2, and all the even-indexed vertices of the
path Pm^i (u2^i, u4^i, …) are colored with Cm+1. Then all the pendant vertices v1^i, v2^i, v3^i, …, vm^i power
dominate v0^i, i.e. the color class Ci, and any vertex in the i-th copy of the path Pm also power dominates the
apex vertex v0^i and hence the class Ci.

Thus every vertex of BT(m, n) power dominates some color class, and the power dominator coloring of the
bamboo tree BT(m, n) uses m + 2 colors. That is, χpd(BT) = m + 2.

III. CONCLUSION
The concept of power dominator coloring relates coloring problems with power dominating sets
in graphs. Computing the power dominator chromatic number for some tree graphs, namely the
coconut tree and the bamboo tree, has been the main focus of this paper. There is scope for
finding the power dominator chromatic number of further families of graphs.

REFERENCES
[1] R. Gera, “On the Dominator Colorings in Bipartite Graphs,” Discuss. Math. - Graph
Theory, vol. 32, no. 4, pp. 677–683, 2012.
[2] C. Kaithavalappil, S. Engoor, and S. Naduvath, “On Equitable Coloring Parameters of
Certain Wheel Related Graphs,” Contemp. Stud. Discret. Math., vol. 1, no. No 1, pp. 3–
12, 2017.
[3] A. Vijayalekshmi and J. V. A. Sheeba, “Total Dominator Chromatic Number of Wheels
and Ladder Graphs through Computer Programming,” vol. 10, no. May, pp. 1173–1188,
2019.
[4] S. Arumugam and J. A. Y. Bagga, “On dominator colorings in graphs,” vol. 122, no. 4,
pp. 561–571, 2012.
[5] T. W. Haynes, S. M. Hedetniemi, S. T. Hedetniemi, and M. A. Henning, “Domination in
Graphs Applied to Electric Power Networks,” SIAM J. Discret. Math., vol. 15, no. 4, pp.
519–529, 2002.
[6] K. S. Kumar, N. G. David, and K. G. Subramanian, “Graphs and Power Dominator
Colorings,” Ann. Pure Appl. Math., vol. 11, no. 2, pp. 67–71, 2016.
[7] K. Kavitha and N. G. David, “Dominator Coloring of Some Classes of Graphs,” Int. J.
Math. Arch., vol. 3, no. 11, pp. 3954–3957, 2012.
[8] A. Uma Maheswari and Bala Samuvel J, “Power Dominator Chromatic Number for some
Special Graphs,” Int. J. Innov. Technol. Explor. Eng., vol. 8, no. 12, pp. 3957–3960, 2019.

P-36
TECHNOLOGY IN ACCELERATOR PHYSICS

Abiya Jose
Department of Physics, Women’s Christian College, Chennai
Corresponding author:hancath79@gmail.com

INTRODUCTION
Accelerator physics has always been an active field of research and a forerunner in technological
development, as shown by the construction of large accelerators, colliders and detectors. The
application of many theoretical aspects of physics can be seen in the areas of electronics,
spectroscopy and superconductivity.
COMPONENTS OF AN ACCELERATOR
Most present-day accelerators are synchrotrons, in which the applied magnetic field is
increased as the energy of the accelerating particle increases. The main components of a
synchrotron include the source, booster, storage ring, radio frequency (RF) cavities and main
injector. The particles are injected into a circular accelerator and accelerated to high energy,
transferred to a booster which further increases their energy, and finally allowed to collide with
another beam of particles.
RADIO FREQUENCY (RF) CAVITIES
The particles are accelerated by electromagnetic fields oscillating at radio frequencies. For this
purpose the accelerators are provided with metallic chambers called RF cavities. A charged particle
in this electromagnetic field receives an electric impulse which increases its energy and accelerates
it to high speed. The cavities are also used to bunch the beam together, by increasing or reducing the
acceleration of individual particles as the direction of the electric field changes.
RF CAVITIES INFLUENCE ON LUMINOSITY
Luminosity ℒ is a measure of the number of collisions that occur; it quantifies the
performance of the collider. It is the proportionality factor between the number of events per
second and the cross section σp:

   dR/dt = ℒ σp
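As an order-of-magnitude illustration (numbers assumed for the example, not taken from the paper): with ℒ = 10^34 cm⁻² s⁻¹ and a cross section σp = 100 mb = 10⁻²⁵ cm², the event rate is dR/dt = 10^34 × 10⁻²⁵ = 10^9 events per second.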

From the above figure we observe that the RF waves applied to the accelerators have an impact on
the luminosity of the colliders. The Tevatron at Fermilab had RF cavities operating at
53 MHz and brought 36 bunches of protons and antiprotons into collision. The KEKB accelerator,
during the Belle experiment, had a frequency of 508.887 MHz, whereas CERN has 16 RF cavities
each resonating at 400 MHz. KEKB was shut down for upgrading. SuperKEKB, in 2018,
began operation aiming at a very high design luminosity of 8×10^35 cm⁻² s⁻¹, but with the same RF frequency as
KEKB. This was possible due to the implementation of crab RF cavities. Crab cavities have
the ability to tilt the bunches in each beam, thus maximising their overlap at the collision
point. This forces every single proton to pass through the whole length of the opposite bunch,
increasing the probability of collision, which in turn increases the luminosity. By around 2025 the
HL-LHC at CERN is expected to implement this technology to increase the luminosity by about tenfold from
the present value.
SILICON DETECTORS
Silicon detectors are widely used in particle physics as tracking detectors for the
precise determination of particle tracks and decay vertices. These detectors measure the positions
of charged particles, and by using track reconstruction software the path of flight, the momentum
of the particle and the decay vertex can be deduced. The high inherent radiation resistance of these
detectors enables them to be used in harsh and extreme conditions. The ionizing radiation is
measured through the free electron-hole pairs produced in the semiconductor by the
radiation; the number of electron-hole pairs is proportional to the energy deposited.
Silicon vertex detectors played a role in the discovery of the top quark. CMS and ATLAS use
silicon strip detectors for the entire tracking system. In silicon strip detectors, the implants are thin strips
about 20 µm wide with an interstrip distance of 50-100 µm. These implants act as charge
collecting electrodes and are placed on a lightly doped, depleted silicon wafer, thus forming a one-
dimensional array of diodes. Hybrid pixel detectors are also a part of the inner tracking
device of the LHC. In these detectors the strips are further divided into short pieces called pixels.
The main advantage of the pixel detector is the separation of the sensors, which require high
resistivity, from the low-resistivity electronic parts; they can later be coupled to obtain the required
details.
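To illustrate this proportionality (an assumed example, using the commonly quoted figure of roughly 3.6 eV per electron-hole pair in silicon): a particle depositing 100 keV in the sensor liberates about 100,000 / 3.6 ≈ 28,000 electron-hole pairs, and it is this charge that the strip or pixel electrodes collect.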

ASIC
An ASIC (Application-Specific Integrated Circuit) is an integrated chip customized for a specific
purpose. ASIC chips are fabricated using metal-oxide-semiconductor technology. There are three levels of
customization: gate arrays, standard cell and full custom design. In gate arrays only the
metallisation layers providing interconnections between different areas of the chip are customized,
in standard cell designs the mask is customized, and in full custom design the ASIC is customized at the
transistor level. In order to record collision events accurately, the
detectors need special chips which can uniquely identify each event. ASIC chips are reliable, process
data faster and use less electrical power. Thus micrometre-precision
detectors can work more efficiently by increasing the number of channels collecting information.
CONCLUSION
The improvement of electronics and technology has helped in the exponential growth of
research in particle physics and in exploring new depths of high energy physics.

P-37
FUTURE OF ENVIRONMENTAL MONITORING: ROLE OF FERRITE
NANOPARTICLES AS GAS SENSOR

S. Jayanthi1, V. Saravanasundar2,* and G. Rajarajan3


1,2 Department of Electronics and Communication Science, Hindustan College of Arts and Science,
Chennai – 603 103, TN
3 Department of Physics, Hindustan Institute of Technology & Science, Chennai, TN
*Corresponding author:saravananjan4@gmail.com

ABSTRACT
Currently, countries around the world, especially developing countries, face serious environmental
pollution from toxic effluents. This is one of the utmost concerns to society, since it puts people's lives
at stake once the pollutants cross their respective threshold limits. Toxic effluent gases such as CO, NO,
SO, etc. are the most common pollutants. The most dangerous among them is CO, a colourless, odourless,
tasteless yet harmful pollutant generated by the incomplete burning of fuels, which seriously damages the
human respiratory system. Each of the gases mentioned affects human life by damaging vital organs; hence
the amount of these toxic effluents in the atmosphere must be monitored and regulated. Researchers around
the world find it difficult, and consider it a major challenge, to produce a reliable, cost-effective sensor,
and the demand for sensors that detect these toxic gases in the environment is huge today. The development
of new functionality in materials is always fascinating for researchers, and metal oxides are well known for
their sensing performance and are frequently used for monitoring the environment. Among metal oxide
materials, nano-structured ferrites are one such class. Ferrite nanoparticles have found immense application
in all fields of science and technology over the years and play a vital role in the advancement of the human
lifestyle in the current century; among these applications, gas sensing is a vital one. Researchers have
broadly studied the gas sensing applications of ferrite nanoparticles over the last few decades for the early
detection of, and warning about, various toxic, harmful, flammable and explosive gases. This article is
intended to provide an overview of the latest developments in, and the future of, ferrite nanoparticle based
gas sensors for protecting the environment.

P-38
RECENT TRENDS IN LASER TECHNOLOGY

G.Vinitha
III semester, Dept. of Science and Humanities, Rayalaseema University, Kurnool.
Corresponding author:vinithagan20@gmail.com

ABSTRACT
Over the last fifty years, laser technology has made great progress, and its numerous applications have made
it essential in everyday life. Still, this technology is open to numerous further developments. There is a
particular focus in the field of medicine, for diagnosis, for targeted therapies, and as a research
tool in biology, while its use is now well established in ophthalmologic and dermatologic
treatments and in surgery. Laser microdissection technology emerged in the late 1990s with the development of
devices able to perform fine dissections of biological tissues using a laser beam. Laser-based
microdissection provides a speedy, precise method of removing targeted cells
from complex biological tissues, even at the nanoscale level, in the field of cancer research. This article
gives an overview of recent innovations in laser technology.

INTRODUCTION
"LASER" is an acronym for "Light Amplification by Stimulated Emission of Radiation". The
first laser was made in 1960 by Theodore H. Maiman at Hughes Research Laboratories. Lasers
are used in barcode scanners, optical disk drives, laser printers, fiber-optic communication,
semiconducting chip manufacturing (photolithography), cutting and welding hard materials,
military devices for marking targets, measuring range and speed. In the medical field Lasers are
used in surgery and skin treatments. It is a very intense beam of light or infrared radiation. It has
the characteristics of Monochromaticity, coherence and focus to a narrow path over greater
distances.

ADVANTAGES AND DISADVANTAGES


Laser light has a high information-carrying capacity and hence is used in the communication domain for
the transmission of information. It is free from electromagnetic interference, which is exploited
in optical wireless communication through free space for telecommunication and computer
networking, and it has very little signal leakage. Laser-based fiber optic cables are very light in
weight and hence are used in fiber optic communication systems. Laser radiation is less damaging
than X-rays and hence is widely used in the medical field for the treatment of cancers; it is used to
burn small tumours on the eye surface and also on tissue surfaces.
On the other hand, laser equipment is expensive, which increases the expenditure of patients requiring
laser-based treatments, and it is costly for doctors and hospital management to maintain. It increases the
complexity and duration of treatments based on laser devices or equipment. Lasers cannot be used in many
dental procedures, such as filling cavities between teeth. The laser beam is also delicate to handle in the
cutting process: a slight mistake in adjusting distance or temperature may burn the metal, and the beam is
harmful to human beings and may burn them on contact.
RECENT TRENDS
Lasers in the fashion industry: In the fashion industry, laser technology has brought
efficient and cost-effective methods for producing clothing. Laser cutting has been used to cut
thicker materials like acrylic plastic and metals, and it also provides finer cuts for
delicate and softer materials like thin fabrics without burning the material or reducing the quality
of the cut. Lasers are also used to engrave designs on thicker or stronger materials like leather and
even denim. The result is a more accurate and clear design without exposing the material to
unnecessary stress.

Lasers for agriculture: A laser land leveller is a machine equipped with a laser-guided drag
bucket; it is more effective and faster in ensuring a flat surface before sowing, which saves water
and reduces greenhouse gas emissions. Laser cutting machines for crops save time with
higher efficiency and low cost.
Lasers in the automotive world: Beyond xenon and LEDs, laser technology has allowed the
production of laser headlights for cars. Laser headlights are impressive because they are four
times brighter than the LED headlights presently seen on some vehicles, and they can be
made compact without reducing the range of visibility. A few automotive companies have already
begun integrating laser headlights into their new releases. For example, BMW has released the
new BMW 7 series, which includes laser diodes as the headlight source and provides up to
600 metres of visibility, far better than LED headlights.
Lasers in the household: Lasers are used not only in computers and cars but also in everyday
tasks that people perform at home. Smart dishwashers and washing machines are fitted with
sensors that use laser technology. These sensors can analyse the clothing and the type of stains,
allowing suitable water and detergent management, and determine whether an additional
rinse is needed based on how soapy the load is. Laser technology is also used in microwave ovens to
automatically adjust the cooking settings depending on the food being cooked.

CONTRIBUTED PAPERS FROM


ELECTRONICS

E-1
STUDY OF INTELLIGENT TESTBENCH FOR WORKLOAD CHARACTERIZATION
FOR VARIOUS MULTICORE EMBEDDED PROCESSORS

A.Gopinath1,2
1 Assistant Professor, DRBCCC Hindu College, Pattabiram, Chennai, India
2 Research Scholar, Bharathiar University, Coimbatore, India
Corresponding Author:gomathi_gopinath@yahoo.com

ABSTRACT
Owing to the technological advancement of complementary metal-oxide-semiconductor (CMOS) devices, high-speed ICs working at higher frequencies pose a challenge for energy-saving applications. Among these technological developments, multi-core processors play an important role and are frequently used in embedded electronic devices, yet intelligent power-saving calculation and study of these processors are lacking. In this paper, the study of dynamic frequency scaling and voltage scaling of these multi-core architectures offers very good scope for energy-saving applications. Energy Based Workload Characterization (EBWC) is used here to calculate the workloads of these multi-core architectures. The proposed test bench works on the principle of the Support Vector Machine (SVM) mechanism, and energy consumption is much reduced with this type of calculation. EBWC has been validated with different benchmarks such as Internet of Medical Things (IoMT), BEEBS and MiBench (MediaBench); the implementation of the SVM proves to play a vital role and reduces the energy by 20-25% of its original consumption.
INTRODUCTION
In today's world, multi-core heterogeneous architectures find their place in all technologies such as the Internet of Things, wearable and implantable devices, and mobile applications. These processors have almost replaced traditional processors owing to their speed, energy efficiency and performance. Even though these architectures have many advantages, knowledge about exploiting their features for effective usage remains limited. Since workloads increase day by day, consuming less energy while achieving high performance remains a real research challenge. To explore the full capability of multi-core heterogeneous architectures in accordance with the workloads, an intelligent computing framework is needed to design efficient embedded systems.
REVIEW WORK
Jian Chen and Lizy K. John proposed an energy-aware scheduling mechanism that employs fuzzy logic to calculate the suitability between programs and cores; the scheduling mechanism achieves up to 15.0% average reduction in energy-delay [1]. The same authors introduced a technique to leverage the inherent characteristics of a program for scheduling decisions in heterogeneous multi-core processors; the proposed system achieves a 24.5% reduction in energy-delay product, a 6.1% decrease in energy, and a 9.1% improvement in throughput [2]. Davide Cerotti et al. proposed two approaches to characterize multi-threaded applications in multi-core environments described by a limited number of parameters. They used a wider modelling technique that aims to characterize generalized performance metrics of a multi-core CPU with limited complexity, and showed how such parameters can be derived from measurements taken from executions of real applications on real multi-core machines [3]. Daniel Shelepov et al. introduced a Heterogeneity-Aware Signature-Supported (HASS) scheduling algorithm that performs the mapping using per-thread architectural signatures, which are compact summaries of threads' architectural properties collected offline. The resulting algorithm does not depend on dynamic profiling and is able to achieve performance comparable to the best (oracle) static assignment. Contrary to expectations, the lack of phase awareness in HASS worked to its advantage, because it avoided many problems linked to phase changes that were uncovered by their implementation of an IPC-driven algorithm [4]. Antonio Pullini, Francesco Conti, Davide Rossi, Igor Loi, Michael Gautschi and Luca Benini showed that the adaptable architecture of Mia Wallace can be exploited to proficiently run convolutional neural networks, which give state-of-the-art results in visual, audio and signal classification and would be very attractive in IoT [5].
CONCLUSION
The EBWC mechanism is a visualization and computing framework that exposes the workload-performance relationship of multi-core heterogeneous architectures to the users. It has been tested with different families of ARM architectures. The intelligent framework with its prediction mechanism achieves an energy-consumption reduction of 20-25%.
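As an illustration of the idea (a minimal sketch rather than the authors' implementation), an SVM can map measured workload features, here assumed to be instructions per cycle, cache miss rate and memory bandwidth, to a DVFS operating point; the feature set, training data and scikit-learn usage below are illustrative assumptions.

# Minimal sketch: SVM-based mapping of workload features to a DVFS level.
# The features and training data are synthetic placeholders, not EBWC data.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Each row: [IPC, cache_miss_rate, mem_bandwidth_GBps] for one workload window.
X = np.array([
    [0.4, 0.20, 6.0],   # memory-bound window
    [1.8, 0.02, 1.0],   # compute-bound window
    [0.5, 0.18, 5.5],
    [1.6, 0.03, 1.2],
])
y = np.array([0, 1, 0, 1])   # 0 = low frequency/voltage level, 1 = high

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
model.fit(X, y)

new_window = np.array([[0.45, 0.19, 5.8]])   # a freshly characterized window
print("suggested DVFS level:", model.predict(new_window)[0])

A classifier of this kind would run alongside the benchmark workloads (IoMT, BEEBS, MiBench) to choose frequency and voltage settings per workload phase.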

REFERENCES
[1] Jian Chen and Lizy K. John, "Energy-Aware Application Scheduling on a Heterogeneous Multi-core System," IEEE International Symposium, 30th September 2008.
[2] Jian Chen and Lizy K. John, "Efficient Program Scheduling for Heterogeneous Multi-core Processors," Design Automation Conference, IEEE, 2009.
[3] Davide Cerotti, Marco Gribaudo, Mauro Iacono, Pietro Piazzolla, "Workload Characterization of Multithreaded Applications on Multicore Architectures," 28th European Conference on Modeling and Simulation.
[4] Daniel Shelepov, Juan Carlos Saez Alcaide, Stacey Jeffery, "HASS: A Scheduler for Heterogeneous Multicore Systems," ResearchGate.
[5] Antonio Pullini, Francesco Conti, Davide Rossi, Igor Loi, Michael Gautschi, Luca Benini, "Heterogeneous Multi-core System-on-Chip for Energy Efficient Brain Inspired Computing," IEEE Transactions on Circuits and Systems, 2017.
E-2
INTELLIGENT CONTROL OF ELECTRICAL SYSTEM FOR ENERGY
CONSERVATION USING PIR SENSOR

R.Vajubunnisa Begum1,*, H. Jasmin 2, H B Ayesha 3


1 Associate Professor, JBAS College for Women, Chennai
2 Assistant Professor, JBAS College for Women, Chennai
3 II Year Electronics and Communication Science, JBAS College for Women, Chennai
Corresponding Author: *vaju6666@gmail.com

ABSTRACT
Energy conservation and considerable savings can be achieved by installing the project "Intelligent Control of Electrical System for Energy Conservation Using PIR Sensor". It is rare to see fans switched off in unoccupied classrooms, offices and homes; they often run for many hours, by which a considerable amount of energy gets wasted. Using a PIR (Passive Infrared) sensor, the fans can be automatically switched on or off with the help of an Arduino microcontroller.

INTRODUCTION
Energy conservation measures are taken to decrease the usage of energy by using less of an energy service. Previous research suggests that there always exists a gap between the generation of energy and the energy demand. A survey conducted by the Ministry of Power found that improving the efficiency of the end user is essential (Biswajit Biswas et al., 2013). The application of energy-conservation technology will lead to energy savings, which effectively amounts to growing the generation of energy from the available sources (Pradeep H. Zunake and Swati S. More, 2012).

BLOCK DIAGRAM

PROPOSED MODEL
The automated switching was implemented as a real-time application. Two fans from the Department of Electronic Science laboratory of our college were selected as the controlled appliances. A separate energy meter was installed to measure the energy consumed by these two fans before incorporating the PIR sensor. The PIR sensor was then interfaced with an Arduino board, which automates the switching of the fans through a relay. The readings taken before installing our energy-conservation system were compared with the readings taken after installing it, to show the energy conserved. It was found that about 50% of the energy is saved after implementing the "Intelligent Control of Electrical System Using PIR Sensor" technology.
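The switching logic itself can be sketched as follows; this is an illustrative Python simulation of the behaviour, not the Arduino sketch used in the project, and read_pir(), set_relay() and the 60-second hold time are hypothetical stand-ins.

# Sketch of the occupancy-based fan switching logic (simulation only).
import random
import time

HOLD_SECONDS = 60   # assumed time the fans stay on after the last motion event

def read_pir() -> bool:
    """Hypothetical stand-in for the PIR digital output (random occupancy here)."""
    return random.random() < 0.1

def set_relay(on: bool) -> None:
    """Hypothetical stand-in for the relay that switches the two fans."""
    print("fans", "ON" if on else "OFF")

def control_loop(polls: int = 1000) -> None:
    last_motion, fan_on = 0.0, False
    for _ in range(polls):
        now = time.time()
        if read_pir():
            last_motion = now
            if not fan_on:
                set_relay(True)          # room occupied: switch the fans on
                fan_on = True
        elif fan_on and now - last_motion > HOLD_SECONDS:
            set_relay(False)             # no motion for HOLD_SECONDS: switch off
            fan_on = False
        time.sleep(0.2)                  # poll the sensor a few times per second

if __name__ == "__main__":
    control_loop(100)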
GRAPH REPRESENTING DAY WISE ANALYSES

GRAPH REPRESENTING ENERGY CONSERVATION

Photograph of the energy meter Photograph of the circuit

CONCLUSION
The main goal of this research project is to save electricity from being wasted by automatically switching on only the appliances that are required, based on occupancy, instead of blindly switching on all the appliances. It was found that about 50% of the energy could be saved using the PIR-based energy-conservation system. Implementing this idea in the education sector would yield considerable energy conservation.
In future, further research could be carried out using a high-end MEMS thermal sensor or microbolometer. Unlike pyroelectric human-presence sensors that rely on motion detection, a thermal sensor can detect the presence of stationary humans by sensing body heat.

REFERENCES
[1] Biswajit Biswas, Sujoy Mukherjee and Aritra Ghosh, "Conservation of energy conservation in campus lighting in an Institution", International Journal of Modern Engineering Research (IJMER), Volume 3, Issue 4, August 2013.
[2] Pradeep H. Zunake and Swati S. More, "Energy Conservation in an Institutional Campus: A Case Study", International Journal of Advances in Engineering and Technology, Volume 4, Issue 2, September 2012.
[3] Abishek N. Vaghela, Bhavin D. Gajjan, Subhash J. Patel, "Automatic switch using PIR sensor", International Journal of Engineering Development and Research (IJEDR), Volume 5, Issue 1, 2017.

E-3
WIRELESS CHARGING CIRCUIT USING INDUCTIVE COUPLING METHOD
Daniel Dias, Ramanunni.O.R1, Dr.R.Raj Mohan2,*
Under Graduate1, Assistant Professor2
Department of ECS, A.M.Jain College, Chennai, Tamil Nadu, India.
*Corresponding author:rrajmohan.teacher28@gmail.com

ABSTRACT
The transmission of energy from one place to another without using wires is in great demand nowadays and is no longer a futuristic idea. Conventional energy transfer uses wires and cables, but various technologies can be used to build wireless transmission. There are different methods of wireless charging, such as resonant coupling, inductive coupling and radio coupling. Inductive coupling is used in the present work, where the primary and secondary coils are not connected by wires and the energy transfer is due to mutual induction. The proposed work contains two sections, the transmitter section and the receiver section. The transmitter coil converts the DC power to a high-frequency AC signal, and the receiver coil picks up the power as an induced AC signal.

In this module of the wireless battery charger, two circuits have been used. The first is the transmitter circuit, a transformer-like circuit used to produce the wirelessly transferred voltage; it consists of a DC source and an oscillator circuit. When DC power is given to the oscillator, current starts flowing through the two coils and the drain terminal of the transistor. The second circuit, called the receiver circuit, consists of a receiver coil, a rectifier circuit and a regulator. The receiver coil is placed at a short distance from the transmitting inductor so that AC power is induced in the coil.
The following advantages are identified from the proposed work. It provides a convenient and safe way of charging everyday devices and reduces the costs associated with maintaining mechanical connectors. It prevents corrosion caused by elements like oxygen, protects from water, and eliminates the sparks and debris associated with wired contacts. It provides protected connections, with no corrosion, when the electronics are fully enclosed away from water or oxygen in the atmosphere. The limitations are that it is applicable only over small distances, the field strength has to be kept under safety levels, the initial cost is high, and heat is wasted. However, by using ultra-thin coils, higher frequencies and optimised drive electronics, losses during transfer can be reduced. This yields an efficient result and allows compact chargers and receivers, facilitating their integration into mobile devices or batteries with minimal changes required.
Keywords: Wireless charging, Inductive coupling, charger.
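For intuition, the short calculation below evaluates two quantities that govern such an inductively coupled link, the resonant frequency of the LC tank driven by the oscillator and the open-circuit voltage induced in the receiver coil through the mutual inductance; all component values are illustrative assumptions, not measurements from this work.

# Illustrative calculation for an inductively coupled charging link.
import math

L_tx = 10e-6      # transmitter coil inductance (H), assumed
C_tx = 100e-9     # tank capacitance (F), assumed
L_rx = 10e-6      # receiver coil inductance (H), assumed
k = 0.3           # coupling coefficient between the coils, assumed
I_tx_peak = 0.5   # peak AC current in the transmitter coil (A), assumed

# The oscillator runs near the LC resonant frequency f0 = 1 / (2*pi*sqrt(L*C)).
f0 = 1.0 / (2 * math.pi * math.sqrt(L_tx * C_tx))

# Mutual inductance M = k*sqrt(L_tx*L_rx); induced EMF (peak) = 2*pi*f0*M*I_tx.
M = k * math.sqrt(L_tx * L_rx)
v_induced_peak = 2 * math.pi * f0 * M * I_tx_peak

print(f"resonant frequency ~ {f0 / 1e3:.1f} kHz")
print(f"induced receiver voltage (peak) ~ {v_induced_peak:.2f} V")

The induced AC voltage would then be rectified and regulated on the receiver side, as described above.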

E-4
ENERGY EFFICIENCY OPTIMIZATION FOR RF POWEREDWIRELESS SENSOR
NETWORKS
S.Thiyagarajana,* and P.Gowthamanb.
a Department of Electronics Science, Jaya College of Arts and Science, Thiruninravur
b Department of Electronics, Thin Film Centre, Erode Arts and Science College, Erode
*Corresponding Author: thiyagu_2@rediffmail.com

ABSTRACT
In order to extend the lifetime of wireless powered sensor networks (WPSNs) under their energy constraints, energy harvesting is an effective means. In recently proposed WPSNs, the sensors transmit information once they have scavenged energy from ambient radio frequency (RF) signals. This paper considers the simultaneous wireless transfer of information and power to a clustered sensor network through the non-orthogonal multiple access (NOMA) method, so that the cluster head node harvests energy wirelessly from the RF signals communicated by its cluster members and uses this harvested energy to offset the energy spent on data aggregation and forwarding. For such networks, attaining a good trade-off between the harvested energy and the energy used to transmit and decode information, i.e., high energy efficiency, is a crucial challenge. Hence, bio-inspired algorithms are applied for the optimal rate and power resource allocation in a clustered WSN, formulated as a non-convex constrained energy efficiency (EE) maximization problem in terms of the harvesting time and transmit powers.

Keywords: EE, NOMA, RF, WPSN, WSN.

1. INTRODUCTION
Wireless sensor networks are normally constituted of many low-cost and low-power homogeneous or heterogeneous sensors, which perform sensing, simple computation, and short-range wireless communication. The lifetime of wireless sensor networks is restricted due to limitations in energy resources and in the accessibility of the actual sensors [1]. Energy harvesting has appeared as a significant technique for powering green, self-sufficient wireless sensors, in which the energy acquired from intentional or ambient sources is gathered to refill the batteries. Specifically, radio frequency (RF) energy harvesting is more adaptable and sustainable than solar or wind energy harvesting, since the RF signals radiated by ambient transmitters are steadily obtainable. Several works have explored RF signals for concurrent wireless information transmission (WIT) and wireless energy transfer (WET) [2]. This effort focuses on the essential trade-off between the attainable throughput and the harvested energy [3]. Other efforts target wireless powered sensor networks (WPSNs), which adopt WET within traditional wireless communication systems.
The EE maximization problem for time-division WPSNs has been investigated [4], and a Difference of Convex functions (DC) programming based iterative solution was formulated to solve the EE-maximization problem, which is intrinsically non-convex because of the coupled power allocation among the sensors.

This paper pays attention to the following significant aspects. First, it studies energy-efficient resource allocation for NOMA-based WPSNs by formulating an EE maximization in terms of the harvesting time and transmit powers. Second, it studies two significant features in the optimization for EE maximization.

Figure 1 (a) Network System

Figure 1(b) Slot System

This paper treats a slotted WPSN which comprises one single-antenna cluster head (CHSA) and M single-antenna sensors. As shown in Fig. 1(a), the CHSA transfers power to the M sensors in the downlink WET phase and receives information signals from the M sensors in the uplink WIT phase. All sensors synchronize to the CHSA and function in half-duplex mode. Fig. 1(b) shows the slot structure of the WPSN: the WET and WIT phases occupy the time periods τ0 and τ1, respectively. The slot time is normalized to T = 1, so

τ0 + τ1 ≤ 1    (1)

2. WET MODULE
Each sensor has an infinite energy storage device. Let ei (i = 1, 2, …, M) denote the initial energy of sensor i; ei = 0 is set when no energy is left from the previous transmission. The accessible energy at sensor i (i = 1, 2, …, M) after finishing the WET phase is

Ei = ξi hi P0 τ0 + ei,  ∀i,    (2)

where the parameter ξi (0 < ξi < 1) denotes the energy conversion efficiency, which depends on the hardware design of sensor i, P0 denotes the transmit power of the CHSA, and hi is the channel gain of the downlink between the CHSA and sensor i.

3. WIT MODULE
The NOMA scheme differs from the "harvest-and-transmit" protocol [5][6] in that the sensors transmit their information to the CHSA simultaneously after receiving their harvested energies. The total power consumed by sensor i during the WIT period is bounded by its available power:

ηi Pi + Pic ≤ Ei / τ1,  ∀i,    (3)

where Pi denotes the transmit power of sensor i, and ηi and Pic are two positive constants related to the power amplifier and the circuit of sensor i, respectively.
Define τ = (τ0, τ1) and P = (P1, P2, …, PM). Then the attainable throughput of sensor i can be evaluated as

ri(τ, P) = τ1 W log2(1 + gi Pi / (Ii + σ²)),    (4)

where W is the system bandwidth, gi is the uplink channel gain of sensor i, Ii is the interference from the other sensors' NOMA signals that are decoded after sensor i, and σ² is the noise variance at the CHSA.

To assure the QoS of the sensors, a QoS constraint is fixed in the form of a minimum required rate Ri > 0 for sensor i:

ri(τ, P) ≥ Ri,  ∀i.    (5)
4. SIMULATION
The simulations are performed for a WPSN with a CHSA and four sensors in order to verify the analysis. The distance between sensor i (i = 1, 2, 3, 4) and the CHSA is set to di = 2.5 i (unit: meter). Assuming that channel reciprocity is preserved for the downlink and uplink of sensor i, hi and gi are set to 10^-1 di^-2. The simulation parameters are set to values describing standard WPSN scenarios [3]:
Parameter   Value      Parameter   Value
M           4          Pc          500 mW
W           20 kHz     Pi          1 W
σ²          -110 dB    Pic         10 mW
ξi          0.9        Ri          2 kbits
Pmax        10 W       ηi          1
Table 1: Settings of the WPSN

Algorithm   Parameter settings
PSO         S = 200, ω = 1, c1 = 2, c2 = 2, Vmax = 10^-3, N_Iter = 300
FPO         n = 200, p = 0.8, N_Iter = 300, β = 1.5, Lcoeff = 0.01
Table 2: Settings of the algorithms

The EE corresponding to the best particle (i.e., x̂b(t)) at each iteration is measured for both algorithms, as shown in Figure 2. Since the PSO- and FPO-based algorithms run on the CHSA, which typically has strong computing and storage capability, real-time resource allocation for WPSNs is practicable.

Figure 2 Performance comparisons of algorithms for EE Maximization
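To make the optimization loop concrete, the following is a simplified sketch of a PSO search over (τ0, τ1, P1..PM); the energy-efficiency objective below ignores inter-sensor interference and handles the constraints with a crude penalty, so it is an illustrative assumption rather than a reproduction of the exact EE formulation or of the FPO variant.

# Simplified PSO sketch for the EE maximization (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
M = 4
W = 20e3                          # bandwidth (Hz), from Table 1
sigma2 = 10 ** (-110 / 10)        # noise power corresponding to -110 dB
d = 2.5 * np.arange(1, M + 1)     # sensor distances (m)
g = 1e-1 * d ** -2                # uplink channel gains, as in the settings above
h = g                             # channel reciprocity assumed
xi, eta, P0, Pc = 0.9, 1.0, 10.0, 10e-3

def energy_efficiency(x):
    """Toy EE objective over x = (tau0, tau1, P1..PM) with constraint penalties."""
    tau0, tau1, P = x[0], x[1], x[2:]
    if tau0 <= 0 or tau1 <= 0 or tau0 + tau1 > 1 or np.any(P <= 0):
        return -1e9                                  # violates the slot constraint (1)
    E = xi * h * P0 * tau0                           # harvested energy, cf. (2)
    if np.any(eta * P + Pc > E / tau1):
        return -1e9                                  # violates the power budget (3)
    rate = tau1 * W * np.log2(1 + g * P / sigma2)    # interference-free throughput
    return rate.sum() / (P0 * tau0 + tau1 * (eta * P + Pc).sum())

# Standard PSO loop over S particles; settings mirror Table 2.
S, iters, w, c1, c2, vmax = 200, 300, 1.0, 2.0, 2.0, 1e-3
dim = 2 + M
pos = np.empty((S, dim))
pos[:, 0] = rng.uniform(0.3, 0.8, S)           # tau0 candidates
pos[:, 1] = rng.uniform(0.05, 0.2, S)          # tau1 candidates
pos[:, 2:] = rng.uniform(1e-3, 0.05, (S, M))   # transmit powers (W)
vel = np.zeros((S, dim))
pbest = pos.copy()
pbest_val = np.array([energy_efficiency(p) for p in pos])
gbest = pbest[pbest_val.argmax()].copy()

for _ in range(iters):
    r1, r2 = rng.random((S, dim)), rng.random((S, dim))
    vel = np.clip(w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos), -vmax, vmax)
    pos = pos + vel
    vals = np.array([energy_efficiency(p) for p in pos])
    better = vals > pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[pbest_val.argmax()].copy()

print("best EE found:", pbest_val.max())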

5. CONCLUSION
In this paper, resource allocation for simultaneous wireless information and power transfer in clustered wireless sensor networks is studied, with the aim of determining the optimal data rates and allotted transmit powers such that the energy efficiency of data transmission in the WPSN is maximized. Taking the circuit power consumption and the RF energy-harvesting ability of the receivers into the objective function, the resource-allocation problem is deduced to be a non-convex optimization. At the same time, the receiver adapts its optimal power-splitting ratio in order to attain maximum energy efficiency. Simulation results show that the algorithms converge within a small number of iterations and are effective in refilling the energy of the sensor nodes and improving the energy efficiency.

REFERENCES
[1] F. Akyildiz, W. Su, et al., “Wireless Sensor Networks: A Survey,” Computer Networks, vol.
38, no. 4, pp. 393-422, 2002.
[2] X. Zhou, R. Zhang, and C. K. Ho, “Wireless information and power transfer: architecture
design and rate-energy tradeoff,” IEEE Trans. Commun., vol. 61, no. 11, pp. 4754-4767,
Nov. 2013.
[3] Thiagarajan S, P Gowthaman, M Venkatachalam, PSO and FPO based Optimization of
Energy Efficiency for RF powered Wireless Sensor Networks, International Journal of Pure
Applied Mathematics, Vol 119, pg no 2099-2110, 2018
[4] X. Lin, L. Huang, et al., “Energy-efficient resource allocation in TDMS based wireless
powered communication networks,” IEEE Commun. Lett., vol. 21, no. 4, pp. 861-864, Apr.
2017.
[5] H. Ju and R. Zhang, “Throughput maximization in wireless powered communication
networks,” IEEE Trans. Wireless Commun., vol. 13, no. 1, pp. 418-428 Jan. 2014.
[6] X. Lu, P. Wang, D. Niyato, D. I. Kim, and Z. Han, “Wireless networks with RF energy
harvesting: a contemporary survey,” IEEE Commun. Surv.Tuts., vol. 17, no. 2, pp. 757-789,
Second Quarter 2015.

E-5
NEURAL NETWORK BASED IMAGE RESTORATION TECHNIQUE

D.Beula1,* ,T.V.Gayathri2
1, 2 Assistant Professor, Department of ECS, NPSB College, Medavakkam, Chennai, TN.
*Corresponding author: npsbecs@gmail.com

ABSTRACT
This paper proposes an idea for image restoration using a neural network method, based on a Multilayer Perceptron (MLP). The MLP is trained with synthetic gray-level images of artificially degraded co-centered circles. The present approach differs from existing methods in that spatial relations are used, and these spatial relations are taken at different scales. This makes it easy for the neural network to establish spatial relations between pixels. Using the proposed model, a slight increase in the brightness and contrast of the resulting image is observed. The proposed work can be applied to medical Magnetic Resonance Imaging (MRI) by using a fuzzy classification technique. The advantage of this method is that it removes the need for prior knowledge about the noise existing in the images.

1. INTRODUCTION
The recovery of an image from its possibly degraded version is called image restoration. Existing methods usually require a priori knowledge of the degradation process to design a solution that can compensate for the degradation problems caused by motion blur, atmospheric turbulence, and optical diffraction [13]. The degradations may result from noise in the sensor, loss of focus, relative motion between object and camera, and random atmospheric turbulence [12]. Such noise sources in images are characterized by Gaussian-like distributions [11].

In general, restoration techniques are oriented towards the recovery of the real image by applying a restoration process to its degraded version [9, 1]. Some common methods for image restoration include the inverse filter, the Wiener filter, the moving-average filter, the parametric Wiener filter, the mean-squared-error filter, the band-pass filter and the singular value decomposition technique [9], as well as the regularization filter [1]. Recently, some of these methods have been modified in an attempt to improve their solutions and reduce the computational complexity [6]. Due to their wide use as tools for information processing, Artificial Neural Network (ANN) models have also been used to design new solutions to the image restoration problem. They present some features that may lead to better results in the image restoration process [14]. Such features are related to their plasticity and their parallel computing power, which have made them appropriate for applications in pattern recognition, signal processing, image processing, computer vision, and several other application areas. A number of different neural network architectures are well described in the computer science literature.

In this paper we present a novel neural network based multiscale restoration approach [4, 5]. The method uses a Multilayer Perceptron (MLP) algorithm [10], trained with a synthetic 8-bit gray-level image of artificially degraded co-centered circles of 256 x 256 pixels. In order to design the training set for the neural network, the artificially degraded image is submitted to a clustering step performed by a Kohonen neural network, using a threshold level of similarity for the existing neurons (cluster centers). The algorithm adds a new neuron and assigns the corresponding input vector as its weights when the existing neurons are not able to overcome the threshold level in the competition phase of the Kohonen neural network. This process leads to reduced training data sets, which vary according to the threshold level settings. The learning phase of the Multilayer Perceptron captures the inherent spatial relations of the degraded pixels and maps them to the non-degraded pixels. In the conducted experiments, the degradation effects are simulated by applying the degradation model in [9]: the image is first convolved with a low-pass Gaussian filter and then noise is added to it at a 1% occurrence rate. The degraded image data is provided as input to the MLP and the non-degraded image as the corresponding output in the supervised learning process. The main difference of the present approach from existing ones lies in the use of spatial relations taken from the vicinity of the considered pixel at different scales, which makes it possible for the neural network to establish spatial and contextual relations among the considered pixels in the image. This approach attempts to come up with a simple method that leads to an optimum solution to the problem without the need to provide a priori knowledge of the noise existing in the images. In the conducted experiments, the trained neural network is applied to indoor, outdoor, and satellite degraded images to verify the generalization performance on different image types. The results are compared to existing restoration approaches (focusing on the Wiener filter) by varying the similarity parameter in the Kohonen clusterization algorithm used to reduce the input vector.

2. THE PROPOSED APPROACH


In this paper we present a neural network based image restoration technique using local spatial
information acquired in a multiscale approach. The design of the method involves three phases:
a) image information extraction; b) data clusterization to reduce the amount of data to form the
training set; and c) the MLP training to capture a general inverse reconstruction model.

2.1. TRAINING DATA


The proposed approach assumes that the effects of the degradation sources mentioned before are universal in nature, that is, in general images are subject to the same degradation sources, and that they may be simulated on synthetic images. Thus, an 8-bit gray-level image of co-centered circles was created and submitted to an artificial degradation process as previously stated. Figure 1.a shows the degraded image on the left and the corresponding non-degraded version on the right.

Figure 1: Multiscale approach: a) overall view of the "linearization" process and b) illustration of the 7x7 window neighborhood subsampling method.

The training data set was created by sequentially extracting 3x3, 5x5 and 7x7 windows from the degraded image, centered at the corresponding position in the original non-degraded image. The 5x5 and 7x7 windows were subsampled to produce two further 3x3 windows, implementing the multiscale approach. The data of the three 3x3 windows were then gathered to form the input vector for training purposes. The pixel in the center of the corresponding 7x7 window in the non-degraded image was chosen to be the desired output in the training process. Together, the input vector and the desired output pixel formed a 28-member vector, which was sequentially concatenated to other vectors of the same nature to form the training set (see Figure 1). This process resulted in a very large training data set, which may imply a long time to train the MLP in the modelling process. Thus, the previously cited clusterization process using a Kohonen neural network was applied, in a data-mining-like approach, to reduce the amount of data for training purposes.
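To make the window construction concrete, the sketch below builds one 27-element multiscale input vector and its desired output pixel; the stride choices used for subsampling the 5x5 and 7x7 windows are assumptions for illustration, not necessarily the exact subsampling used by the authors.

# Sketch of the multiscale input-vector construction described above.
import numpy as np

def multiscale_vector(degraded: np.ndarray, clean: np.ndarray, r: int, c: int):
    """Build one training pair centred at pixel (r, c).

    Input : 3x3 window plus 3x3 subsamples of the 5x5 and 7x7 windows (27 values).
    Output: the central pixel of the corresponding 7x7 window in the clean image.
    """
    w3 = degraded[r - 1:r + 2, c - 1:c + 2]                # plain 3x3 window
    w5 = degraded[r - 2:r + 3, c - 2:c + 3][::2, ::2]      # 5x5 subsampled to 3x3
    w7 = degraded[r - 3:r + 4, c - 3:c + 4][::3, ::3]      # 7x7 subsampled to 3x3
    x = np.concatenate([w3.ravel(), w5.ravel(), w7.ravel()]).astype(np.float32)
    y = np.float32(clean[r, c])                            # desired output pixel
    return x, y

def build_training_set(degraded: np.ndarray, clean: np.ndarray):
    rows, cols = degraded.shape
    pairs = [multiscale_vector(degraded, clean, r, c)
             for r in range(3, rows - 3) for c in range(3, cols - 3)]
    X = np.stack([p[0] for p in pairs])
    Y = np.array([p[1] for p in pairs])
    return X, Y   # X: (N, 27), Y: (N,): together they form the 28-member vectors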

2.2. DATA MINING


The cited data-mining process consisted of designing a clustering algorithm that was implemented as a Kohonen neural network, with a Gaussian similarity metric. The idea was to establish a threshold limit on the competition phase of the learning process of the Kohonen network. The winning neuron had to overcome the threshold to have its weights updated; if it did not overcome this threshold, a new neuron was inserted in the one-dimensional lattice to represent the input vector, whose elements were assigned to the equivalent elements of the weight vector (see Figure 2).

Figure 2: Clustering process

In this work two similarity thresholds were used: 0.9 and 0.8 of a Gaussian transformation of the distance between the weight vector of each neuron and the input vector.

Image     Size       Image size   Similarity 80%   Similarity 90%
circle    506×506    256,036      11,318           2,886
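A minimal sketch of this data-reduction step is given below; it is a simplified stand-in for the modified Kohonen network, with the Gaussian similarity width and learning rate as assumed values.

# Sketch of the threshold-limited clustering used to shrink the training set.
import numpy as np

def reduce_training_set(X: np.ndarray, threshold: float = 0.8,
                        sigma: float = 50.0, lr: float = 0.1) -> np.ndarray:
    """Keep one cluster center per group of similar input vectors.

    Similarity is a Gaussian transform of the Euclidean distance; an input vector
    that does not reach `threshold` similarity to any center spawns a new neuron.
    """
    centers = [X[0].astype(np.float64)]
    for x in X[1:]:
        dist = np.linalg.norm(np.array(centers) - x, axis=1)
        sim = np.exp(-(dist ** 2) / (2 * sigma ** 2))   # Gaussian similarity
        best = int(np.argmax(sim))
        if sim[best] >= threshold:
            centers[best] += lr * (x - centers[best])   # winner update
        else:
            centers.append(x.astype(np.float64))        # novel vector: new neuron
    return np.array(centers)

Applied with thresholds of 0.8 and 0.9, such a pass would yield reduced training sets of the kind reported in the table above.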

2.3. TRAINING PROCESS


In the training process, the degraded image data was provided as input to the multilayer perceptron and the non-degraded image data as the corresponding desired output for the supervised learning process. The MLP was designed with only one hidden layer with 14 neurons, each neuron with a logistic sigmoidal activation function. The main objective of the present approach is to derive a simple image-restoration method based on an inverse degradation model that may lead to an optimum solution to the problem.
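Under the stated architecture (27 inputs, one hidden layer of 14 logistic-sigmoid neurons, one output pixel), a minimal sketch of the restoration network is shown below; the use of scikit-learn and its solver settings are assumptions for illustration.

# Sketch: one-hidden-layer MLP mapping the 27-element multiscale vector to the
# restored central pixel. Training settings here are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

def train_restoration_mlp(X: np.ndarray, Y: np.ndarray) -> MLPRegressor:
    """X: (N, 27) multiscale window vectors, Y: (N,) clean central pixels."""
    mlp = MLPRegressor(hidden_layer_sizes=(14,), activation="logistic",
                       solver="adam", max_iter=500, random_state=0)
    mlp.fit(X / 255.0, Y / 255.0)          # scale 8-bit gray levels to [0, 1]
    return mlp

def restore_pixel(mlp: MLPRegressor, x: np.ndarray) -> float:
    """Predict one restored pixel value, mapped back to the 0..255 range."""
    return float(np.clip(mlp.predict(x.reshape(1, -1) / 255.0)[0], 0, 1) * 255)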

3. EXPERIMENTS
In this section, we present the results of some experiments conducted with the proposed technique to restore degraded images. Different neural network architectures were trained and then applied to degraded versions of different images, such as the Lenna image and an Ikonos satellite image, subjected to the same noise sources used in the training image, and MR brain images from a real database containing multiple sclerosis lesions. The use of different image data aimed to verify the adequacy and robustness of the neural network approach to the image restoration problem as stated in [9]. A comparison of the proposed method and the Wiener filter was performed through a quantitative analysis by calculating image statistics representing the mean brightness and contrast: the image mean (μf) and the variance (vf). In addition, the standard deviation (α), the Mean Square Error (MSE), and the Signal-to-Noise Ratio (SNR) are calculated [12]. The intensity mean, variance and standard deviation are properties frequently used in image processing due to their relevance in characterizing the appearance of an image [9]. The mean is a measure of the average brightness and the variance is a measure of contrast. The standard deviation is the square root of the variance, which may be interpreted as a measure of image homogeneity.

Figure 3: Generalization tests with the Lenna image: a) original image; b) Gaussian degraded with 1% noise; c) restored image by a 3x3 Wiener filter; d) ANN result, multiscale approach, 80% similarity; and e) ANN result, multiscale approach, 90% similarity.
Figure 3 shows the Lenna image: a) without noise; b) artificially degraded using 1% noise; and after restoration by c) the Wiener method and by the neural network approach using the d) 80% and e) 90% vector-similarity thresholds in the clusterization phase. The quantitative analysis is given in Tables 2 and 3.
Table 2 brings the results for μf, vf and α for the original, degraded and restored images. We can see a rise in brightness and contrast for the images restored by the proposed approach, through the values obtained for μf and vf. A non-significant difference in the α values shows preservation of image homogeneity.
Table 2: Statistics of the images.
Lenna    OI          1% noise    WF          ANN 80%     ANN 90%
μf       124.152     124.136     123.685     120.896     125.219
vf       2295.997    2912.072    2318.443    2208.260    2655.231
α        47.916      53.963      48.150      46.992      51.528
The values obtained for the MSE and SNR between the original image and the degraded version, and between the original and restored images, are presented in Table 3. The better performance of the ANN restoration can be verified by its smaller MSE and larger SNR compared with the corresponding values for the Wiener restoration. A smaller MSE means that the restored image is closer to the original version.
Table 3: Filtering errors.
Lenna            MSE       %       SNR (dB)   %
D × OI           25.278    -       5.554      -
W × OI           11.141    -       12.670     -
ANN 80% × OI     10.290    7.63    13.360     5.44
ANN 90% × OI     10.708    3.88    13.015     2.72
3.1. VALIDATION OF THE PROPOSED TECHNIQUE
As a form of validation of the proposed restoration approach, a fuzzy classification method was applied to the noisy and restored images. The aim of this phase was to verify the benefits of the transformation in the image feature space due to the restoration process applied. It is important to notice that the proposed approach does not consider any prior knowledge of the degradation problems existing in the images. The performance of the classifier is examined by comparing its application to the noisy images and to their restored versions, using the kappa index [3] to assess the quality of the classifications. We used a prototype-based fuzzy classifier [7] that works by first establishing cluster centers for the samples of each class. This is performed by the original fuzzy c-means (FCM) algorithm [2], and a clusterization index [8] is used to calculate the optimal number of centers for each class. The resulting centers are then transformed into fuzzy prototypes by applying a similarity relation to each one. We used two slices, 14 for training and 15 for testing, in sequences T2 and F (see Figures 5a and 5e). Five classes were considered: background, cerebral spinal fluid (CSF), gray matter, white matter, and the Multiple Sclerosis (MS) lesion. The sample extraction was performed under the supervision of a radiologist, who established the ground truth for the classification and analysis purposes. Interestingly, even though the effects of the restoration process are not very evident under visual analysis, as can be seen in Figures 5 b) - d) and f) - h), the classification results were improved in some cases. The MS lesions were diagnosed in a clinical exam using the non-restored images: there are two of them in the right side of the brain in slice 14 and another two in the left side in slice 15. It is to be noticed that the ANN-based approach led to better classification performance. The explanation for this fact is related to the low-pass filtering property of the ANN-based approach: in the learning phase the input data consists of data from three different 3x3 windows that are linearly combined to produce the neuron's induced local field, or activation potential [10].

4. CONCLUSION
A neural network multiscale image restoration method was proposed for restoring degraded images based on a universal training data strategy. Experimental outcomes show the competence of the proposed approach. Quantitative analysis shows that the proposed method gives results similar to those obtained using the Wiener filter, reported in the literature as the most used method for image restoration. However, in most of the conducted experiments the neural network restored images presented a slight enhancement in both brightness and contrast, as observed by the increase in the mean and variance values of the images, while a reduction of the degradation was achieved, as may be observed in the signal-to-noise-ratio measurements. In addition, the restoration methods discussed and implemented in this paper were also compared by applying them to restore a real MR brain image prior to a supervised fuzzy image classification task. Very good classification outcomes were attained for the proposed restoration approach using the 80% and 90% vector-similarity thresholds. The multiscale approach performed better than the classical Wiener filter approach, with the most satisfactory kappa index reached for the image restored by the multiscale ANN-based method with the 80% vector-similarity threshold. Despite the results obtained, in all of the experiments presented in this paper the feature space consisted only of the pixels' spectral response in different MRI sequences. The noise reduction produced by the restoration methods, however, may not always guarantee that the accuracy of the classifier will be better; in fact, the image restoration process leads to the classification of different data. However, the compactness effect observed in the feature spaces may be taken as an advantage for the classifier addressed in this paper, since it needs fewer prototypes for each class, thus improving the performance of the classification in terms of computational time. A further advantage of the proposed method is that a neural network approach may be less computationally expensive than the Wiener filter when dealing with very large image datasets, in addition to the ease of implementation of ANN models, which may even be implemented directly in image-acquisition hardware.

REFERENCES
[1] M. Bertero, P. Boccacci, “Introduction to Inverse Problems in Imaging”, Philadelphia,
Bristol, 1998.
[2] J. Bezdek, R. Ehrlich, W. Full, FCM: The fuzzy c-means algorithm. Computers
& Geosciences, 10, No. 2-3, (1984), 491–263.
[3] Y.M. Bishop, S.E. Feinberg, P.W. Holland, “Discrete Multivariate Analysis:
Theory and Practice”, Cambridge: MIT Press, 1975.
[4] A.P.A. Castro, J.D.S. Silva, Neural Network-Based Multiscale Image Restoration Approach.
In: Proceeding on Electronic Imaging, Vol. 6497, San Jose, pp.3854–3859, 2007.
[5] A.P.A. Castro, J.D.S. Silva, Neural Network-Based Multiscale Image Restoration Approach.
In: Proceedings of IPDO, Miami, 2007.
[6] J. Chen, J. Benesty, Y. Huang, S. Doclo, New insights into the noise reduction wiener filter,
IEEE Trans. on Audio, Speech and Language Processing, 14, No.4 (2006), 1218–1234.
[7] I. Drummond, S. Sandri, A clustering-based possibilistic method for image classification,
Lecture Notes in Computer Science, 3171 (2004), 454–463.
[8] I. Drummond, S. Sandri, A clustering-based fuzzy classifier, Frontiers in Artificial
Intelligence and Applications, 131, No. 1 (2005), 247–254.
[9] R.C. Gonzalez, R.C. Woods, “Digital Image Processing”, New York, Addison
Wesley, 1992.
[10] S. Haykin, “Redes Neurais: Princípios e Prática”, Porto Alegre, Bookman, 2001.
[11] K.V.D. Heijden, “Image Based Measurement Systems”, New York, Wiley, 1994.
[12] A.K. Jain, “Fundamentals of Digital Image Processing”, New Jersey, Prentice
Hall, Inc, 1989.
[13] A.D. Kulkarni, “Computer Vision and Fuzzy-Neural Systems”, New Jersey,
Prentice Hall, 2001.
[14] Y.D. Wu, Q.Z. Zhu, S.X. Sun, H.Y. Zhang, Image restoration using variational
PDE-based neural network, Neurocomputing, 69, No. 16-18, (2006), 2364–2368.

E-6
ZIGBEE BASED DATA SECURED AND OPTIMAL CONDITION OF “WIRELESS
COMMUNICATION” USING ADVANCED E-D STANDARDS.

T. Shantha Kumar
Assistant Professor, Dept of Computer Science (Shift II),Alpha Arts & Science College,Chennai
Corresponding author:shanmailrajni@gmail.com

ABSTRACT
The role of communication in day to day life is extremely important.Communication are often of
two types which are wireless or wired. Basically wireless communication is usually preferred
over wired .But sometimes we'd like a secured wireless communication just in case of industries,
companies etc. This paper helps in enabling the user for transmitting data wirelessly through
ZigBee with encrypting data to supply security. In the paper it consists of two sections they're
transmitter and receiver .The data are often sent to microcontroller through pc by using software
called hyper terminal, this software is used for serial communication. The microcontroller after
receiving the info it forwards the info to the ZigBee transmitter which is connected to the
microcontroller. The data is encrypted and then transmitted to receiver. ZigBee transceiver does
data transmission. Original data is Plain text whereas the modified data by using operations so
that only authorized person can decode is called as Cipher text. The received data is decrypted
and is displayed on pc which needs some password to open the info .So by this the info can't be
hacked and is secured
Index Terms:Cryptography, Security, Wireless Network, ZIGBEE S2
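As a hedged illustration of the plain-text to cipher-text step (the exact cipher used in this work is not detailed here, so the AES-based Fernet scheme from the Python cryptography package is an assumption), the data could be encrypted before it is written to the ZigBee transmitter and decrypted after reception:

# Illustrative only: symmetric encryption of the payload around the ZigBee link.
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # shared secret provisioned on both ends
cipher = Fernet(key)

plain_text = b"sensor reading: 23.5 C"
cipher_text = cipher.encrypt(plain_text)   # this is what travels over the air
# ... cipher_text would be written to the ZigBee module's serial port here ...

recovered = cipher.decrypt(cipher_text)    # receiver side, using the same key
assert recovered == plain_text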

E-7
A REVIEW STUDY OF SMART HEALTH CARE SYSTEM IN IOT

R.Vajubunnisa Begum1,*, Dr.K. Dharmarajan2, H. Jasmin3


1 Associate Professor, JBAS College for Women, Chennai
2 Associate Professor, Vels Institute of Science Technology and Advanced Studies, Chennai
3 Assistant Professor, JBAS College for Women, Chennai
Corresponding Author: *vaju6666@gmail.com

ABSTRACT
The main goal of the proposed framework is to monitor physical parameters such as blood pressure, respiratory level, ECG, oxygen level and glucose level, with the help of non-invasive sensors (except for the glucose measurement), for a diabetic DCM patient. The system generates an emergency notification for patients, particularly those with serious illness.
INTRODUCTION
The old methods of DCM diagnosis lack specificity and effectiveness, making it problematic to obtain an early and precise diagnosis as well as treatment. Healthcare is one of the application domains of IoT, combining sensors, a gateway platform as processor and aggregator, and a data storage platform in the form of cloud computing, by means of various communication technologies.
OVERVIEW OF THE SYSTEM
The physiological parameters can be continuously observed and shown on smartphones or a PC through the internet. Our system will be beneficial to a diabetic DCM patient. The proposed smart monitoring system improves healthcare delivery by exchanging the patient's data over three advanced network protocols: Bluetooth 5.0, a Wi-Fi module and 4G LTE mobile connectivity. The system also stores the data in a cloud database, which holds all the patient's health parameters in a structured format using a MySQL server supported by the PHP programming language. Based on this information, the physician will prescribe medicine to the patient. The aim of this prototype model is to demonstrate and implement a structure which includes a slave and a master circuit. The slave circuit integrates different sensors to monitor the patient's data and sends it to an Arduino/Z-Uno via Bluetooth 5.0. The master circuit uses a Raspberry Pi 3 Model B+ processor as a gateway. This framework provides different protocols to support the communication, and the collected patient data is transmitted to the server for analysis. After analysis, doctors can provide timely treatment and prescribe medicine when the patient's health status is abnormal.
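As a hedged sketch of the gateway-to-server step (the endpoint URL, payload fields and the use of an HTTP API in front of the MySQL database are illustrative assumptions), the master circuit could forward one set of readings as follows:

# Illustrative sketch: the Raspberry Pi gateway forwards one set of vital-sign
# readings to the server that stores them in MySQL. URL and fields are hypothetical.
import json
import time
import requests

SERVER_URL = "http://example.com/api/vitals"   # hypothetical PHP endpoint

def push_reading(patient_id: str, bp_sys: int, bp_dia: int,
                 spo2: int, heart_rate: int, glucose: float) -> bool:
    payload = {
        "patient_id": patient_id,
        "timestamp": int(time.time()),
        "bp_systolic": bp_sys,
        "bp_diastolic": bp_dia,
        "spo2": spo2,
        "heart_rate": heart_rate,
        "glucose_mg_dl": glucose,
    }
    resp = requests.post(SERVER_URL, data=json.dumps(payload),
                         headers={"Content-Type": "application/json"}, timeout=5)
    return resp.status_code == 200   # server stores the row and acknowledges

# Example call for an abnormal reading the physician would be alerted about:
# push_reading("P-001", 150, 95, 88, 110, 210.0)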

ENHANCED FEATURES IN THE PROPOSED MODEL


Network Protocols
Increased data range and data speed, large memory capacity, long battery life, economical, good compatibility, reliability and better security.
Hardware Platform
Faster Wi-Fi speed, improved processing power with a 16.7% increase in efficiency, faster wired Ethernet, an on-board dual-band wireless antenna, and a 1.4 GHz CPU speed.

DEVELOPMENT OF SLAVE CIRCUIT AND MASTER CIRCUIT WITH NEW WIRELESS COMMUNICATION TECHNOLOGY

Fig.1 (a) Block Diagram of Slave Circuit (b) Block Diagram of Master Circuit

CONCLUSION
This system is cost-effective, resilient and energy-efficient, and achieves prolonged healthcare monitoring when compared with other monitoring systems. In this framework, several properties of the existing network protocols have been examined, such as data speed, data range, power consumption, message capacity, mobility and security. Several aspects of the hardware gateway platform, such as the network interfaces, CPU speed, memory capacity and cost, have also been analyzed.

REFERENCES
[1] R. Kumar et al. “An IoT Based patient monitoring system used raspberry pi”, IEEE, 31st
October 2016.
[2] Tatiana Huertas et al “Biomedical IoT Device for Self – Monitoring Applications”,
Springer Publications, 26th – 28th October 2016.
[3] Omar s. Alwan et al “Dedicated Real – time monitoring system for health care using
ZigBee”, IEEE, 27th August 2017.
[4] Kavita Jaiswal IIIT et al “An IoT Cloud based smart healthcare monitoring system using
Container based virtual environment in Edge device”, IEEE Proceedings, ICETIETR 2018.

E-8
REAL TIME FABRIC FLAW DETECTOR USING MICROCONTROLLER

A.Selvarasi1,*, M.Jeevitha2.
1 Assistant Professor, Department of Electronics and Communication Engineering, Alpha College of Engineering, Chennai.
2 III Yr B.Sc. Electronics & Communication Science, Alpha Arts & Science College
*Corresponding author: sriselvarasi@gmail.com

ABSTRACT
Textile industries form one of the fastest growing and most competitive markets worldwide and account for a major part of manufacturing, employment and business operations in several developing countries. Among the numerous failures faced by textile industries, fabric flaws constitute more than 85%, so extra effort is put into manufacturing fabrics of improved quality. Defects in textile fabric are a major threat to textile industries, so defect detection becomes an essential step in the manufacturing process. Traditionally, fault detection in fabrics relied only on a manual inspection strategy; this human visual inspection makes the process inefficient and time-consuming, and fault classification was based on visual perception. The methods employed so far were time-complex and less effective, so automation of this process through image processing techniques was introduced later. The image processing technique is used to spot the faults as simulation output, and an Arduino kit is used for fault detection. Here an automated fabric inspection system is proposed to enhance the accuracy of fabric defect detection. The system is built in MATLAB with image processing techniques, and the idea is implemented on an Arduino kit for real-time applications. A neural network with the back-propagation algorithm is used as the classifier for fault classification. Recently, the Arduino has become a viable target for the execution of algorithms suited to microcontroller-based processing applications; its architecture has allowed the technology to be used in several such applications encompassing many aspects of image processing. Whenever the software notices a defect in the fabric, it sends a signal to the microcontroller, which halts the system for a while so that the flawed fabric part can be removed. The buzzer is then switched on, the detected fault is displayed on the LCD, and the speaker gives a voice alert once the fault is noticed.
Index term-Arduino, Fabric Fault classification, Image processing, MATLAB

FABRIC FAULT DETECTION ANALYSIS


The first step of the proposed work is to acquire a cloth image from the textile industry using a digital camera; the image is saved on the computer in PNG format.
In the sample flawed fabric image, the color of a pixel is made up of red, green and blue (RGB) components and can be described by its pixel intensity values. An RGB color image needs a large amount of space to store, so it is transformed to gray values; after grayscale conversion an intensity image is obtained.
The pre-processing steps are used to eliminate the noise of the input image. Filtering in image processing is a procedure that cleans up the appearance and permits selective highlighting of certain information. The image obtained after filtering is shown in the figure. Histogram equalization is a technique for expanding the contrast by uniformly distributing the gray values, which enhances the quality of an image; it improves the contrast by transforming the values in an intensity image.
After the noise is removed from the fabric picture, the next step is the transformation of the noise-removed image into a binary image. Morphological operations are then performed on the binarized image to discover the structure of the fault; dilation is the procedure of broadening the boundaries of the foreground regions. The software module is connected to the hardware by a USB cable, and an Arduino is used in the hardware part.
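The pre-processing chain described above can be sketched as follows; this is a hedged Python/OpenCV equivalent of the MATLAB pipeline, with the filter type and threshold choices as illustrative assumptions.

# Illustrative Python/OpenCV version of the described pre-processing chain.
import cv2
import numpy as np

def preprocess_fabric(path: str) -> np.ndarray:
    img = cv2.imread(path)                              # PNG image from the camera
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)        # RGB to gray intensity image
    denoised = cv2.medianBlur(gray, 5)                  # noise removal (filtering)
    equalized = cv2.equalizeHist(denoised)              # histogram equalization
    _, binary = cv2.threshold(equalized, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)   # binarization
    kernel = np.ones((3, 3), np.uint8)
    dilated = cv2.dilate(binary, kernel, iterations=1)  # morphological dilation
    return dilated                                      # structure of the flaw

# The resulting binary map is what a back-propagation classifier would label, and
# a detected fault would trigger the Arduino (buzzer, LCD message, voice alert).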

FUTURE SCOPE
Further enhancement of this work would be extending the algorithm in such a way that it works efficiently on video images, which avoids unnecessary time delay in quality testing. In future, the efficiency and speed of detection can be improved further to achieve more productivity.

E-9
NANOMETROLOGY FOR NANOPARTICLES

T.Angeline
Asst. Professor, Department of Biotechnology,Alpha Arts and Science College, Porur, Chennai
Corresponding author: angelinethomas2007@gmail.com
Nanoparticles have gained significance in the latter half of the twentieth century for their uniqueness in size, properties and applications. The size of a nanoparticle gives it its unique features, so it is important to precisely assess the size and shape of the particle in order to determine its properties. Nanometrology is a sub-discipline of metrology, the science of measurement, applied at the nanoscale level, and includes physical and chemical measurements. Nanometrology has intertwined itself with nanotechnology in order to assess the complexity of synthesized nanoparticles with a greater level of precision. Precision is very important at the nanoscale level for the production of nanoparticles at industrial scale. Conventional characterization methods do not fulfill the needs of nanoparticle analysis, as they are incapable of analyzing the particles in different dimensions at the atomic level. Modern nanometrological methods offer information on the topography, morphology, crystal structure and chemical composition of the nanoparticles. Small Angle X-ray Scattering (SAXS) helps determine the shape, size, pore size and structure of the nanomaterials. X-ray absorption
spectroscopy (XAS) provides details of the local structural information of an element of interest.
Neutron diffraction measures the crystal structure along with surface magnetic properties.
Electron diffraction studies help in the determination of the crystal structure of the nanoparticles.
Scanning Electron Microscopy (SEM) will help in understanding the morphology, topology and
chemical composition of the nanoparticles. Transmission Electron Microscopy (TEM) with a
resolution of 0.5Å can analyze the different phases of the sample present and even defects in the
sample. It also gives detailed information on the crystal structure and size of the particles.
Atomic Force Microscopy (AFM) and Scanning Force Microscopy (SFM) offer information on the topology and mechanical properties, such as elasticity, plasticity, frictional characteristics and molecular interactions, of the nanoparticles. Thus the wide array of nanometrological techniques enables us to develop a deeper understanding of nanoparticles of any dimension.
Keywords: Small Angle X-ray Scattering (SAXS), X-ray Absorption Spectroscopy (XAS), Transmission Electron Microscopy (TEM), Atomic Force Microscopy (AFM), Scanning Force Microscopy (SFM)

E-10
A STUDY ON AWARENESS OF MATERIALS TO SAFE ENVIRONMENT WITH
REFERENCE TO HOUSEHOLD E WASTE

R. Selvi
Assistant Professor, KRMMC, Chennai, TN
Corresponding author: selvi_krmmc@yahoo.co.in
ABSTRACT
E-waste encompasses wastes generated from used electronic devices and household appliances. The term E-waste is loosely applied to consumer and business electronic equipment that is near or at the end of its useful life. It is waste consisting of any damaged or unwanted electrical or electronic appliance.
Waste generated from the following electronic equipment is generally referred to as household E-waste:
 IT and telecom equipment like computers, laptops, tablets and the systems used in BPO call centres.
 Large household appliances like washing machines, microwave ovens, refrigerators, televisions etc.
 Small household appliances like PCs, mobile phones, MP3 players, iPods, tablets etc.
 Consumer and lighting equipment like bulbs, CFLs and fluorescent tube lights.
 Toys, leisure and sports machines

E-WASTE PRODUCTION IN INDIA – An Overview


The Indian electronic waste industry is growing at a very rapid pace; the E-waste generated is expected to rise at a rate of 20% annually. With changing lifestyles, revolutions in information and communication technologies and increasing per capita income, India is the second largest electronic waste generator in Asia. The Indian PC industry is growing at a rate of 25% annually. The future projection of E-waste in India, as per the Department of Information Technology, is shown in the following figure.

EFFECT OF E-WASTE ON ENVIRONMENT


India has mainly two sorts of electronic waste market, the organized and the unorganized market; 90% of the electronic waste generated within the country ends up in the unorganized market. Electronic waste accounts for 70% of the overall toxic waste currently found in landfills, causing toxic chemical contamination of soil (toxic substances can leach into the land over time or be released into the atmosphere) and other natural resources. When E-waste is disposed of on the ground, the hazardous substances mix with the soil and lower its pH, making the soil acidic. The presence of metals like cadmium, mercury and lead causes pollution, resulting in severe environmental impacts such as global warming and depletion of the ozone layer. E-waste may also pollute ground water: heavy metals like cadmium and lead may leach from the waste and contaminate the ground water.

E WASTE AND IMPACT ON HEALTH


E-waste comprises several different substances and chemicals, most of which are toxic and which, if not properly handled and disposed of, may cause adverse impacts on human health. However, the classification of e-waste as hazardous or non-hazardous depends on the extent of hazardous materials present in it.

AWARENESS ON E GADGETS DISPOSAL


The segregation and disposal of E-waste from homes is a challenge, as awareness of how to dispose of it is lagging. The impact of e-waste has to be properly managed from home.
 E-waste should not be thrown into the common dustbin.
 Awareness should be created about the concept of Reduce, Reuse and Recycle.
 In certain gadgets the iron and copper present in the e-waste can be separated and sold.
 Proper disposal plants for e-waste are limited; hence collection centres can be formed area-wise.
 Institutions and organisations can take up the responsibility of collecting the e-waste from their neighbourhood.
 Mobile apps can be developed so that the public can reach collection centres easily.

SUGGESTIONS
 Avoid (reduce) buying unnecessary and unwanted electronic items.
 Choose brands which have a longer shelf life.
 If going for higher versions, find a suitable place to reuse the existing good ones.
 Concentrate on recycling of E-waste.
 Dispose of e-waste properly.

CONCLUSION
Rapid technological change, increased purchasing power, low initial cost, the influx of Chinese products, high obsolescence rates etc. have contributed to the great rise in E-waste generation. The Government should take stringent action, but that alone cannot remedy the situation. Hence there is a strong need for awareness campaigns among the public, schools and colleges, bringing out the measures that an individual can take to reduce E-waste.

E-11
E-WASTE MANAGEMENT

N. Muthulakshmia,*, N. Chandrakala b, K. Divyac


a,b Assistant Professor, JBAS College for Women, India
c II Year Electronics and Communication Science, JBAS College for Women, India
Corresponding author: *Muthulakshmiv1975@gmail.com

ABSTRACT
"Any device connected to a power source that no longer satisfies the current owner for the purpose for which it was created", such as a computer, television, cell phone, refrigerator or oven, constitutes E-waste. E-wastes are considered dangerous, as certain components of some electronic products contain materials that are hazardous. The aim of this paper is to create awareness, knowledge, perception and a responsible attitude among the public about the existence and risk management of E-waste, which is one of the rapidly growing problems of the world. This paper highlights the implementation of E-waste management and the harmful effects due to hazardous substances, and encourages reducing the usage of hazardous substances.
Keywords: E-waste, toxic hazards, E-waste management.

I INTRODUCTION
The information and communication revolution has brought massive changes to our lives, economies, industries and institutions. It has also led to the problem of massive amounts of hazardous waste and other wastes generated from electrical and electronic products. Rapid growth of technology and a high rate of obsolescence in the electronics industry have led to one of the fastest growing waste streams in the world, consisting of end-of-life electrical and electronic products which contain toxic materials. The collection and management of E-waste and the recycling process need to be properly regulated, as they may otherwise cause environmental damage and health problems.

II CATEGORIZATION OF E – WASTE
• Large household appliances (refrigerators/freezers, washing machines, dishwashers)
• Small household appliances (toasters, coffee makers, irons, hairdryers)
• Information technology (IT) and telecommunications tools (personal computers, telephones, mobile phones, laptops, printers, scanners, photocopiers)
• Consumer equipment (televisions, stereo equipment, electric toothbrushes)
• Lighting equipment (fluorescent lamps)
• Electrical and electronic tools (handheld drills, saws, screwdrivers)
• Medical equipment systems
• Monitoring and control instruments

III E – WASTE MANAGEMENT


The major constituents of e-waste management are e-waste collection, sorting and transportation, and e-waste recycling. In industries, e-waste management is carried out through waste minimization techniques involving record management, production process modification, volume reduction, and recovery and reuse.
Managing e-waste, both locally generated and internationally imported, is a major challenge for the government. Less than five percent (5%) of India's total electronic waste gets recycled due to the absence of proper infrastructure, legislation and framework, according to ASSOCHAM. Over 90 percent of the e-waste generated in India is managed by the unorganized sector and scrap dealers. Since the organized sector accounts for less than ten percent of the recycling business, there is huge scope for growth for recyclers.
Waste Electrical and Electronic Equipment must be taken into consideration not only by the Government but also by the public because of its hazardous material content. Currently, the main options for the treatment of electronic waste are reuse, remanufacturing and recycling, as well as incineration and landfilling. The existing e-waste recycling techniques are CRT recycling, glass-to-glass recycling, glass-to-lead recycling, metal recovery, pyro- and hydrometallurgical processing, biometallurgical processing and recovery of metals from mobile phones. Recycling of electronic waste is an important subject of waste treatment and also of recovery of valuable materials. Recycling of e-waste can be broadly divided into three steps:
1. Disassembly: singling out hazardous or valuable components for special treatment; an indispensable step in e-waste recycling.
2. Upgrading: using mechanical and metallurgical processing to upgrade the desirable material content.
3. Refining: recovered materials are retreated or purified using chemical (metallurgical) processes.
a. Pyrometallurgical: This process includes incineration, smelting in a plasma-arc or blast furnace, drossing, sintering, melting and reactions in the gas phase at high temperatures. In this process, the crushed scrap is burned in a furnace to remove plastics. The presence of halogenated flame retardants (HFR) in the smelter feed can lead to the formation of dioxins unless special installations and measures are present. Precious metals are obtained at the end of the process.
b. Hydrometallurgical: This method is more exact, more predictable and more easily controlled. It consists of a series of acid or caustic leaches of the solid material. The solutions are then separated and purified, with final treatment by electrorefining, chemical reduction or crystallization for metal recovery.
c. Biometallurgical: The understanding of the biochemical processes involved in the treatment of metals has been the subject of growing investigation. Biometallurgical processing can be classified into bioleaching and biosorption. In bioleaching studies, dust collected from the shredding of electronic scrap has been used for investigation.
IV CONCLUSION
E-waste is characterized by a complex chemical composition, and the pollution caused by its irregular management has degraded the environment. Motivated by the need to minimize the environmental effects of the generated e-waste, many technological changes have been developed, such as:
• Introduction of optical fibres (eliminating Cu from cabling) and rechargeable batteries (reducing Ni and Cd, although Li increases).
• LCD screens instead of CRT screens (which contain lead oxide and barium).
• Computer components and peripherals made of biodegradable material.

REFERENCES

[1] Johri, Rakesh, ed. E-waste: implications, regulations, and management in India and current
global best practices. TERI Press, 2008.
[2] Kollikkathara, Naushad, Huan Feng, and Eric Stern. "A purview of waste management evolution: Special emphasis on USA." Waste Management 29, no. 2 (2009): 974-985.
[3] www.ijoem.com/E-waste hazard: The impendingchallenge
[4] www.en.wikipedia.org/wiki/EnvironmentalHazard
[5] http://onlinecourses.nptel.ac.in.
NCRTABAS-2020 88

E-12
MOBILES AND OPTICAL COMMUNICATION

G. Keshav
II Semester, M.A, Rayalaseema University, Kurnool- 2, AP
Corresponding author:gudipadukeshav@gmail.com

ABSTRACT
A mobile device is a general term for any type of handheld computer. These devices are designed to be extremely portable, and they can often fit in your hand. Some mobile devices, like tablets, e-readers and smartphones, are powerful enough to do many of the same things you can do with a desktop or laptop PC.

TABLET COMPUTERS
Like laptops, tablet computers are intended to be portable. However, they provide a different computing experience. The most obvious difference is that tablet computers do not have keyboards or touchpads. For many people, a traditional computer like a desktop or laptop is still needed in order to use some programs. However, the convenience of a tablet computer means it may be ideal as a second computer.

E-READERS
E-readers are similar to tablet computers, except they are mainly designed for reading e-books (digital, downloadable books). Examples include the Amazon Kindle and the Kobo. Most e-readers use an e-ink display, which is easier to read than a traditional computer screen; you can even read in bright sunlight, just as if you were reading a regular book. You do not need an e-reader to read e-books; they can also be read on tablets, smartphones, laptops and desktops.

SMARTPHONES
A smartphone is a more powerful version of a traditional telephone. In addition to the same basic features of phone calls, voicemail and text messaging, smartphones can connect to the internet over Wi-Fi or a cellular network (which requires purchasing a monthly data plan). This means you can use a smartphone for the same things you would normally do on a computer, like checking your email, browsing the web or shopping online. Other standard features include a high-quality camera and the ability to play digital music and video files. For many people, a smartphone can actually replace electronics like an old laptop, a digital music player and a camera in the same device.

SOCIAL NETWORKING
Social networking sites, including Facebook and Twitter, allow users to rapidly generate content for people in their network to view. Instead of sending individual notes, social networking provides a constant stream of updates and information. These tools have taken communication a step further than email thanks to their ability to instantly communicate life and status updates to a whole network of people who can respond to and discuss such notes in real time.

VOIP AND VIDEO CHAT


Voice over Internet Protocol (VoIP) has replaced the need for landline telephones in many instances. These services provide instant phone communication over the internet and are often cheaper than fixed phone lines. They also provide the ability to conduct video chats so that you can see whom you are speaking with.

INTERNET
The World Wide Web, the Internet and email have revolutionized the way individuals communicate with one another. Instead of waiting days or weeks to receive information, we can now view it almost instantly. Email has fundamentally transformed how people share information and conduct business, based on the speed and flexibility it offers.

E-13
IOT AND WIRELESS NETWORKS

S. Shanthaa*, V. Savithria
a
Assistant Professor, Mar Gregorios College of Arts and Science, India
Corresponding author: *shanthamgc@gmail.com

ABSTRACT
The Internet of Things is a developing topic of technical, social and economic significance. It refers to combining computers, sensors and networks to monitor and regulate numerous devices with minimal human intervention, for example the assembly of devices such as home appliances, vehicles and healthcare devices using an embedded component like a microchip or microcontroller.

IoT has progressively brought a sea of technological changes into our daily lives, which in turn helps to make our lives simpler and more comfortable. Though IoT has plentiful benefits, there are some gaps at the governance and deployment levels. The key observations are that (1) there is no standard definition; (2) universal standardization is essential at the architectural level; (3) technologies vary from vendor to vendor, so they need to be interoperable; and (4) for improved global governance, we need to build standard protocols. Let us hope for an improved IoT in the future.

REFERENCES
1. Lianos, M. and Douglas, M. (2000) Dangerization and the End of Deviance: The Institutional
Environment. British Journal of Criminology, 40, 261-278.http://dx.doi.org/10.1093/bjc/40.2.261
2. Ferguson, T. (2002) Have Your Objects Call My Object. Harvard Business Review, June, 1-7.
3. Reinhardt, A. (2004) A Machine-to-Machine Internet of Things.

E-14
ARTIFICIAL INTELLIGENCE

R.Vajubunnisa Begum1,*, S. Divya2, S. Tamilarasi3


1
Associate Professor, JBAS College for Women, Chennai,
2, 3
III year Electronics and Communication Science, JBAS College for Women, Chennai,
Corresponding Author: *vaju6666@gmail.com

ABSTRACT
This paper gives a preview of what artificial intelligence is, how it works and the real-world applications of artificial intelligence. Artificial intelligence is the concept of giving machines the ability to learn. AI works by merging large amounts of data with iterative processing and intelligent algorithms. AI is a wide-ranging field of study that includes many models, methods and technologies. Some of them are:

Machine learning: It automates analytical model building.

Deep learning: It uses enormous neural networks with numerous layers of processing units.

Natural language processing (NLP): It is the capability of computers to study, recognize and produce human language, including speech. The next stage of NLP is natural language interaction, which permits humans to communicate with computers using normal, everyday language to perform tasks.

Graphical processing units: They are key to AI because they offer the heavy compute power that is essential for iterative processing. Training neural networks needs big data plus compute power.

The Internet of Things: It generates enormous amounts of data from connected devices, most of it unanalyzed. Automating models with AI will permit us to use more of it.

The goal of AI is to deliver software that can reason over inputs and explain its outputs. AI will deliver human-like interactions with software and offer decision support for specific tasks. Artificial intelligence and its technologies are a side of life that always interests and amazes us with novel ideas, topics, innovations and products; however, numerous important attempts are still needed to reach that level. This is not the end of AI; there is more to come. Who knows what AI can do for us in the future; maybe it will be an entire society of robots.

REFERENCES
[1] www.wikipedia.com
[2] https://computer.howstuffworks.com/quantum-computer1.htm
[3] https://www.forbes.com/sites/bernardmarr/2017/07/10/6-practical-examples-of-how-quantum
computing-will-change-our-world/#1e10035780c1

E-15
ARTIFICIAL INTELLIGENCE – FUTURE OF EVERYTHING

M.Karthick, D.S. Hemanth, J.SimonEbinezer, J.SyedAbuthahir


U.G Students, B.Sc Computer Science (Shift II), Alpha Arts & Science College, Porur,
Chennai.
Corresponding Author:dshemanth1999@gmail.com

ABSTRACT
AI includes machine learning as well as deep learning, and there has been significant progress in both fields. On one hand, machine learning algorithms are helping businesses evolve; on the other, speech recognition, image processing techniques and fingerprint patterns are taking the world by storm. We use gadgets that are intelligent and make our everyday tasks easy. For example, Alexa can remind you about your daily appointments, keep a check on your grocery list, play your favourite music when you need it, read the news and even play some brain games. There are self-driving cars: how does the car know about traffic signals, the traffic path, road congestion, when the vehicle should move slowly or fast, and so on? More serious business scenarios include spam filtering, product suggestions and personalization of feeds, dynamic pricing (for example during online ticket booking), optimization (for example, finding the best route to a destination), emotion analysis and much more.
Keywords: Reasoners, A.I Visionaries, Inference Mechanisms

E-16
MEMS AND NANO TECHNOLOGY

V. Savithria*, S. Shanthab
a,b
Assistant Professor, Mar Gregorios College of Arts and Science, Chennai, TN
Corresponding author: *shanthamgc@gmail.com

ABSTRACT
MEMS (Micro-Electro-Mechanical Systems) is a specialized field pertaining to technologies that are capable of miniaturizing existing sensor, actuator or system products. MEMS-based sensor products provide an interface that can sense, process and/or control the surrounding environment. These sensors are a category of devices that build very small electrical and mechanical components on one chip. Nanotechnology is a growing field that uses the unique properties of ultra-small-scale materials to advantage. A Micro-Electro-Mechanical System is a technique of merging electrical and mechanical components together on a single chip to build a system of small-scale dimensions. MEMS are essentially the integration of a number of micro-components on a single chip, which allows the microsystem to both sense and control the environment. They are made up of components between 1 and 100 micrometres in size. Micro/Nano-Electro-Mechanical Systems (MEMS/NEMS) need to be designed to perform their expected functions typically in the millisecond to picosecond range. MEMS/NEMS materials exhibit good mechanical and tribological properties at the nanoscale. The term used to define MEMS varies in different parts of the world: in the United States they are predominantly called MEMS, while other names are used elsewhere. MEMS is a fusion of micro-sensors, micro-actuators, micro-electronics and other technologies, which can be integrated onto a single microchip. Microelectronic integrated circuits can be thought of as the "brains" of a system that allow microsystems to sense and control the environment. Sensors collect information from the environment by evaluating mechanical, thermal, biological, chemical, optical and magnetic phenomena, and the electronics then process the data obtained from the sensors.
Professionals have concluded that MEMS and nanotechnology are two different labels. The most essential common benefits offered by these technologies include increased information capabilities and the miniaturization of systems.

REFERENCES
[1] Microsensors, Muller, R.S., Howe, R.T., Senturia, S.D., Smith, R.L., and White, R.M.
[Eds.], IEEE Press, New York, NY, 1991.
[2] Micromechanics and MEMS: Classic and Seminal Paper to 1990, Trimmer, W.S., IEEE
Press, New York, NY, 1997.
[3] Journal of Micro electro mechanical Systems
(http://www.ieee.org/pub_preview/mems_toc.html)
[4] Journal of Micromechanics and Micro engineering (http://www.iop.org/Journals/jm)
[5] Sensors and Actuators A (Physical)
(http://www.elsevier.com:80/inca/publications/store/5/0/4/1/0/3/)

E-17
SOLAR CELLS AND FUEL CELLS

Pravin S
III Year, Department of ECS, Alpha Arts and Science College, Chennai, TN
Corresponding author: smlpravin@gmail.com

ABSTRACT
A solar cell converts light into electrical energy by the photovoltaic effect. Solar cells do not utilize fuels to generate power and, unlike electric generators, they have no moving parts. A fuel cell, by contrast, uses the chemical energy of hydrogen or another fuel to produce electricity. Fuel cells can provide power for systems as large as a utility power station or as small as a laptop computer.

SOLAR CELLS
The photoelectric effect was discovered by Heinrich Hertz in 1887. In 1888, the Russian physicist Aleksandr Stoletov built the first cell based on the photoelectric effect. In 1905, Einstein explained the photoelectric effect in a landmark paper, for which he received the Nobel Prize in 1921. In 1946, the modern junction semiconductor solar cell was patented by Russell Ohl. Solar cells gained prominence with their incorporation into the Vanguard 1 satellite in 1958. Solar cells are grouped together to make a solar module, also called a solar panel. They can be used in solar water heaters and cookers. In addition to producing electric power, solar cells can be used in traffic, emergency and construction road signs, reducing the need for powered generators. Communication satellites require a lightweight power source that lasts for years and works in space; solar cells generate this electricity from solar energy without battery replacements or fuel.

FUEL CELLS
Fuel cells are composed of an anode, a cathode and an electrolyte membrane. They work by passing hydrogen through the anode and oxygen through the cathode. Hydrogen splits into electrons and protons at the anode; the protons pass through the electrolyte membrane, whereas the electrons are forced through the external circuit. In this way, a large amount of heat and electricity is produced. At the cathode, water molecules are formed by the combination of protons, electrons and oxygen. The fuel cell was first conceived by Sir William Robert Grove, a physicist, in 1839, by combining hydrogen and oxygen in the presence of an electrolyte; this resulted in the production of electricity and water. In 1889, using air and industrial coal gas, Ludwig Mond and Charles Langer built a working fuel cell. Fuel cell research in Germany since the 1920s paved the way for today's development of the carbonate cycle and solid oxide fuel cells. The three main applications of fuel cells are portable uses, transportation and stationary installations. In the future, fuel cells that use hydrogen may replace the petroleum fuels used today. Stationary fuel cells are the most powerful type and can provide high power to schools, homes, military bases, banks and airports. They produce very low amounts of greenhouse gases and do not produce the air pollutants that cause health problems. Fuel cells also have higher efficiency.
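The electrode process described above can be summarized with the standard textbook half-reactions for a hydrogen fuel cell; these equations are added here for illustration and are not taken from the abstract itself:

```latex
% Hydrogen fuel cell: standard half-reactions and overall reaction
\begin{align*}
\text{Anode:}   &\quad \mathrm{H_2 \;\rightarrow\; 2H^+ + 2e^-} \\
\text{Cathode:} &\quad \mathrm{\tfrac{1}{2}O_2 + 2H^+ + 2e^- \;\rightarrow\; H_2O} \\
\text{Overall:} &\quad \mathrm{H_2 + \tfrac{1}{2}O_2 \;\rightarrow\; H_2O} + \text{electricity} + \text{heat}
\end{align*}
```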

E-18
INTERNET OF THINGS AND WIRELESS NETWORKS

S. Vasanth1, P.V. Prasanth2


1,2
III Year, Department of ECS, Alpha Arts and Science College, Chennai, TN
Corresponding author: Vazavasanth16@gmail.com1, Prathiprashanth1197@gmail.com

ABSTRACT
The Internet of Things (IoT) is the internetworking of physical devices, automobiles and other objects that contain embedded systems with sensors, actuators and network connectivity, which allows them to gather and exchange data. IoT allows objects to be sensed and controlled remotely. IoT is a swiftly growing and promising technology which is becoming more and more present in our normal lives. The technology is an example of cyber-physical systems, which incorporate technologies such as smart grids, smart homes and smart cities. This follows from the rapid development of IoT technologies and the significant increase in the number of connected devices. In order to illustrate an IoT solution, a simple IoT demonstrator was implemented using current inexpensive hardware and efficient cloud software.
Wireless networks have seen an unparalleled rise in their size and number of users in recent years. This rise is attributed to the growth in the number of mobile computing devices. Besides, the amount of data that is handled by these wireless networks has grown in recent years. This growth in the flow of data over wireless networks is due to the rising popularity of cloud computing, which is built on the concept of Software as a Service, where all the data handling happens in the cloud. Even though there has been a rise in the usage of wireless networks, little has been done to increase their security, and they remain prone to attacks from a malicious user. One such wireless network that is extensively used but still prone to attacks is Wi-Fi. The Wi-Fi protocol (IEEE 802.11) has been upgraded several times over the years, but these upgrades have mostly increased the overall data rate of the communication; little has been done to improve the security of the protocol. As part of this presentation, we present two architectures that use Anomaly Behaviour Analysis to identify and categorize attacks on single-access-point and distributed Wi-Fi networks and then track the location of the attacker. The architectures classify attacks on the network by relating the number of different types of Wi-Fi frames in the WFlow with the Wi-Fi frames present in the attack types. The presented architectures use different methods to track the location of the attacker: the first uses a clustering methodology, while the second uses classification rules learnt through machine learning. The attack-detection modules of the IDS have no false positives or negatives even when the network has a high frame-drop rate. The clustering approach to tracking the location of the attacker performs well in static environments with 81% efficiency, while the rule-classification approach performs well in dynamic environments with 76% efficiency.
Keywords: Internet of Things, M2M communication, ARM mbed, Node-RED, Bluemix, 5G communication
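The abstract does not reveal the clustering algorithm used to localize the attacker; purely as a hedged illustration of the idea, the sketch below groups received-signal-strength (RSSI) readings of suspicious frames, as seen by several monitoring points, with k-means from scikit-learn. The monitor layout, RSSI values and the choice of k-means are all assumptions, not the authors' method.

```python
# Illustrative only: k-means over per-frame RSSI vectors observed at three monitors.
import numpy as np
from sklearn.cluster import KMeans

# Each row: RSSI (dBm) of one suspicious frame as seen by monitors M1, M2, M3 (toy data).
rssi = np.array([
    [-40, -70, -65],
    [-42, -72, -66],
    [-41, -69, -67],   # likely the same physical location
    [-75, -38, -60],
    [-74, -40, -59],   # a second location
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(rssi)
print("cluster of each frame:", kmeans.labels_)
print("cluster centres (approximate attacker signatures):")
print(kmeans.cluster_centers_)
```

Frames falling into the same cluster would then be attributed to the same transmitter position.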

REFERENCES
1. https://en.wikipedia.org/wiki/Internet_of_Things
2. http://vs.inf.ethz.ch/publ/papers/Internet-of-things.pdf
3. http://www.iot-a.eu
4. https://en.m.wikipedia.org/wiki/Wireless_network
5. https://commotionwireless.net/docs/cck/networking/types-of-wireless-networks/
6. https://www.sciencedirect.com/topics/computer-science/wireless-networks

CONTRIBUTED PAPERS FROM


COMPUTER SCIENCE

C-1
SMART GREENHOUSE SOLUTION BASED ON IOT AND
CLOUD COMPUTING TECHNOLOGIES

H.J. Felcia Bel, M.C.A., M.Phil.,


Assistant Professor, Alpha Arts and Science College, Chennai, TN
Corresponding author:felciabel@gmail.com

ABSTRACT
The Smart Greenhouse work is mainly about improving current agricultural practices by using IoT and cloud computing technologies for improved yield. It offers an Android app for a smart greenhouse which helps farmers carry out the work on a farm automatically, without the need for frequent manual inspections. The greenhouse, being a closed structure, protects the plants from extreme weather conditions, namely wind, ultraviolet radiation and pest attacks. The irrigation of the agricultural field is carried out by automatic drip irrigation, which works according to a set soil-moisture threshold so that the optimum amount of water is applied to the plants. Based on data from the soil health card, the proper quantities of nitrogen, phosphorus, potassium and other minerals can be applied using drip fertigation procedures. Appropriate water-management tanks are built, and they are filled with water after measuring the present water level using an ultrasonic sensor. Plants are also provided with light of the necessary wavelength during the night using grow lights. Temperature and air humidity are monitored by humidity and temperature sensors, and a fogger is used to control them. A tube well is controlled through a GSM module by missed call or SMS. Beehive boxes are installed for pollination; the boxes are observed using ultrasonic sensors to measure the honey, and mails are sent to the buyers when they are filled. The readings collected from the storage containers are uploaded to a cloud service (Google Drive) and can be given to an e-commerce company. What ultimately matters behind the millions of connected applications, devices and sites are the clients; therefore it is very important to give customers the best experience, and this is possible via the IoT cloud, the platform where the IoT data is stored and put to work for the clients.
The Smart Greenhouse Android app succeeds in detecting and handling the micro-climatic environment inside a greenhouse. From the greenhouse, the soil moisture, humidity and temperature values are sent effortlessly to the Android app. Predefined threshold values are set for every sensor, and depending on the sensor readings the water sprayer, cooling fan, rooftop and focus light are controlled; with a button press in the Android app the motors can be switched on or off. The app also has a datasheet of all horticulture plantations and season-wise safeguard material for monitoring and controlling. The purpose of this project is to design a simple, easy-to-install, user-friendly system to monitor and record the values of temperature, humidity, soil moisture and sunlight of the environment, which change continuously, and to control them in order to attain maximum plant growth and yield. The outcome shows that the conditions specified in the sensor database and the system are appropriate, and the test results confirm the proper working of the system.
Keywords: Cloud Computing; IoT; GSM
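The paper does not include code; the fragment below is only a minimal sketch of the threshold logic described above (sensor readings compared against preset thresholds to switch actuators). The sensor-reading and actuator functions, their names and the threshold values are hypothetical placeholders.

```python
# Hypothetical threshold-based control loop for the greenhouse actuators.
import time

THRESHOLDS = {"soil_moisture": 35.0,   # percent: below this, start drip irrigation
              "temperature": 32.0,     # deg C: above this, run the cooling fan
              "humidity": 40.0}        # percent: below this, run the fogger

def read_sensors():
    # Placeholder: in a real system these values come from the field sensors.
    return {"soil_moisture": 30.2, "temperature": 32.5, "humidity": 55.0}

def set_actuator(name, on):
    # Placeholder: would drive a relay / GPIO pin and report the state to the app.
    print(f"{name}: {'ON' if on else 'OFF'}")

while True:
    s = read_sensors()
    set_actuator("drip_irrigation", s["soil_moisture"] < THRESHOLDS["soil_moisture"])
    set_actuator("cooling_fan",     s["temperature"]   > THRESHOLDS["temperature"])
    set_actuator("fogger",          s["humidity"]      < THRESHOLDS["humidity"])
    time.sleep(60)   # re-check once a minute
```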

C-2
A STUDY TO EXPOSE THE COVARIANCE PLAUSIBILITY OF DISEASE
USING DATA MINING

Dr. Aneeshkumar A.S.


Head, Department of Information Systems Management,
Alpha Arts and Science College, Chennai, TN
Corresponding author: aneeshkumar.alpha@gmail.com

ABSTRACT
Data mining is a well-known technology for identifying appropriate factors and predicting the occurrence of related aspects for estimation and future planning in various fields. It is about finding insights which are statistically consistent, significant and previously unknown in the data. Healthcare management is one of the major users of data mining techniques for diagnosing the attributes of a variety of medical issues and for treatment planning. Disparity in liver enzymes is a significant physical condition which affects other related functions of the body. The influence of liver-disorder symptoms in diabetes patients is used here for a supportive study of the relational recognition of liver disorder and diabetes mellitus.
Keywords— CHAID; Accuracy; ROC; Correlation; Significance
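No dataset or code accompanies the abstract; as a hedged illustration of the kind of correlation and ROC analysis named in the keywords, the sketch below computes Pearson correlations between liver-enzyme attributes and a diabetes label, and an ROC AUC for one attribute, using made-up values and scikit-learn.

```python
# Toy illustration of correlation and ROC analysis between liver-enzyme
# attributes and a diabetes label (all numbers are invented).
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import roc_auc_score

sgpt     = np.array([22, 35, 60, 18, 75, 40, 55, 28])   # liver enzyme (U/L)
sgot     = np.array([25, 30, 58, 20, 70, 45, 52, 26])
diabetes = np.array([ 0,  0,  1,  0,  1,  1,  1,  0])   # 1 = diabetic

for name, values in [("SGPT", sgpt), ("SGOT", sgot)]:
    r, p = pearsonr(values, diabetes)
    print(f"{name}: Pearson r = {r:.2f}, p-value = {p:.3f}")

# ROC AUC when SGPT alone is used as a score for predicting diabetes.
print("ROC AUC (SGPT as predictor):", roc_auc_score(diabetes, sgpt))
```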

C-3
INGENIOUS LIGHTING SYSTEM (ILS) FOR SMART CITIES USING IoT

1R. Praveenkumar, 1M.Kamarajan, 2M.P.Prabakaran


1
Assistant professor, Electronics and Communication Engineering,
Easwari Engineering college, Ramapuram,Chennai-600089
2
Assistant professor, Electronics and Communication Engineering,
Dhaanish Ahmed College of Engineering, Chennai
Corresponding author: praveenkumar.r@eec.srmrmp.edu.in

ABSTRACT
The goal of the smart city relates to safer, more convenient and more comfortable operation, and better energy conservation. The street lamp is an essential part of urban infrastructure in the city and is closely related to safety and energy conservation. Currently, street lamps are controlled by manual management or light-perception control, so maintenance is time consuming; for suburban street lamps especially, it can take even longer than a few months. Light-perception control has limited flexibility, and current management systems have no remote or real-time controls. Current street lamps have only two states, on and off, and are highly energy consuming; moreover, the brightness of the lamps is not adjustable. Sometimes the brightness of street lamps can be reduced to cut energy consumption, so dynamic light-intensity adjustment according to current demand can be used to reduce energy consumption. This paper proposes a Raspberry Pi based ingenious lighting system (ILS) to improve energy efficiency and meet the above needs. Here, the ingenious lighting system is implemented with a Raspberry Pi to trace and adjust every node. The system consists of three sensors, namely an LDR, an IR sensor and a current sensor. These sensors are connected to each lamp node; if one node fails, the Raspberry Pi obtains the network from another node and provides it to the failed node. This project is highly automated and helps to trace the street-lamp status. The main application of this system is that it automatically finds a node failure and resolves it immediately by using another node's network.
Keywords: Energy saving, light intensity adjustment, Raspberry Pi, LDR sensor, IR sensor, current sensor, light dimmer.
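The paper targets a Raspberry Pi; as a rough, hypothetical sketch of the dimming idea (not the authors' implementation), the code below uses the RPi.GPIO library's software PWM to raise lamp brightness only when the LDR reports darkness and the IR sensor reports presence. The pin number, duty-cycle levels and the read_ldr()/read_ir() helpers are assumptions.

```python
# Hypothetical dimming logic for one lamp node (requires a Raspberry Pi with RPi.GPIO).
import time
import RPi.GPIO as GPIO

LAMP_PIN = 18                   # assumed PWM-capable GPIO pin driving the lamp dimmer

GPIO.setmode(GPIO.BCM)
GPIO.setup(LAMP_PIN, GPIO.OUT)
pwm = GPIO.PWM(LAMP_PIN, 100)   # 100 Hz software PWM
pwm.start(0)

def read_ldr():
    return True                 # placeholder: True means it is dark outside

def read_ir():
    return False                # placeholder: True means a vehicle/pedestrian detected

try:
    while True:
        if not read_ldr():      # daylight: lamp off
            duty = 0
        elif read_ir():         # dark and movement detected: full brightness
            duty = 100
        else:                   # dark but empty street: dim to save energy
            duty = 30
        pwm.ChangeDutyCycle(duty)
        time.sleep(1)
finally:
    pwm.stop()
    GPIO.cleanup()
```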

C-4
IMPLEMENTATION OF DENSITY BASED TRAFFIC LIGHT CONTROLLER
USING ARDUINO

Shrinitha S1,*, Devipriya E1, Suvalakshmi V1, R. Thiruvengadam2


Final year Under Graduate1, Assistant Professor2
Department of ECS, A.M.Jain College, Chennai, Tamil Nadu, India.
*Corresponding author: shrinithavasan@gmail.com,dp.elumalai04@gmail.com

ABSTRACT
In recent times there is a lot of scope for solutions to road traffic, as the challenges of and requirements for controlling it in major cities keep increasing. Here it has been decided to use an Arduino, programmed according to our requirements, because of its simplicity and economy, and sensing technology is used to measure the traffic density on a particular road. The system automatically senses the traffic density at the junction and changes the signal accordingly. An Arduino is used to develop the prototype model for this system.
The aim of this project is to save people's time: if there is no traffic at another signal, one should not have to wait for that signal. The system skips that signal by sensing the traffic density and moves on to the next one. The Arduino is the main part of this project, and an HC-SR04 ultrasonic sensor is connected to the Arduino to calculate distance. This distance between a vehicle and the signal tells us whether any vehicle is near the signal, and the traffic signals are controlled accordingly. The main task is to avoid the use of the delay function, because the ultrasonic sensors have to be read continuously while the signals are controlled at the same time, which would otherwise require delay calls. So a timer is used, with a library that repeatedly measures a period of time in microseconds; at the end of each period an interrupt function is called. In this function the proposed design reads from the sensors, and in the loop function it controls the traffic signals.
The working of the density-based traffic light controller using Arduino is divided into the three cases mentioned in the following steps.
1. If there is traffic at all the signals, the system works normally, controlling the signals one by one.
2. If there is no traffic near a signal, that signal is ignored by the system, which moves on to the next one.
3. If there is no traffic at any of the signals, the system stops at the current signal and only moves on to the next signal when there is traffic at another signal.
Keywords: Density, Sensor, Traffic signal, Arduino
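The Arduino sketch itself is not reproduced in the abstract; the fragment below is only a language-neutral restatement (written in Python for illustration) of the three-case logic listed above, assuming one ultrasonic distance reading per approach road and a hypothetical 50 cm "vehicle present" threshold.

```python
# Illustrative restatement of the density-based signal-selection logic.
VEHICLE_DISTANCE_CM = 50   # assumed: a reading below this means a vehicle is waiting

def next_signal(current, distances_cm):
    """Return the index of the next approach to be given green.

    distances_cm[i] is the ultrasonic reading for approach i.
    """
    n = len(distances_cm)
    occupied = [d < VEHICLE_DISTANCE_CM for d in distances_cm]
    if not any(occupied):
        return current                 # case 3: no traffic anywhere, stay put
    for step in range(1, n + 1):
        candidate = (current + step) % n
        if occupied[candidate]:
            return candidate           # cases 1 and 2: skip empty approaches
    return current

# Example: approaches 1 and 2 are empty, so green jumps from approach 0 to approach 3.
print(next_signal(0, [30, 120, 200, 25]))   # -> 3
```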

C-5
DESIGN AND IMPLEMENTATION OF GSM BASED BANK VAULT SECURITY
SYSTEM USING ARDUINO

K.Hemapriya1, Richa Suman Sharma1, Dr.M.Selva Kumar2


Under Graduate1, Assistant Professor2
Department of ECS, A.M.Jain College, Chennai, Tamil Nadu, India.
Corresponding author:mkarmegarajan@gmail.com

ABSTRACT
An automated security system is helpful where safety is an important issue. In most offices and banks, lockers are used to secure valuable documents or other expensive things, so locker safety is an important issue these days. Nowadays there is a demand for more efficient security systems to prevent access by unauthorized persons. Most security systems use a mechanical key or a single password. These traditional methods are not fully safe because keys can be stolen or duplicated and passwords can become known. For long, lockers have been the first choice to safeguard valuables for most people.
The conventional method has many drawbacks, such as:
• The bank employees must be present with the keys to open the bank vault.
• The keys can be duplicated or lost.
Due to the above drawbacks, a need has arisen to develop a more secure, reliable and faster technique which overcomes these drawbacks and provides full security to the customer. In the present work, a security system has been designed to protect the bank vault from thieves and unauthorized persons. This security system consists of two sensors and a GSM (Global System for Mobile communication) module; a motion sensor and a laser sensor are used. The GSM module is used to send a warning SMS to the dedicated phone numbers of more than one authorized person. When either of the two sensors detects something wrong, a warning SMS is automatically transmitted to a dedicated phone number by the GSM module. All objects with a temperature above absolute zero emit heat energy in the form of infrared radiation. PIR sensors do not detect or measure "heat"; instead, they detect the IR radiation emitted or reflected from an object. So when an unauthorized person tries to enter the vault room, the PIR sensor detects it and sends a signal to the Arduino, which then sends a warning SMS through the GSM module. A security system protects against intrusion and unauthorized access. Hence a more secure system has been proposed which employs SMS-based GSM technology, and the problem has been addressed with the help of a GSM-based bank vault security system using Arduino.
Keywords: Bank vault, Security, GSM, Arduino.
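As a hedged sketch only (the paper's Arduino firmware is not given), the fragment below mimics the alert flow on a host computer: when the simulated sensor check trips, a warning SMS is composed as standard GSM AT commands and written to a serial port assumed to be connected to the GSM module. The port name, phone numbers and sensor helper are placeholders, and the pyserial package is assumed to be installed.

```python
# Hypothetical PIR/laser -> SMS alert flow using a serial-attached GSM module.
import time
import serial   # pyserial, assumed installed: pip install pyserial

AUTHORIZED_NUMBERS = ["+910000000000", "+911111111111"]   # placeholders

def sensors_triggered():
    # Placeholder for the combined PIR and laser-sensor readings.
    return False

def send_sms(port, number, text):
    # Standard GSM AT command sequence for sending a text message.
    port.write(b'AT+CMGF=1\r')                     # text mode
    time.sleep(0.5)
    port.write(f'AT+CMGS="{number}"\r'.encode())
    time.sleep(0.5)
    port.write(text.encode() + b'\x1a')            # Ctrl+Z terminates the message

with serial.Serial("/dev/ttyUSB0", 9600, timeout=1) as gsm:   # assumed port
    while True:
        if sensors_triggered():
            for number in AUTHORIZED_NUMBERS:
                send_sms(gsm, number, "WARNING: movement detected in bank vault room")
            time.sleep(60)    # avoid flooding the numbers with repeated alerts
        time.sleep(0.2)
```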

C-6
DESIGN AND IMPLEMENTATION OF TV REMOTE CONTROLLED ROBOTIC
VEHICLE

G. Tony Santhosh1,*, S. Akshhaya2, M. Ishwarya3


1
Assistant Professor /ECE, Alpha College of Engineering, Chennai
2,3
II year ECE, Alpha College of Engineering, Chennai
*Corresponding author: tonysanthosh.g@gmail.com

ABSTRACT
This paper suggests a design to control a robotic vehicle using a standard TV remote. An IR sensor connected to the control unit on the robot detects the IR signals transmitted by the remote. This data is forwarded to the control unit, which moves the robot as desired. An 8051-series microcontroller is used as the control device in this design. The transmitter is a TV remote through which IR commands are sent. At the receiving end, these commands are used to drive the robot in every direction, and the movement is achieved by two motors interfaced to the microcontroller. RC5-coded data from the TV remote is received by the IR receiver and fed to the microcontroller. The code on the microcontroller refers to the RC5 code to produce the respective output, based on the input data, and operates the motors through a motor driver IC. The motors are interfaced with the control unit through the motor driver IC. Further, the paper describes an enhancement using DTMF technology: with this technology the robotic vehicle can be controlled by a mobile phone. DTMF has the benefit of a long communication range, compared with the line-of-sight communication of IR technology.
Index Terms - Microcontroller, TV remote, RC5, DTMF

C-7
DESIGN AND IMPLEMENTATION OF FIRE EXTINGUISHING ROBOT

Andrews Juben Ratchanyaraj1,*, N. Dharshinipriya2, D. Manju3, S. Saseedharan4


1
AP/ECE, Alpha College of Engineering, Chennai
2,3,4
II ECE Student, Alpha College of Engineering, Chennai
*Corresponding author:juben.ij@gmail.com

ABSTRACT
Detecting a fire and extinguishing it manually is a dangerous job for a firefighter and often risks that person's life. This paper aims to give a technical solution to the mentioned problem. We can use a mechanically designed robot that is capable of carrying out a complex series of actions automatically when programmed by a computer. The robot used as a fire extinguisher is a DTMF-tone-controlled robot with a small fire-extinguisher unit added on to it. The movement of this robot can be controlled using a mobile phone through DTMF tones. When it reaches the fire, the flame sensor detects the fire and signals the extinguisher unit to trigger the pump, which starts to spray the water. An Arduino UNO board (ATmega328P microcontroller) is programmed and used in this system.
Keywords: DTMF technology, DC motors, flame sensor, water pump, arduino.

C-8
MINE WORKERS SAFETY SYSTEM USING ZIGBEE

Ms.E.Niranjana1 , C. Kiran Kumari2


1*
Assistant Professor, Department of BME, Rajiv Gandhi College of Engineering &
Technology, Puducherry.
2*
UG Student, Department of BME, Rajiv Gandhi College of Engineering & Technology,
Puducherry.
*Corresponding author: c.kiran0808@gmail.com

ABSTRACT
In some atmospheres, underground mines have to be closed to the entry of workers. Here a real-time monitoring system is beneficial for safety measures. The recent advancement is wireless communication using Zigbee. Various sensors are used to sense the daily activities of underground coal mines. The sensors transmit the acquired data to a base station using wireless transmission. A microcontroller is employed for collecting data and making decisions, based on which the mine worker is informed through an alarm as well as a voice system. Hardware and software are employed for identification of the parameters using the sensors.
Keywords: Wireless, Mine worker, Zigbee.

C-9
REMOTE SENSING APPLICATIONS USING IMAGE PROCESSING

S. Lakshmi
Assistant Professor, Department of Computer Applications, Alpha Arts and Science College,
Chennai, TN. India
Corresponding author: lakaasc@gmail.com
ABSTRACT
Imaging systems, mainly those on board satellites, provide a repetitive and steady view of the Earth that has been utilized in numerous remote sensing applications such as monitoring city growth, deforestation and crops, weather prediction, land-use mapping and so on. For each application it is necessary to develop a specific methodology to extract information from the image data. To develop such a methodology it is necessary to identify a procedure, based on image processing techniques, that best suits the solution of the problem. In spite of the complexity of the applications, some basic techniques are common to most remote sensing applications, namely image registration, image fusion, image segmentation and classification. Hence, this paper aims to present an overview of the use of image processing techniques to solve a general problem in remote sensing applications.
INTRODUCTION
The main objective of processing a digital image is for information extraction and
enhancement of its visual quality in order to make it easy to understand by a human
analyst or autonomous machine perception. Examples of digital images are those
acquired by digital cameras, sensors on board satellites, medical equipment, industrial quality-control equipment, etc. Various image processing technologies have been
developed to deal with the challenges to extract information from remotely sensed data.
To build a remote sensing application, a processing procedure must be developed to
process the data and, therefore, generate the expected output. Before analyzing the
images, they have to be geometrically and radiometrically corrected. This processing
phase, called pre-processing, is essential mainly in applications where the images are
acquired from different sensors and at different times. After this phase, the images are
enhanced to facilitate the information extraction. Finally, the images are segmented and
classified to produce a digital thematic map. In this paper we provide a general view of how image processing techniques can be applied in remote sensing applications.
IMAGE PROCESSING TECHNIQUES
For each remote sensing application a specific processing methodology must be developed. The preprocessing phase consists of those operations that prepare the data for
subsequent analysis that attempts to correct or compensate for systematic errors. After
preprocessing is complete, the analyst may use enhancement techniques to enhance the
objects of interest as well as feature extraction techniques to reduce the data
dimensionality. Feature extraction attempts to extract the most useful information of the
data for further study. This phase reduces the number of variables that must be
examined, thereby saving time and resources. Enhancement operations are carried out to
improve the interpretability of the image by increasing apparent contrast among various
features of interest to facilitate the information extraction task. Common enhancement
and feature extraction techniques include contrast adjustments, band ratioing, spatial
filtering, image fusion, linear mixture model, principal component analysis and color
enhancement. After preprocessing and enhancement steps, the remotely sensed data are
subjected to quantitative analysis to assign individual pixels to specific ground cover
types or classes. This phase can be performed by analyzing the properties of individual pixels (per pixel) or of groups of pixels (regions). In the latter, the image is first segmented into a set of regions that can be described by a set of attributes (area, perimeter, texture,
color, statistical information). This set of attributes is used to characterize and identify
each object in the image. This operation of recognizing objects in the image is called
image classification, and it results in thematic maps as output. After classification, it is necessary to evaluate its accuracy by comparing the classes on the thematic map with areas of known identity on the ground (reference map).

A. Image Registration: Registration is the process which makes the pixels in two images
precisely coincide to the same points on the ground. The coordinates (x,y) of the input
image that will be registered are adjusted to the coordinate system of the reference image
so that the grids can be superimposed.
B. Image Fusion: In remote sensing applications, the increasing availability of digital
images in a variety of spatial resolution, and spectral bands provides strong motivations
to combine images with complementary information to obtain hybrid products of greater
quality. In order to accomplish this task, image fusion has been employed.
C. Image Segmentation: Image segmentation is a basic task in image analysis whereby
the image is partitioned into meaningful regions whose points have nearly the same
properties, e.g. grey levels, mean values or textural properties. Segmentation methods are basically based on two approaches: (1) dividing the image into a number of homogeneous regions, each having a unique label; (2) determining boundaries between homogeneous regions of different properties.
D. Image Classification: Image classification is the process used to produce thematic
maps from remote sensing imagery. A thematic map represents the Earth's surface objects (soil, vegetation, roofs, roads, buildings), and its construction implies that the themes or categories selected for the map are distinguishable within the image. Many factors can complicate this task, including topography, shadowing, atmospheric effects, similar spectral signatures, and others. In order to facilitate the discrimination among classes, the regions obtained in the segmentation process are described by attributes (spectral, geometric, textural) that aim to describe the objects of interest in the image.
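As a small, hedged illustration of the segmentation/classification step described above (not a method from the paper), the sketch below clusters the pixels of a synthetic multi-band image into a fixed number of spectral classes with k-means, producing a rough unsupervised thematic map. The band values and class count are assumptions.

```python
# Unsupervised per-pixel classification of a toy 3-band image with k-means.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
height, width, bands = 60, 80, 3
image = rng.random((height, width, bands))           # stand-in for a satellite scene

pixels = image.reshape(-1, bands)                     # one row per pixel
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(pixels)
thematic_map = labels.reshape(height, width)          # each pixel gets a class id 0..3

print("class counts:", np.bincount(thematic_map.ravel()))
```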

CONCLUSION
This paper presented a brief review about the general procedure employed to solve a
remote sensing application using image processing techniques.

REFERENCES
[1] Schowengerdt, R. A Remote Sensing Models and Methods for Image Processing,
London, Academic Press, 1997, p.521.
[2] Yang, X. H.; Jing, J. L.; Liu, G.; Hua, L. Z.; Ma, D. W. Fusion of multispectral and
panchromatic images using fuzzy rule. To be published. 2006.
[3] ENVI - Environment for Visualizing Images. Research System, Inc.
http://www.ittvis.com/ENVI.
[4] Hsieh, J. W.; Liao, H.M.; Fan,K.C.; Ko, M.T.; Hung.Y.P. Image registration using a new
edge-based approach. Computer Vision and Image Understanding, Elsevier Science Inc.
New York, NY, USA, v. 67, n. 2, p. 112–130, 1997.
[5] Fookes, C.; Maeder, A.; Sridharan, S.; Cook, J. Multi-Spectral Stereo Image Matching
using Mutual Information. 3D Data Processing, Visualization and Transmission, p. 961-
968, 2004.
[6] Jensen, J.R. Remote Sensing of the Environment: An Earth Resource Perspective. Upper
Saddle River, Prentice Hall, 2007, p.608. 2 ed.
[7] Fonseca, L.M.G., Costa,M.H.Korting, T.S., Castejon, E., Silva, F.C. Multitemporal
Image Registration based on Multiresolution Decomposition. RBC - Brazilian Journal of
Cartography. No 60/03, October, 2008.
[8] Leila M. G. Fonseca, Laércio M. Namikawa and Emiliano F. Castejon Image Processing
Division.

C-10
PERFORMANCE AND ANALYSIS OF VIDEO STREAMING OF SIGNALS IN
WIRELESS NETWORK TRANSMISSION

N.Praveen, R.Shyam Sundar, J.SyedAbuthahir


U.G Students, B.Sc Computer Science (Shift II), Alpha Arts & Science College, Porur,
Chennai.
Corresponding author: npraveen194@gmail.com
ABSTRACT
In this work, we describe efficient video communication for the wireless transmission of H.264/AVC medical ultrasound video over mobile WiMAX networks. The medical ultrasound video is encoded using error-resilient encoding in which the quantization levels are varied as a function of the diagnostic significance of each image region. We evaluate this concept using OPNET simulations of the mobile WiMAX medium, considering a number of physical-layer characteristics such as service prioritization classes, the different types of modulation performed, fading channel conditions, coding schemes and their types, and mobility. We encode the ultrasound videos at 4CIF (704×576) resolution to supply a high level of clarity. The video quality assessment is based on both objective and subjective evaluations.
Keywords:Flexible Macroblock Ordering (FMO), Diagnostically Relevant Encoding, 4G,
mobile health care.

C-11
NEURAL NETWORK TECHNIQUE IN COMBINATION WITH TRAINING
ALGORITHM AND WAVELETS

K. Indhu, R. Subashini
U.G Students, B.Sc Computer Science (Shift II), Alpha Arts & Science College, Chennai.
Corresponding author:indhujasreekumar1999@gmail.com

ABSTRACT
Wavelets are mathematical tools for hierarchically decomposing functions. The wavelet transform has proved to be a very useful tool for image processing in recent years. It allows a function to be described in terms of a coarse overall shape, plus details that range from broad to narrow. Advances in wavelet transforms and quantization methods have produced algorithms capable of surpassing existing image compression standards like the Joint Photographic Experts Group (JPEG) algorithm. For best performance in image compression, wavelet transforms require filters that combine a number of desirable properties, such as orthogonality and symmetry. Neural networks are a good alternative for solving many complex problems. In this paper, image compression is performed through a multilayer network, and a novel algorithm combining a neural network with different techniques is proposed. Experimental results show that this algorithm outperforms other coders in the literature, such as SPIHT, EZW and STW, in terms of simplicity and coding efficiency, by successively partitioning the wavelet coefficients in the space-frequency domain and sending them using adaptive decimal-to-binary conversion. This method is evaluated with parameters such as PSNR, MSE, BPP, CR and image size, with SPIHT used for the important features.
Keywords: PSNR, MSE, STW, SPIHT, EZW, Neural Network.
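The proposed neural-network coder is not reproduced here; the sketch below only illustrates the wavelet front end the abstract builds on, using the PyWavelets package (an assumption, not named in the paper) to decompose an image, discard small coefficients, and reconstruct it.

```python
# Wavelet decomposition, coefficient thresholding and reconstruction (PyWavelets).
import numpy as np
import pywt

rng = np.random.default_rng(0)
image = rng.random((128, 128))                        # stand-in for a real image

coeffs = pywt.wavedec2(image, wavelet="haar", level=3)
arr, slices = pywt.coeffs_to_array(coeffs)

threshold = 0.1 * np.max(np.abs(arr))
arr_compressed = pywt.threshold(arr, threshold, mode="hard")   # zero small details

kept = np.count_nonzero(arr_compressed) / arr.size
reconstructed = pywt.waverec2(
    pywt.array_to_coeffs(arr_compressed, slices, output_format="wavedec2"),
    wavelet="haar")

mse = np.mean((image - reconstructed[:128, :128]) ** 2)
print(f"kept {kept:.1%} of coefficients, MSE = {mse:.5f}")
```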

C-12
OPTIMIZED BINARIZATION TECHNIQUE FOR DENOISING DOCUMENT
IMAGES

N. Habibunnisha1, Dr. D. Nedumaran2


1,2
Central Instrumentation and Service Laboratory, University of Madras, Guindy Campus,
Chennai 600 025, Tamil Nadu, India
Corresponding author :dnmaran@gmail.com
ABSTRACT
Document image binarization is an important preprocessing step in Document Analysis and Recognition (DAR). There is still a need to find the best algorithm to binarize degraded document images. Usually, degraded old documents contain various types of noise such as non-uniform illumination, show-through, bleed-through, page stains, water spills, spots, margin noise, faded ink characters, etc. These noises affect further document-analysis processing such as feature extraction, classification and character recognition. Hence, these problems have been considered and the image quality improved through a simple binarization technique in order to reduce the various types of noise in degraded document images. First, the raw images were converted into greyscale images; then a median filter was used to reduce the unwanted background noise. Subsequently, local Otsu thresholding was applied to the filtered images. Finally, the morphological closing operation was applied to the binarized images. This technique reduced the processing steps and performed better. To evaluate the performance of the proposed technique, we used the Document Image Binarization Competition (DIBCO) 2014 dataset images. To measure the performance, metrics such as F-measure, Peak Signal-to-Noise Ratio (PSNR), Negative Rate Metric (NRM), Distance Reciprocal Distortion (DRD) and Misclassification Penalty Metric (MPM) were estimated and compared. The results of this study revealed that the proposed technique achieves higher values of F-measure and PSNR but lower values of NRM, DRD and MPM, which is a clear indication of the noise-removal quality of the proposed technique.
Key words: Document Image Analysis, Binarization, Thresholding, PSNR, NRM, DRD.
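A minimal OpenCV sketch of the pipeline described above (greyscale, median filter, Otsu thresholding, morphological closing). OpenCV's built-in Otsu call is global, so it stands in here for the local variant used in the paper; the file name and kernel sizes are assumptions.

```python
# Greyscale -> median filter -> Otsu threshold -> morphological closing (OpenCV).
import cv2
import numpy as np

gray = cv2.imread("degraded_document.png", cv2.IMREAD_GRAYSCALE)   # assumed input file

denoised = cv2.medianBlur(gray, 3)                    # remove salt-and-pepper background noise

# Global Otsu threshold (the paper applies Otsu locally; this is a simplification).
_, binary = cv2.threshold(denoised, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

kernel = np.ones((3, 3), np.uint8)
cleaned = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)   # fill small gaps in strokes

cv2.imwrite("binarized_document.png", cleaned)
```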

C-13
A LIGHTWEIGHT CRYPTOGRAPHIC ALGORITHM USING SHA – 3

Heerah D1*
1 Computer Science and Engineering Dept, PES University, Outer Ring
Road, Hosakerehalli, Bangalore, India
*Corresponding Author: heerah1493@gmail.com,

ABSTRACT
Today, the interconnection of computing devices embedded in everyday objects such as watches, refrigerators and cars via the internet has laid the ground for a lot of innovation, especially when it comes to monitoring healthcare data. In order to ensure the security of data transfer from one node to another in wireless communication, a lightweight cryptographic algorithm is designed by combining two techniques: the SHA-3 algorithm is used to generate a secure hash key, which is then used to encrypt the data to be sent from one node to another through AES encryption. The key generated by the SHA-3 algorithm is strong, and it is practically impossible for hackers to crack it, since SHA-3 uses the sponge construction technique. The key size is hugely reduced when data is passed through this algorithm; the reduction is from a few MB of data down to 256 bits or a little more. Encryption with the AES technique is usually high speed and consumes little RAM, which satisfies the lightweight property; hence this algorithm is suitable for wireless networks that have power and computational constraints.
Keywords: SHA-3 algorithm, AES technique
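A minimal sketch of the described key-derivation and encryption flow, assuming Python's standard hashlib for SHA3-256 and the pycryptodome package for AES; the EAX mode and the sample secret are illustrative choices made here, not taken from the paper.

```python
# SHA-3 derived key used for AES encryption between two nodes (illustrative sketch).
import hashlib
from Crypto.Cipher import AES   # pycryptodome, assumed installed

def derive_key(shared_secret: bytes) -> bytes:
    # SHA3-256 compresses an arbitrarily long secret down to a 256-bit key.
    return hashlib.sha3_256(shared_secret).digest()

def encrypt(plaintext: bytes, key: bytes):
    # AES in EAX mode gives confidentiality plus an authentication tag.
    cipher = AES.new(key, AES.MODE_EAX)
    ciphertext, tag = cipher.encrypt_and_digest(plaintext)
    return cipher.nonce, ciphertext, tag

def decrypt(nonce: bytes, ciphertext: bytes, tag: bytes, key: bytes) -> bytes:
    cipher = AES.new(key, AES.MODE_EAX, nonce=nonce)
    return cipher.decrypt_and_verify(ciphertext, tag)

if __name__ == "__main__":
    key = derive_key(b"pre-shared node secret")            # 32 bytes = 256 bits
    nonce, ct, tag = encrypt(b"sensor reading: 72 bpm", key)
    print(decrypt(nonce, ct, tag, key))
```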

C-14
REAL-TIME BIG DATA ADOPTION AND ANALYTICS OF CLOUD
COMPUTING PLATFORMS

M. Anoop,
Research Scholar,
Vels Institute of Science, Technology and Advanced Studies (VISTAS), Chennai.
Corresponding Author: profanoopcs@gmail.com

ABSTRACT
Within the past few years, tremendous changes have been happening in cloud computing, Big Data, communication technology and the Internet of Things. The shift to the newest technologies brings new upcoming challenges. Big Data is becoming an important transformation for enterprises and the scientific community. IoT, social websites, the life sciences and automated smart devices will fuel the explosion of data for the near future. This transformation provides an opportunity to find insights that make businesses more agile and to answer questions which were previously considered beyond our reach. The core purpose of this paper is to discuss the views of various researchers, the Big Data tools and techniques required for storage, management and analytics, and its growth and expected challenges in various domains. Big Data sets are so large and sophisticated that it is very difficult to process and analyse the data with traditional approaches alone. Effective data management for Big Data sets is not possible with traditional RDBMSs (relational database management systems). Due to the size of Big Data, it is very difficult to extract knowledge in a proper and required manner, yet bringing insights out of this large amount of data is very useful. In fact, the raw input is the data that is processed into information; an individual data item has no meaning on its own, but a volume of data can provide meaningful output and reveal trends and patterns.
Keywords: Real-Time Processing, Big Data, Data Analytics.

C-15
PROVIDING OPTIMAL PERFORMANCE AND SECURITY GUARANTEES USING
DROPS FOR THE CLOUD

S.Nanthini1, C.Surya2, A.Sumathi3,M.Gomathi4


1,2,3
Assistant Professor (CSE), Sri Ramakrishna College of Engineering, Perambalur,
TamilNadu
4
Assistant Professor (CSE),Arjun College of Technology,Pollachi,TamilNadu
Corresponding author:nanthini27j88@gmail.com

ABSTRACT
The IaaS (Infrastructure as a Service) cloud model offers enhanced resource manageability and availability, where tenants, protected from the minutiae of hardware maintenance, take charge of computing resources to deploy and operate complex systems. However, many organisations operating on sensitive data avoid migrating their operations to IaaS platforms due to security concerns. We consider an IaaS consisting of protocols for a trusted launch of virtual machines and domain-based storage protection. The protocols allow trust to be established by remotely attesting the host platform configuration before launching guest virtual machines, and guarantee confidentiality of data in remote storage, with encryption keys maintained outside the IaaS domain. The experimental outcomes display the validity and efficiency of the proposed protocols. The framework model was implemented on a test bed for an electronic health record system. The proposed protocols can be combined with existing cloud environments, firming up the trust model in cloud network communications and applying searchable encryption schemes to create safer cloud storage mechanisms.
Keywords- infrastructure cloud; virtual machine; remote storage; efficiency

INTRODUCTION

Cloud computing has progressed from a bold vision to considerable deployments in various application domains. However, the complexity of the technology underlying cloud computing introduces novel security threats and challenges. Threats and mitigation techniques for the IaaS model have been under exhaustive scrutiny in recent years, while the industry has invested in enhanced security solutions and issued best-practice recommendations. We see two main improvement vectors concerning these operations. First, the details of such proprietary solutions are not disclosed and can consequently not be adopted and improved by other cloud platforms. Second, to the best of our knowledge, none of the solutions affords cloud tenants a proof regarding the integrity of the compute hosts supporting their slice of the cloud infrastructure. The proposed set of protocols for virtual machines in IaaS affords tenants a proof that the requested VM instances have been launched on a host with the expected software stack. Another relevant security mechanism is encryption of virtual disk volumes, implemented and enforced at the compute-host level. Although data encryption at rest is offered by several cloud providers and can be configured by tenants in their VM instances, the functionality and migration capabilities of such solutions are severely constrained.

EXISTING SYSTEM

The Trusted Cloud Computing Platform (TCCP) guarantees that VMs run on a trusted hardware and software stack on a remote and initially untrusted host. To support this, a trusted coordinator stores the list of attested hosts that run a "trusted virtual machine monitor" which can securely run the client's VM. Trusted hosts maintain in memory an individual trusted key used for identification every time a client launches a VM. The paper gives a precise preliminary set of ideas for trusted VM launch and migration, in particular the use of a trusted coordinator. A drawback of this approach is that the trusted coordinator keeps information about all hosts deployed on the IaaS platform, making it a valuable target for an adversary who tries to expose the public IaaS provider to privacy attacks.

DISADVANTAGES
 The system is contingent on the scheme employed by the user for data confidentiality.
 The probable amount of loss in case of data tampering, as a result of intrusion or access through other VMs, cannot be reduced.
 These two schemes do not protect the data files against tampering.
 Loss due to problems arising from virtualization and multi-tenancy is not addressed.

PROPOSED SYSTEM

The system proposes Division and Replication of Data in the Cloud for Optimal Performance and Security (DROPS), which intelligently fragments user files into pieces and replicates them at strategic locations within the cloud. The DROPS methodology does not store the whole file on a single node, so as to avoid compromise of all the information in case of a successful attack on that node. Instead, it divides the file and keeps the fragments on multiple nodes. Cloud computing has gained considerable attention in recent years; it is a form of distributed computing over the internet in which resources and utility platforms are shared on demand and on a pay-per-use basis.
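As a rough illustration of the fragmentation-and-placement idea described above (a sketch only, not the DROPS implementation; node names, fragment size and replication factor are assumed for the example):

# Hypothetical sketch of DROPS-style fragmentation and replication (not the
# authors' implementation). Node names, fragment size and replication factor
# are illustrative assumptions.
def fragment(data: bytes, fragment_size: int = 1024):
    """Split a file's bytes into fixed-size fragments."""
    return [data[i:i + fragment_size] for i in range(0, len(data), fragment_size)]

def place_fragments(fragments, nodes, replicas=2):
    """Assign each fragment to `replicas` distinct nodes in round-robin order,
    so that no single node stores the complete file."""
    assert replicas < len(nodes), "need more nodes than replicas"
    placement = {}
    for idx, frag in enumerate(fragments):
        targets = [nodes[(idx * replicas + r) % len(nodes)] for r in range(replicas)]
        placement[idx] = (frag, targets)
    return placement

if __name__ == "__main__":
    data = b"example user document " * 200
    nodes = ["node-A", "node-B", "node-C", "node-D"]
    for idx, (frag, where) in place_fragments(fragment(data), nodes).items():
        print(f"fragment {idx}: {len(frag)} bytes -> {where}")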

Fig 1 System Architecture

ADVANTAGES

 The suggested system fragments and replicates the data file over cloud nodes.
 In the instance of a successful attack, no meaningful information is exposed to the attacker.
 The non-cryptographic nature of the proposed scheme makes the required operations on the data faster while enhancing security.

VII. SCREEN SHOTS

Fig 2 Main Entry
Fig 3 Registration Form
Fig 4 Cloud Service Providers
Fig 5 Receive Request
Fig 6 Send to CSP
Fig 7 Response to CSP
Fig 8 File Downloaded

VIII. CONCLUSION
From a tenant's point of view, the cloud security model does not yet measure up to the threat models developed for the traditional model, in which the hosts are operated and used by the same organization. Nevertheless, there is steady progress towards strengthening the IaaS security model. In this work a framework for trusted infrastructure cloud deployment is presented, with two focus points: VM deployment on trusted compute hosts and domain-based protection of stored data.

REFERENCES
[1] T. Mohan, Shilpa, Sudheendran, Fepslin Athishmon, "Divisioning and Replicating Data in Cloud for Optimal Performance and Security," International Journal of Pure and Applied Mathematics, 118(8):271-274, January 2018.
[2] I. Ahmad and S. U. Khan, "Comparison and analysis of ten static heuristics-based Internet data replication techniques," Journal of Parallel and Distributed Computing, Vol. 68, No. 2, 2008, pp. 113-136.
[3] M. Ali, A. N. Khan, M. L. M. Kiah and S. A. Madani, "Enhanced dynamic credential generation scheme for protection of user identity in mobile-cloud computing," The Journal of Supercomputing, Vol. 66, No. 3, 2013, pp. 1687-1706.

C-16
IMPACT OF DATA MINING TECHNIQUES IN HEALTHCARE RESEARCH

R. Bagavathi Lakshmi1, S. Parthasarathy2

1Department of Computer Applications, V.V. Vanniaperumal College for Women (Autonomous), Virudhunagar, India
2Department of Computer Applications, Thiagarajar College of Engineering, Madurai, India
Corresponding author: bagunithi@gmail.com

ABSTRACT
Healthcare research is essential to modern society because the emergence of new diseases leaves people despondent. In the 21st century, hospitals have become computerized and keep all medical records and transactions in a repository. Core research domains such as image processing have brought significant advancement to healthcare, and data mining has contributed as well. Data mining is the research area that extracts knowledge from data repositories. Mining algorithms such as clustering and classification can be applied in many areas such as brain tumour detection, cancer detection, heart disease, hospital management, cell mutation, etc. This study explores such algorithms, architectures, and the problems addressed by various researchers. The paper focuses on results relating to brain tumours and on advanced mining techniques for medical records.
Keywords: Healthcare, Brain Tumour Detection, Data Mining
1. INTRODUCTION
The ultimate objective of any research is the betterment of daily life, whether as an immediate contribution or as a foundation for future development. Core research areas such as image processing in healthcare [1], green computing [2, 3], and IoT [4] directly influence human life and the environment. Across all fields, data are being collected and accumulated at a rapid pace. There is a critical need for a new generation of theories and tools to help humans extract useful information (knowledge) from the rapidly growing volumes of digital data. The paper is organized into the following sections. Section II explains the role of data mining in the healthcare industry: it outlines mining algorithms and their applications in healthcare, highlights the contribution of clustering and classification to that industry, discusses the contribution of mining algorithms to brain tumour detection, patient states and other diseases, and describes how knowledge can be extracted from social media and cloud-based applications. Cloud-based mining algorithms are vital and give detailed output data. Section III gives the conclusion of this paper.

II. DATA MINING IN HEALTHCARE


Healthcare covers the elaborate processes of diagnosis, treatment and prevention of illness, injury and other physical and mental impairments in humans [5]. Clustering is a common descriptive task in which one seeks to identify a finite set of categories or clusters to describe the data [6]. Rui Veloso [7] used the vector quantization technique in a clustering approach for predicting readmissions in intensive medicine. The results of that work help to characterize the different types of patients having a higher probability of being readmitted. There are five brain tumour detection approaches: clustering, classification, genetic algorithms, neural networks, and region growing and thresholding approaches.

Fig 1 – Knowledge discovery of healthcare data

Duggal et al. [8] compared numerous classification models that predict the hospital admission rate for diabetic patients. They found that the random forest (RF) algorithm [9] was the optimum classifier for that task. Strack et al. [10] studied the impact of glycated haemoglobin (HbA1c) measurement on the admission rate of diabetic patients. They used multivariable logistic regression on a dataset of diabetic patient encounters from the records of 130 US hospitals. They concluded that obtaining a measurement of HbA1c for patients with diabetes is a useful predictor of readmission rates (i.e., patient admission to a hospital within thirty days of being discharged from an earlier hospital stay). B. V. Kiranmayee et al. [11] propose a novel design to detect brain tumours by classifying two medical images. During the training phase, the scan images are loaded and the dataset is built. In the testing phase the original image is compared to test whether or not a tumour is found. The dataset can be built from images collected from the web. The dataset was collected from the University of California because they maintain records of diabetic patients over a very long period (Table 1).

Table 1. Statistics on selected data in the original dataset

Race: Caucasian 76009 (74.78%); African American 19210 (18.88%); Hispanic 2037 (2.00%); Asian 641 (0.63%); Others & Unknown 3779 (3.71%)
Time in Hospital: Median 4 days; Average 4.4 days; Minimum 1 day; Maximum 14 days
Discharge Disposition: Discharged to home 60234 (59.19%); Expired or discharged to another healthcare facility, etc. 41532 (40.81%)
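The classification step discussed above can be illustrated with a minimal sketch (not the cited authors' code); the CSV file name and the "readmitted" label column are hypothetical, and class_weight="balanced" is included only because healthcare datasets are typically imbalanced, as noted later in this paper.

# Illustrative sketch only: a random forest classifier for hospital readmission,
# in the spirit of the studies cited above. The CSV file name and the
# "readmitted" label column are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

df = pd.read_csv("diabetic_encounters.csv")          # assumed input file
X = pd.get_dummies(df.drop(columns=["readmitted"]))  # one-hot encode categoricals
y = df["readmitted"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# class_weight="balanced" compensates for the imbalanced classes common in
# healthcare datasets.
clf = RandomForestClassifier(n_estimators=200, class_weight="balanced",
                             random_state=42)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))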

Mining algorithms applied to social media data can visualize the actual state of the market. One study included 58,575 recorded adverse events (AEs) of 13,192 patients (median age 50-59; 10,333 males and 2,859 females) with advanced heart failure who received an LVAD between 2006 and 2015. Data were extracted from INTERMACS, a national registry comprising over 180 clinical centers. In the proposed framework,
1) population-level healthcare data scattered across disparate local data sources are integrated, which provides plentiful data for the data mining process;
2) computational infrastructure and resources can be delivered by cloud computing platforms in a reliable, scalable, and efficient manner, which satisfies the computational and financial demands of building healthcare data mining services;
3) the service development method is modularized, which makes service development, update, and maintenance easier and faster.

III. CONCLUSION
Data mining is a key technique that applies in a variety of fields. Its convergence with image processing, cloud computing and software engineering is significant, but applying mining techniques in the healthcare industry is especially valuable because it saves human lives. There is open ground to apply mining algorithms to all areas of medical records; the records are so numerous that mining concepts are required. This paper is an elaborate study of published results and of mining techniques that vary depending on the dataset, its size and the application. The paper finds that a common characteristic among healthcare datasets is that they are highly imbalanced: the majority and minority classes are not balanced, resulting in erroneous predictions when run through classifiers. Finally, the study highlights results that have already been published and tested on real-time data, and suggestions are made.

REFERENCES
[1] M. C. Tayade et al., "Role of image processing technology in healthcare sector: Review," International Journal of Healthcare and Biomedical Research, Volume 2, Issue 3, April 2014, pp. 8-11.
[2] S. Pandikumar, M. Sumathi, "Energy Efficient Algorithm for High Speed Packet Data Transfer on Smartphone Environment," International Journal of Engineering and Advanced Technology, Volume 8, Issue 6, August 2019.
[3] S. Pandikumar et al., "Principles and Holistic Design of Green Web Portal," International Journal of Computer Applications (0975-8887), Volume 65, No. 9, March 2013.
[4] S. Pandikumar and Rajappan Vetrivel, "Internet of Things Based Architecture of Web and Smart Home Interface Using GSM," (2014), pp. 1721-1727.
[5] J.-J. Yang, J. Li, J. Mulder, Y. Wang, S. Chen, H. Wu, Q. Wang, and H. Pan, "Emerging information technologies for enhanced healthcare," Comput. Ind., vol. 69, pp. 3-11, 2015.
[6] U. Fayyad, G. Piatetsky-Shapiro, and P. Smyth, "From data mining to knowledge discovery in databases," AI Mag., pp. 37-54, 1996.
[7] R. Veloso, F. Portela, M. F. Santos, Á. Silva, F. Rua, A. Abelha, and J. Machado, "A Clustering Approach for Predicting Readmissions in Intensive Medicine," Procedia Technol., vol. 16, pp. 1307-1316, 2014.
[8] R. D. Duggal, S. Shukla, S. Chandra, B. Shukla, and S. K. Khatri, "Predictive Risk Modelling for Early Hospital Readmission of Patients with Diabetes in India," International Journal of Diabetes in Developing Countries, 36(4), 2016, pp. 519-528.
[9] D. Dietrich, B. Heller, and B. Yang, "Data Science & Big Data Analytics: Discovering, Analyzing, Visualizing and Presenting Data," John Wiley & Sons, 2015.
[10] B. Strack, J. P. DeShazo, C. Gennings, J. L. Olmo, S. Ventura, K. J. Cios, and J. N. Clore, "Impact of HbA1c Measurement on Hospital Readmission Rates: Analysis of 70,000 Clinical Database Patient Records," BioMed Research International, vol. 2014, 2014.
[11] B. V. Kiranmayee et al., "A Novel Data Mining Approach for Brain Tumour Detection," International Journal of Computer Applications, 39(18):129-140, Jan 2016.

C-17
DENSITY-BASED CLUSTERING: AN OVERVIEW

Vinolyn Vijaykumar1,*, R. Kiruthika2

1,2Assistant Professor, Alpha Arts and Science College, Chennai - 600116.
*Corresponding author: vinolynvijay@gmail.com

ABSTRACT
Density-based clustering algorithms are important for finding clusters with no regular shape, that is, non-linear structures, based on density. They work by detecting areas where points are concentrated and areas where points are separated by regions that are empty or sparse. Points that do not belong to any cluster are called noise. These tools use unsupervised machine learning clustering algorithms which identify patterns based purely on spatial location and the distance to a specified number of neighbors. The algorithms are considered unsupervised because they do not require any training on what it means to be a cluster.
Keywords: Clustering, Types, Density-based Clustering

INTRODUCTION
Clustering is the task of dividing the population or data points into a number of groups such that data points in the same group are more similar to other data points in that group than to those in other groups. In simple words, the objective is to segregate groups with similar traits and assign them into clusters.

DENSITY-BASED CLUSTERING
Density-based clustering refers to unsupervised learning methods that identify distinctive groups/clusters in the data, based on the idea that a cluster in a data space is a contiguous region of high point density, separated from other such clusters by contiguous regions of low point density.

Clustering methods are used to find the clusters in data points. The following clustering tool provides three clustering methods:
 DBSCAN—Density-Based Spatial Clustering of Applications with Noise (DBSCAN) is the most widely used density-based algorithm. Density reachability and density connectivity are the two concepts used in DBSCAN for separating dense clusters from sparser noise. DBSCAN is the fastest of the clustering methods; it is applicable when a clear Search Distance is known and works well on potential clusters of similar density.
 Self-adjusting (HDBSCAN)—uses a range of distances to separate clusters of varying densities from sparser noise. The HDBSCAN algorithm is the most data-driven of the clustering methods, and thus requires the least user input.
 Multi-scale (OPTICS)—uses the distance between neighboring features to produce a reachability plot, which is then used to separate clusters of varying densities from noise. The OPTICS algorithm offers the most flexibility in fine-tuning the clusters that are detected, though it is computationally intensive, particularly with a large Search Distance.
The Minimum Features per Cluster parameter is used in the calculation of the core-distance. All three clustering methods work with the concept of a core-distance for each point: it is a measure of the distance required to travel from each point to the defined minimum number of features.

 If a large number of features per cluster is chosen, the corresponding core-distance will be larger.
 If a small number of features per cluster is chosen, the corresponding core-distance will be smaller.
At the borders of clusters, points will have large core-distances and will not be included in any cluster.

Illustration of the core-distance, measured as the distance from a particular feature that must
be traveled to create a cluster with a minimum of 4 features including it.
Search Distance (DBSCAN and OPTICS)
For Defined distance (DBSCAN), if the Minimum Features per Cluster cannot be found
within the Search Distance from a particular point, then that point will be marked as noise. In
other words, if the core-distance (the distance required to reach the minimum number of
features) for a feature is greater than the Search Distance, the point is marked as noise.
The Search Distance, when using Defined distance (DBSCAN), is treated as a search cut-off.
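A minimal sketch of these parameters using scikit-learn's DBSCAN (illustrative only, not the clustering tool described above): eps plays the role of the Search Distance, min_samples that of the Minimum Features per Cluster, and points labelled -1 are reported as noise.

# Minimal DBSCAN sketch: eps ~ Search Distance, min_samples ~ Minimum Features
# per Cluster. Points labelled -1 are treated as noise. Data are synthetic.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_moons

X, _ = make_moons(n_samples=300, noise=0.07, random_state=0)  # non-linear shapes

labels = DBSCAN(eps=0.2, min_samples=5).fit_predict(X)

n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
n_noise = int(np.sum(labels == -1))
print(f"clusters found: {n_clusters}, noise points: {n_noise}")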

CONCLUSION
Density-based clustering algorithms play an important role in finding non-linearly shaped structures based on density. Although DBSCAN is the most widely used density-based algorithm, it has both advantages and disadvantages. Among the advantages: it does not require a priori specification of the number of clusters, it is able to detect noise while clustering, and it can find clusters of arbitrary size and arbitrary shape. Among the disadvantages: DBSCAN fails in the case of clusters of varying density, it fails on neck-type datasets, and it does not work well on high-dimensional data.

REFERENCES
[1] H. Liu, M. Dash, X. Xu, and C. Technology, “‘1+ 1 > 2’:Merging Distance and Density
Based Clustering,” Database Syst. Adv. Appl. Seventh Int. Conf. on. IEEE, pp. 32–39,
2001.
[2] E. Bic and D. Yuret, “Locally Scaled Density Based Clustering,” Int. Conf. Adapt. Nat.
Comput. Algorithms. Springer, Berlin, Heidelb., pp. 739–748, 2007.
[3] M. Parimala, D. Lopez, and N. C. Senthilkumar, “A Survey on Density Based Clustering
Algorithms for Mining Large Spatial Databases,” Int. J. Adv. Sci. Technol., vol. 31, pp.
59–66, 2011.
[4] C. Böhm, C. Plant, B. Wackersreuther, and R. Noll, “Density-based Clustering using
Graphics Processors,” Proc. 18th ACM Conf. Inf. Knowl. Manag. ACM, pp. 661–670,
2009.
[5] A. Tepwankul and S. Maneewongwattana, “U-DBSCAN : A DensityBased Clustering
Algorithm for Uncertain Objects,” Data Eng. Work., pp. 136–143, 2010.
[6] L. Ma, L. Gu, B. Li, S. Qiao, and J. Wang, “G-DBSCAN : An Improved DBSCAN
Clustering Method Based On Grid,” Adv Sci Technol Lett, vol. 74, no. Asea, pp. 23–28,
2014.
[7] X. Xu, “Density-Based Clustering in Spatial Databases : The Algorithm GDBSCAN
and Its Applications,” Data Min. Knowl. Discov., vol. 2, no. 2, pp. 169–194, 1998

C-18
PRIVACY-PRESERVING OF FILES AND SECURING THE DATA SETS USING
SYSTEM BASED CLOUD STORAGE

D. Suganthi*1
*1Final Year M.Tech (IT Dept), College of Engineering, Anna University, Guindy Campus, Chennai, T.N, India.
Corresponding author: sugan5530@gmail.com

ABSTRACT
To protect outsourced data in cloud storage against corruption, adding fault tolerance to cloud storage, together with data integrity checking and failure repair, becomes critical. Recently, regenerating codes have gained popularity due to their lower repair bandwidth while providing fault tolerance. Existing auditing methods for regenerating-coded data provide only private auditing and require the data owner to handle auditing and repairing, which is sometimes impractical. In this paper, we propose a public auditing scheme for regenerating-code-based cloud storage.

To solve the regeneration problem of authenticators, we introduce a proxy, which is privileged to regenerate the authenticators, into the traditional public auditing system model. Moreover, we design a novel publicly verifiable authenticator, which is generated by a couple of keys and can be regenerated using partial keys. Thus, our scheme can completely release data owners from the online burden.

In addition, we randomize the encoding coefficients with a pseudorandom function to preserve data privacy. Extensive security analysis shows that our scheme is provably secure under the random oracle model, and experimental evaluation indicates that it is highly efficient and can be feasibly integrated into regenerating-code-based cloud storage.
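The coefficient randomization mentioned above can be illustrated with a toy sketch (not the paper's construction): HMAC-SHA256 is used as a pseudorandom function to derive per-block blinding values; the prime modulus and the owner key are assumptions.

# Illustrative sketch only (not the paper's construction): using HMAC-SHA256 as a
# pseudorandom function to derive per-block blinding coefficients, so that values
# released for auditing do not leak the underlying data blocks.
import hmac, hashlib

P = 2**127 - 1  # prime modulus for this toy field; an assumption

def prf_coefficient(key: bytes, block_index: int) -> int:
    """Derive a pseudorandom coefficient for one block from a secret key."""
    digest = hmac.new(key, block_index.to_bytes(8, "big"), hashlib.sha256).digest()
    return int.from_bytes(digest, "big") % P

def blind_block(block_value: int, key: bytes, block_index: int) -> int:
    """Blind a (field-encoded) data block with the PRF output."""
    return (block_value + prf_coefficient(key, block_index)) % P

if __name__ == "__main__":
    key = b"owner-secret-key"                 # hypothetical data-owner secret
    print(blind_block(123456789, key, block_index=0))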

INDEX TERMS
Cloud storage, regenerating codes, public audit, privacy preserving, authenticator
regeneration, proxy, privileged, provable secure

C-19
HEALTHCARE DATA PREDICTION SYSTEM USING COLLABORATIVE
FILTERING - MACHINE LEARNING TECHNIQUE
Heerah D1*
1Computer Science and Engineering Dept, PES University, Outer Ring Road,
Hosakerehalli, Bangalore, India
*Corresponding Author: heerah1493@gmail.com

ABSTRACT
Healthcare data is very rich: it includes records of services received, the conditions of those services, and clinical outcomes or information concerning those services. With the development of powerful machine learning systems, it is now possible to gain greater insights from the available information. Machine learning is when a computer has been taught to identify patterns by providing it with data and an algorithm to understand that data. The prediction system designed here suggests medicines/drugs prescribed for a particular disease by training on the patient's historical medical data using a collaborative filtering technique. The predictive results also include doctor suggestions based on user rating information, and the system allows the user to query for drugs that satisfy a set of conditions based on side effects and symptoms.
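A minimal sketch of the collaborative filtering idea (illustrative only, not the author's system) using item-based similarity on a small, made-up patient-by-drug rating matrix:

# Illustrative sketch: item-based collaborative filtering on a tiny, made-up
# patient-by-drug rating matrix using cosine similarity. Names and data are assumptions.
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

drugs = ["drug_A", "drug_B", "drug_C", "drug_D"]
# Rows = patients, columns = drugs; 0 means "not rated / not prescribed".
ratings = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 0, 5, 4],
], dtype=float)

item_sim = cosine_similarity(ratings.T)        # drug-to-drug similarity

def recommend(patient_idx: int, top_n: int = 2):
    """Score unrated drugs as a similarity-weighted sum of the patient's ratings."""
    user = ratings[patient_idx]
    scores = item_sim @ user
    scores[user > 0] = -np.inf                 # do not re-recommend rated drugs
    best = np.argsort(scores)[::-1][:top_n]
    return [drugs[i] for i in best if scores[i] > -np.inf]

print(recommend(patient_idx=1))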

C-20

DATA COLLECTION IN WIRELESS SENSOR NETWORKS FOR IOT USING PREDICTION

C. John Paul1, Aparyay Kumar2

1Assistant Professor, Department of Computer Applications, Alpha Arts and Science College, Porur, TN
2Assistant Professor, Department of Computer Applications, Alpha Arts and Science College, Porur, Chennai, TN
Corresponding authors: jpaasc@gmail.com1, aparyay@gmail.com2

ABSTRACT
One of the most prominent and comprehensive ways of collecting data in sensor networks is to periodically extract raw sensor readings. This way of data collection permits complex analysis of the data, which may not be possible with in-network aggregation or query processing. For many applications in wireless sensor networks (WSNs), users may want to continuously extract data from the networks for analysis. However, exact data extraction is difficult: it is usually too costly to obtain all sensor readings, and also unnecessary in the sense that the readings themselves only represent samples of the true state of the world.
An energy-efficient framework for clustering-based data collection in wireless sensor networks that incorporates an adaptive scheme for enabling/disabling prediction is recommended. To realize prediction techniques efficiently in WSNs, the adaptive scheme is introduced to regulate the prediction used in this framework, and algorithms are designed to exploit the benefit of the adaptive scheme when enabling or disabling prediction operations. The framework avoids the need for extensive node-to-node propagation of aggregates, and instead uses faster and more efficient cluster-to-cluster propagation.
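A minimal sketch of prediction-based data suppression, under the assumptions of a shared last-value predictor and a fixed error tolerance (both illustrative; the adaptive scheme above is more elaborate):

# Illustrative sketch of prediction-based data collection: sensor and sink share
# the same simple predictor (last-value model); a reading is transmitted only when
# the prediction error exceeds a tolerance. Model and threshold are assumptions.
import random

TOLERANCE = 0.5           # acceptable prediction error (sensor units); assumed

def collect(readings, tolerance=TOLERANCE):
    """Return the subset of readings the node actually transmits."""
    transmitted = []
    last_sent = None
    for t, value in enumerate(readings):
        predicted = last_sent                  # sink predicts "same as last sent"
        if predicted is None or abs(value - predicted) > tolerance:
            transmitted.append((t, value))     # prediction failed: send an update
            last_sent = value
        # otherwise: suppress transmission, sink keeps using the predicted value
    return transmitted

if __name__ == "__main__":
    random.seed(1)
    temps = [25 + 0.1 * t + random.uniform(-0.3, 0.3) for t in range(50)]
    sent = collect(temps)
    print(f"transmitted {len(sent)} of {len(temps)} readings")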

C-21
MONITORING OF CLUSTER FOR HADOOP OPTIMIZED
MACHINE TRANSLATION

V.Prema
Assistant Professor, Dept. of Computer Applications, Alpha Arts and Science College,
Chennai, TN
Corresponding author: vprema78@gmail.com
ABSTRACT
The main objective of Machine Language Translation in a distributed environment is to translate a document from English to Indian languages. The system consists of a pre-processor which categorizes every word of every sentence as a vowel, verb, noun, pronoun or adjective based on its usage and sends them to the engine. The task of the engine is to parse these words, find the corresponding meaning of every word in the language specified by the user, and then send the output to the post-processor. The job of the post-processor is to arrange these translated words into meaningful sentences. During translation the system creates a translation memory, which is very helpful when translating sentences with the same sentence structure. The system needs high speed, memory and time; for this reason the time taken for translation is very high on a standalone system, as all thread management for translation has to be handled by the programmer. The objective is to build an authentication-cum-authorization scheme for the Hadoop cluster in order to provide security that prevents unauthorized users from accessing HDFS and running MapReduce jobs. The cluster also needs to be monitored for failures in services and for high CPU workload.

1.WHAT IS HADOOP?

The Apache Hadoop project develops open-source software for reliable, distributed computing. The Apache Hadoop software library is a framework that allows the distributed processing of large data sets across clusters of machines using simple and efficient programming models.
 Hadoop is a free, Java-based programming framework that supports the processing of large data sets in a distributed computing environment.
 Hadoop helps to run applications on systems with thousands of nodes handling petabytes of data.

2.OPTIMIZATION OF HADOOP & MAP-REDUCE


 Tuning factors
Tune the cluster system and the HDFS parameters for performance and reliability. HDFS uses a basic file system block size of 64 MB, and the JobTracker also chunks task input into 64 MB segments.
 Block service threads

<property>
<name>dfs.datanode.handler.count</name>
<value>30</value>
<description>The number of server threads for the datanode.</description>
</property>

3.TUNING TO IMPROVE JOB PERFORMANCE

Tuning the Reduce Task Setup

Figure 6.3: Behind the scenes in the reduce task

4.HADOOP ARCHIVES

Hadoop Archives, or HAR files, are a file archiving facility that packs files into HDFS blocks more efficiently. The Hadoop archive tool runs a MapReduce job to process the input files in parallel, so a MapReduce cluster must be running in order to use it.
To run the archive command:
%hadoop archive -archiveName files.har /my/files /my

5.COMPRESSION

Table 6.2: A summary of compression formats


 A hadoop-site.xml Specification for Map Output Level Compression with LZO
<property>
<name>mapred.compress.map.output</name>
<value>true</value>
</property>
<property>
<name>mapred.map.output.compression.codec</name>
<value>org.apache.hadoop.io.compress.LzoCodec</value>
</property>
 A hadoop-site.xml Specification for Final Output Files to be Compressed with LZO
<property>
<name>mapred.output.compress</name>
<value>true</value>
</property>
<property>
<name>mapred.output.compression.type</name>
<value>BLOCK</value>
</property>
<property>
<name>mapred.output.compression.codec</name>
<value>org.apache.hadoop.io.compress.LzoCodec</value>
</property>

6.MACHINE TRANSLATION PROCESS

 INPUT 1: Kanak Vrindavan is a popular picnic spot in Jaipur.
 PRE-PROCESSING: Kanak-Vrindavan is a popular picnic spot in Jaipur
 POS: Kanak-Vrindavan/N is/V a-popular-picnic-spot/N in/P Jaipur/N
 ENGINE: The engine works on a Tree Adjoining Grammar, which is rule based, because English follows the SVO model while Indian languages follow the SOV model.
 NVNPN = 0^N 4^N 3^P 2^N 1^V
 SYNTHESIZER
 OUTPUT
Hindi output: [Hindi text]
Marathi output: [Marathi text]
Tamil output: [Tamil text]

 INPUT 2: The Amber Palace is a classic example of Mughal and Hindu architecture
 PRE-PROCESSING: The Amber-Palace is a classic example of Mughal-and-Hindu architecture
 POS: The-Amber-Palace/N is/V a-classic-example/N of/P Mughal-and-Hindu-architecture/N
NVNPN = 0^N 4^N 3^P 2^N 1^V
TAMIL OUTPUT: [Tamil text]
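A hypothetical sketch of the rule-based SVO-to-SOV reordering step implied by the pattern NVNPN = 0^N 4^N 3^P 2^N 1^V; the tag format and rule table are assumptions, and the word-level dictionary lookup is omitted.

# Hypothetical sketch of the SVO -> SOV reordering suggested by the pattern
# "NVNPN = 0^N 4^N 3^P 2^N 1^V": tagged chunks (index 0..4) are re-emitted in the
# order 0,4,3,2,1 before dictionary lookup. Tag format and rule table are assumptions.
REORDER_RULES = {
    ("N", "V", "N", "P", "N"): [0, 4, 3, 2, 1],   # rule from the examples above
}

def reorder(tagged_chunks):
    """tagged_chunks: list of (chunk, tag) pairs produced by the POS step."""
    tags = tuple(tag for _, tag in tagged_chunks)
    order = REORDER_RULES.get(tags, list(range(len(tagged_chunks))))
    return [tagged_chunks[i] for i in order]

if __name__ == "__main__":
    pos_output = [("Kanak-Vrindavan", "N"), ("is", "V"),
                  ("a-popular-picnic-spot", "N"), ("in", "P"), ("Jaipur", "N")]
    print(" ".join(chunk for chunk, _ in reorder(pos_output)))
    # -> "Kanak-Vrindavan Jaipur in a-popular-picnic-spot is" (SOV order,
    #    ready for word-by-word lookup in the target-language dictionary)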

7. CREATION OF TRANSLATION MEMORY WITH SINGLE SENTENCE

INPUT SENTENCES: 10000 (file size 1.33 MB)

              NO_OF_NODES   OUTPUT TIME
SINGLE NODE        1        24 Hr 38 Min
MULTI NODE         2        12 Hr 30 Min
                   3         9 Hr 15 Min
                   4         6 Hr 47 Min

Chart: translation time in hours versus number of nodes (single node and 2-, 3-, 4-node clusters).

8.CONCLUSION
The Hadoop cluster was set up for the optimization of Machine Language Translation. Users trying to access the Hadoop cluster to run MapReduce jobs and to access HDFS are authenticated and authorized by the Kerberos and LDAP servers. The cluster is monitored by the Ganglia and Nagios network monitoring systems for failures in services and for CPU workload on all the nodes.
9. REFERENCES

[1] Hadoop Operations by Eric Sammer
[2] Hadoop in Action by Chuck Lam
[3] Hadoop in Practice by Alex Holmes
[4] Understanding and Deploying LDAP Directory Services by Timothy A. Howes
[5] The Nagios Book by Chris Burgess
[6] The Official Ubuntu Server Book by Kyle Rankin
[7] LDAP System Administration by Gerald Carter

C-22

AMBIENT INTELLIGENCE (AMI) AND ERGONOMICS – A STUDY

Dr. Maharasan K. S.1,*, Ms. Harine Senthil2

1Business Analyst, Technologies Division, True Friend Management Support Service Pvt Ltd
2Undergraduate Engineering Student, Information Technology, Dr. Mahalingam College of Engineering and Technology
*Corresponding author: maharasan@gmail.com

ABSTRACT
The presence of Information and Communication Technology (ICT) is felt everywhere, either directly or indirectly. ICT is emerging as a deep-rooted tree with multiple branches of specialization, offering services to almost all processes practiced in the world. In spite of this deep and wide presence, its potential is yet to be tapped fully in certain areas. Ambient Intelligence is one such area with huge potential that is currently used only in selected domains, Health Care Informatics and Higher Education being a few to name, and this coverage grows by leaps and bounds every day. Technology cannot completely replace human beings; it can serve only as an assistive and productivity-enhancing mechanism. The advent of new ICT features is enabling skilled workers to be very productive and accurate in their day-to-day tasks. The purpose of any automation system is to ensure that the end users and beneficiaries remain in their comfort zone. This is where smart homes, smart offices and other products and services arrive with SMART as a prefix. Ambient Computing is one such technology that enables beneficiaries to utilize the available resources seamlessly, without hassle, while remaining in their comfort zone. This is where the term ergonomics comes into the picture; ergonomics deals with work environment layout. The objective of this paper is to consolidate information about the potential of Ambient Computing and Ambient Intelligence and how they can be used to propose a workplace ergonomics that ensures a comfortable workspace for end users. This work analyses the present collaboration between Ambient Intelligence and Ergonomics and proposes methods for the uncovered areas.

Keywords: Ambient Computing, Ambient Intelligence, Ergonomics, IoT, Machine Learning and Assistive Technology.

C-23
ANALYSIS OF BIG DATA CHALLENGES AND TRENDS IN RECENT
ENVIRONMENT

Dr. A. Saranya
Department of Computer Applications, Madurai Kamaraj University
Corresponding author:asaranyaalagar@gmail.com

ABSTRACT
Big Data refers to large collections of data sets containing plentiful information. Big data has the potential to revolutionize the art of management by enabling appropriate decisions to be taken on time. Extremely large data sets can be analyzed computationally to reveal patterns, trends and associations, turning unstructured data into structured data, and finding a solution for a business from such data is a key factor in today's market. A huge repository of terabytes of data is generated every day by modern information systems and digital technologies such as the Internet of Things and cloud computing. Scrutiny of these enormous data requires effort at multiple levels to extract knowledge for decision making. Therefore, big data analysis is a current area of research and development. The objective of this paper is to explore the impact of big data challenges, open research issues, and the various tools associated with them. As a result, this article provides a platform to explore big data at several stages. Additionally, it opens a new horizon for researchers to develop solutions based on the challenges and open research issues. The paper also covers different big data tools and their salient features. Future research directions in this field are wide open, but this paper tries to facilitate the exploration of the domain and the development of optimal techniques to deal with Big Data.
In recent years data have been produced at a dramatic pace, and scrutinizing these data is challenging for an ordinary person. To this end, in this paper we survey the various research issues, challenges, and tools used to analyze big data. From this it is understood that each big data platform has its distinct focus: some are designed for batch execution whereas others are good at real-time analytics, and each platform also has specific functionality. Different techniques used for the analysis include statistical analysis, machine learning, data mining, intelligent analysis, cloud computing, quantum computing, and data stream processing.

Key Words: Big Data, Big Data Analytics, Hadoop, Business Intelligence, Entity Recognition.

REFERENCES
[1] Rick Whiting. (2018), 2018 Big Data 100: The 10 Coolest Data Science And Machine
Learning Tools, accessed 10May 2018, <https://www.crn.com/slide-
shows/applications-os/300102941/2018-big-data-100-the-10-coolest-data-science-and-
machine-learning-tools.htm?itc=refresh>.
[2] A COMPLETE LIST OF BIG DATA ANALYTICS TOOLS TO MASTER IN 2018,
accessed 10 May 2018, <https://www.norjimm.com/blog/big-data-analytics-tools-to-
master-2018/>
[3] AmitVerma (2018), Top 10 Open Source Big Data Toolsin 2018, accessed 10 May 2018,
<https://www.whizlabs.com/blog/big-data-tools/>
[4] Top 11 Big Data Analytics Tools in 2018, accessed 10 May 2018,
<https://www.guru99.com/big-data-analytics-tools.html>
[5] Data Flair. (2018), 10 Best Big Data Analytics Tools for 2018, accessed 11 May 2018,
<https://data-flair.training/blogs/best-big-data-analytics-tools/>
[6] Assuncao,M. D., Calheiros, R. N., Bianchi, S., Netto, M. A., &Buyya, R. (2015). Big
Data computing and clouds: trends and future directions. Journal of Parallel and
Distributed Computing, 79, 3–15.
[7] Gil Press. (2016), Top 10 Hot Big Data Technologies,accessed 11 May 2018,
<https://www.forbes.com/sites/gilpress/2016/03/14/top-10-hot-big-data-
technologies/#36896a265d7b>

C-24

QUANTUM COMPUTING

R. Vajubunnisa Begum1,*, H. Jasmin2, M. Keerthana3

1Associate Professor, JBAS College for Women, Chennai
2Assistant Professor, JBAS College for Women, Chennai
3III Year, Electronics and Communication Science, JBAS College for Women, Chennai
*Corresponding author: vaju6666@gmail.com

ABSTRACT
With the continuous growth of technology and the desire to be linked to people and objects alike for more effective, faster and easier ways of doing things, the Internet of Things (IoT), the network of "smart" devices that gather and share data with one another, has grown exponentially. Business Insider Intelligence forecast that "by 2023, consumers, companies and governments will install 40 billion IoT devices worldwide." In this paper we investigate the employment of quantum computing for resolving problems in wireless communication systems using IoT.
INTRODUCTION

IoT is the network of interconnected things/devices which are embedded with sensors, software and network connectivity.
Quantum Computing and the Internet of Things (IoT), together coined Quantum IoT, define a concept of stronger security design which harnesses the laws of quantum mechanics for IoT security management. It also supports protected data storage, processing, communication and data dynamics.
Quantum computing began in the early 1980s, when physicist Paul Benioff proposed a quantum mechanical model of the Turing machine. Quantum computing is an area of computing based on the principles of quantum theory, which explains the behavior of energy and matter at the atomic and subatomic levels.
HOW QUANTUM COMPUTERS WORK
In quantum physics, if you apply an outside force to two atoms, it can cause them to become entangled, so that the second atom takes on properties of the first atom. Left alone, an atom will spin in all directions; the moment it is disturbed it chooses one spin, or one value, and at the same time the second entangled atom will choose the opposite spin. This allows scientists to know the value of qubits without actually observing them.
Classical computers that we use today can only encode information in bits that take the value 1 or 0, which restricts their ability. Quantum computing, on the other hand, uses quantum bits, or qubits. It harnesses the unique ability of subatomic particles that allows them to exist in more than one state, i.e. a 1 and a 0 at the same time. Superposition and entanglement are the two features of quantum physics on which these supercomputers are based. This empowers quantum computers to handle operations at speeds exponentially higher than conventional computers and at much lower energy consumption.
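A minimal numerical sketch of superposition and measurement (illustrative only, using no quantum SDK): a qubit is a normalized two-component complex state vector, a Hadamard gate creates an equal superposition, and measurement outcomes follow the squared amplitudes.

# Minimal sketch of superposition: a single qubit as a normalized complex state
# vector |psi> = a|0> + b|1>. A Hadamard gate puts |0> into an equal superposition;
# measurement probabilities are |a|^2 and |b|^2. Pure illustration, no quantum SDK.
import numpy as np

ket0 = np.array([1, 0], dtype=complex)                        # |0>
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard gate

state = H @ ket0                                   # (|0> + |1>) / sqrt(2)
probs = np.abs(state) ** 2                         # Born rule: ~[0.5, 0.5]

rng = np.random.default_rng(0)
samples = rng.choice([0, 1], size=1000, p=probs)   # simulated measurements
print("P(0), P(1) =", probs, "| measured 1s:", samples.sum())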
CONCLUSION
Quantum systems could seamlessly encrypt data, help us make sense of the huge amount of data we have already collected, and solve complex problems that even the most powerful supercomputers cannot, such as medical diagnostics and weather prediction. In the U.S., Google, IBM and NASA are experimenting with and building the first quantum computers.

REFERENCES
[1] www.wikipedia.com
[2] https://computer.howstuffworks.com/quantum-computer1.htm
[3] https://www.forbes.com/sites/bernardmarr/2017/07/10/6-practical-examples-of-how-
quantum computing-will-change-our-world/#1e10035780c1
[4] https://www.azoquantum.com/Article.aspx?ArticleID=101

C-25
IMAGE PROCESSING

A. Jemimah
Department of English, Rayalaseema University, Kurnool, Andhra Pradesh.
Corresponding author:jemimah.arshapogu@gmail.com

ABSTRACT
Image processing is the study of concepts, models and algorithms for the manipulation of images. It spans a wide variety of areas such as digitization, histogram manipulation, warping, filtering, segmentation and compression. Computer vision deals with concepts and algorithms for automating the development of visual perception, and involves tasks such as noise elimination, smoothing and sharpening of edges; segmentation of images to separate object regions and description of the segmented regions; and finally, interpretation of the scene. Image processing is a technique for performing operations on an image in order to obtain an enhanced image or to extract some valuable information from it.
C-26
ANALYSIS OF SCHEDULING ALGORITHMS FOR WIMAX

Mrs. P. Sudha1,*, Dr. A. Rengarajan2

1Research Scholar, Bharathiar University, Coimbatore, India.
2Professor, Veltech Multi Tech Dr. RS Engineering College, Avadi, Chennai
*Corresponding author: kannansudha2001@gmail.com

ABSTRACT
WiMAX is one of the recent technologies in the wireless world. The goal of WiMAX is to deliver wireless communications with Quality of Service (QoS) in a secured environment. The most important part of this design is the scheduling algorithm, and this part is not defined in the standard and is left open for vendors to implement as per their needs. In contrast to wireless LANs, WiMAX networks include many QoS mechanisms at the MAC level for protected services for data, voice and video. Its mobility feature differentiates it from earlier IEEE 802.16 protocols that supported static WiMAX and provided wireless communication only at fixed locations.

WiMAX is one of the most important broadband wireless technologies and is anticipated to be a viable alternative to traditional wired broadband techniques because of its cost efficiency. Being an emerging technology, WiMAX supports multimedia applications such as IP-based services, voice conferencing and online gaming. It is necessary to provide guaranteed Quality of Service (QoS) for traffic with very different characteristics, which is quite challenging for Broadband Wireless Access (BWA) networks; therefore, good scheduling is essential for the WiMAX system. The demand for high-speed broadband wireless systems, web access and multimedia services has increased hugely, as these applications are employed in all sectors such as trade and commerce, education and research, communications, and even leisure and entertainment. Consequently, the need for BWA has grown considerably as a result of the increase in the number and types of users. Because of their mobility and need for data access at all times, efficient broadband connectivity is much sought after. Hence WiMAX, one of a variety of wireless technologies that have emerged from IEEE to satisfy the demands of varied end users, is deployed to serve all end users. WiMAX technology relies on the IEEE 802.16 standard for BWA, which provides mobile broadband connectivity.

Some of the main benefits of WiMAX over other access network technologies are its longer range and its more refined support for Quality of Service (QoS) at the MAC level. Many different types of applications and services can be used in WiMAX networks, and the MAC layer is designed to support this convergence. The standard defines two basic operational modes: mesh and point-to-multipoint. In the former mode, subscriber stations (SS) can communicate with each other and with the base station (BS). In the point-to-multipoint mode, the subscriber stations are only allowed to communicate through the BS; thus, the provider can control the environment to ensure the QoS requirements of its customers. Hence, WiMAX provides wireless transmission of data using many transmission modes, from point-to-point links to portable and fully mobile internet access. The technology provides broadband speeds up to 10 Mbps without the need for cables. It is based on the IEEE 802.16 standard (also called Broadband Wireless Access), which provides services such as data, voice and video. Scheduling in WiMAX has become one of the most challenging issues, since it is responsible for distributing the available network resources among all users. In this paper the performance of Cross Layer Scheduling (CLS) and TCP-Aware Uplink Scheduling (TCP-AUS) is analysed.
REFERENCES
[1] R Pries, F Wamser, and D Staehle, K Heck, P Tran-Gia, “On traffic characteristics of a
broadband wireless internet access”, Next Generation Internet Networks, Vol:1, Iss: 3,
PP: 1–7, 2009.
[2] M Kihl, C Lagerstedt, A Aurelius, and P Odling, “Traffic analysis and characterization
of internet user behavior”, in Proceedings of International Congress on Ultra-Modern
Telecommunications and Control Systems and Workshops (ICUMT) (IEEE, Moscow,
18–20 October 2010), pp. 224–231.
[3] K Wongthavarawat, A Ganz, “Packet scheduling for QoS support in IEEE 802.16
broadband wireless access systems”, Int. J. Commun. Syst, 16(1), 81–96 (2003).
[4] T.C. Chen, J.C. Chen, and Y.Y. Chen, “Maximizing Unavailability Interval for Energy
Saving in IEEE 802.16e Wireless MANs,” IEEE Transactions on Mobile Computing,
vol. 8, no. 4, pp. 475–487, Apr. 2009.
[5] S.L. Tsao and Y.L. Chen, “Energy-efficient packet scheduling algorithms for real-time
communications in a mobile WiMAX system,” Computer Communications, vol. 31, no.
10, pp. 2350–2359, June 2008.
[6] H.L. Tseng, Y.P. Hsu, C.H. Hsu, P.H. Tseng, and K.T. Feng, “A Maximal Power-
Conserving Scheduling Algorithm for Broadband Wireless Networks,” in Proc. of IEEE
WCNC, 2008, pp. 1877–1882.
[7] J. Shi, G. Fang, Y. Sun, J. Zhou, Z. Li, and E. Dutkiewicz, “Improving Mobile Station
Energy Efficiency in IEEE 802.16e WMAN by Burst Scheduling,” in Proc. of IEEE
GLOBECOM, 2006.
[8] S.C. Huang, R.H. Jan, and C. Chen, “Energy Efficient Scheduling with QoS Guarantee
for IEEE 802.16e Broadband Wireless Access Networks,” in Proc. of the International
conference on Wireless Communications and Mobile Computing (IWCMC), pp. 547–
552, 2010.

[9] H. S. Kim and S. Yang, “Tiny MAP: an efficient MAP in IEEE 802.16/WiMAX
broadband wireless access systems,” Computer Communications, vol. 30, no. 9, pp.
2122–2128, 2007.
[10] OPNET Modeler [Online].Available:http://www.opnet.com/products/modeler/home.html
[11] Yekanlu E. and Joshi A., "Performance Evaluation of an Uplink Scheduling Algorithm in WiMAX", Blekinge Institute of Technology, Sweden, 2009.
[12] Zarrar Yousaf, F., Daniel, K., and Wietfeld, C. “Analyzing the Throughput and QoS
Performance of a WiMAX Link in an Urban Environment”, IN-TECH, pp. 307–320,
2009.

C-27
CLOUD COMPUTING

P. Sruthi Vennela
III Semester, Rayalaseema University, Kurnool, AP
Corresponding author: sruthi2566@gmail.com
ABSTRACT
Cloud computing is a type of computing that relies on sharing computing resources rather than having local servers or personal devices handle applications. The term refers to storing and accessing data over the internet. With cloud computing, users can access files and use applications from any device that can access the web. An important characteristic of cloud computing is resource pooling: the cloud provider pools computing resources to serve multiple customers with the help of a multi-tenant model. Another characteristic is on-demand self-service, one of the most valuable features of cloud computing, since the user can continuously monitor server uptime, capabilities and allotted network storage, as well as the computing capabilities themselves. Next is easy maintenance: the servers are easily maintained and downtime is very low; in some cases there is no downtime at all. Availability: the capabilities of the cloud can be modified according to usage and can be extended considerably; the platform analyzes storage usage and allows the user to buy extra cloud storage, if needed, for a very small amount. Automatic system: cloud computing automatically analyzes the data needed and maintains a metering capability at some level of service; usage can be monitored, controlled and reported, providing transparency for the host as well as the customer. Cloud computing is economical: it is a one-time investment, as the company (host) buys the storage, and small parts of it can be provided to many companies, which saves the host from monthly or yearly costs. Pay as you go: in cloud computing, the user has to pay only for the service or the space they have utilized; there are no unknown or additional charges. The service is economical, and most of the time some space is allotted free of charge. These are some of the major characteristics of cloud computing.

Cloud computing is based on the service model. The service models are characterized into three basic models:
1) Software-as-a-Service (SaaS)
2) Platform-as-a-Service (PaaS)
3) Infrastructure-as-a-Service (IaaS)
1) Software-as-a-Service (SaaS) is known as 'on-demand software'. It is a software distribution model in which the applications are hosted by a cloud service provider and made available to customers over the internet. In SaaS, associated data and software are hosted centrally on the cloud server. Companies like Google and Microsoft provide their applications as a service to end users.
2) Platform-as-a-Service (PaaS) is a programming platform for developers, created for programmers to build, test, run and manage applications.
3) Infrastructure-as-a-Service (IaaS) is a way to deliver cloud computing infrastructure such as servers, storage, network and operating systems. Customers can access these resources over the internet as an on-demand service. There are many advantages of cloud computing.
● Cost savings: Cost saving is the biggest benefit of cloud computing. It helps you save substantial capital costs as it does not need any physical hardware investments.
● High speed: Cloud computing allows you to deploy your service quickly in a few clicks.
● Back-up and restore data: Once the data is stored in the cloud, it is easier to get back-up and recovery of it, which is otherwise a very time-consuming process on-premise.
● Unlimited storage capacity: The cloud offers almost limitless storage capacity.
● Mobility: Employees who are working on the premises or at remote locations can easily access all the cloud services. All they need is internet connectivity.
However, cloud computing also has drawbacks:
● Technical issues: Cloud technology is always prone to outages and other technical issues.
● Security threat in the cloud: Another drawback while working with cloud computing services is security risk; hackers might access the information.
● Internet connectivity: Good internet connectivity is a must in cloud computing. You cannot access the cloud without an internet connection, and there is no other way to gather data from the cloud.
● Lack of support: Cloud computing companies can fail to provide proper support to customers, expecting their users to depend upon FAQs or online help, which may be tedious for non-technical persons.
Despite all the pros and cons, we cannot deny the fact that cloud computing is the fastest-growing part of network-based computing. It offers a great advantage to customers of all sizes: simple users, developers, enterprises and all sorts of organizations. So this technology is here to stay for a long time.

C-28

SOFT COMPUTING & INTELLIGENT SYSTEMS



S. Divya, G. Priyadharshini
U.G Students, B.Sc Computer Science (Shift II), Alpha Arts & Science College, Porur,
Chennai.
Corresponding author:divyasrini220@gmail.com
ABSTRACT
Soft computing is a major motivation behind the concept of conceptual intelligence in machines. Soft computing is tolerant of imprecision, uncertainty and approximation, which differentiates it from hard computing.
As systems based on soft computing and robotic projects deal with the development of approximate system models to find optimal solutions to real-world problems, soft computing is considered one of the emerging areas of research in all fields of engineering and science. Due to the rapid development in computer mechanization and the technological domain, extensive research has also been carried out in the field of robotics for the development of robots in various applications such as industry, medicine, rehabilitation, agriculture and the military, to assist people. In this paper, soft computing techniques and their applications in robotics are illustrated.
KEYWORDS:Soft computing, Robotics, Swarm intelligence, Neural network

C-29
SECURE INTERNET SERVICES IN ONLINE BANKING

M.Vimal Raj, C.Prabakaran, P.Akash, Praveen.N


U.G Students, B.Sc Computer Science (Shift II), Alpha Arts & Science College, Porur,
Chennai.
Corresponding author:virnaloffi1@gmail.com

ABSTRACT
An online banking system plays an important role in core banking compared to the traditional way customers access banking services, and security is a serious issue to be taken into consideration. Online banking is subject to various types of attacks, such as system attacks and denial-of-service (DoS) attacks. Our aim is to provide a secure internet service to the user by providing details of the attacker to the bank and the user. The account details, along with the IMEI number of the user, are registered in the bank server; whenever the user starts a transaction, the bank server uses a comparison algorithm to authenticate the user. Security of web-based applications is a serious concern due to the recent increase in the frequency and complexity of cyber-attacks. Biometric techniques offer an emerging solution for secure and trusted authentication, where username and password are replaced by biometric data.
KEYWORDS:Online Banking, Cyber attacks.
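The comparison step mentioned in the abstract is not specified there; the following hypothetical sketch checks a salted hash of the registered IMEI against the IMEI presented at transaction time (all names and parameters are assumptions):

# Hypothetical sketch only: the abstract mentions a "comparison algorithm" that
# checks the registered IMEI at transaction time but does not specify it. Here a
# salted hash of the registered IMEI is compared with the IMEI presented by the
# device; all names and the salt handling are illustrative assumptions.
import hashlib, hmac, secrets

def register_device(imei: str) -> dict:
    """Store a salted hash of the IMEI instead of the raw value."""
    salt = secrets.token_bytes(16)
    digest = hashlib.sha256(salt + imei.encode()).hexdigest()
    return {"salt": salt, "imei_hash": digest}

def authenticate(record: dict, presented_imei: str) -> bool:
    """Recompute the hash for the presented IMEI and compare in constant time."""
    digest = hashlib.sha256(record["salt"] + presented_imei.encode()).hexdigest()
    return hmac.compare_digest(digest, record["imei_hash"])

if __name__ == "__main__":
    record = register_device("356938035643809")       # example IMEI
    print(authenticate(record, "356938035643809"))    # True: transaction allowed
    print(authenticate(record, "123456789012345"))    # False: flag as suspicious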

C-30
SURVEY ON THREATS ATTACKS AND IMPLEMENTATION OF SECURITY IN
CLOUD INFRASTRUCTURE

D. Kowsalya, M. Suba sree


U.G Students, B.Sc Computer Science (Shift II), Alpha Arts & Science College, Porur,
Chennai.
Corresponding author:subasreemadhavan02@gmail.com
ABSTRACT
Cloud computing is the buzzword of today. Many organizations are using cloud infrastructure because it allows good utilization of resources at lower cost. Cloud data centers keep this enormous information, making it reliable and accessible. The emergence of cloud computing helps reduce implementation cost and improves the utilization of resources, which attracts many organizations to redesign their infrastructure to make it well suited to the cloud environment. Attaining security in the cloud environment is a big challenge because many of the existing solutions are susceptible to attack. Security encompasses privacy, integrity and availability. This paper summarizes several methods which are being followed for security in cloud infrastructure, including public key infrastructure, encryption and Shamir's secret sharing.

Keywords:Cloud Computing, IDS, Attacks, Firewall, Encryption
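Since the abstract names Shamir's secret sharing, a minimal sketch of a (t, n) threshold scheme over a prime field follows; the prime modulus and the parameters are illustrative assumptions.

# Minimal sketch of Shamir's (t, n) secret sharing over a prime field, as named in
# the abstract. The prime modulus and parameters below are illustrative assumptions.
import random

P = 2**127 - 1          # a Mersenne prime large enough for small secrets

def split(secret: int, t: int, n: int):
    """Create n shares; any t of them reconstruct the secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):           # evaluate the random polynomial at x
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

if __name__ == "__main__":
    shares = split(secret=123456789, t=3, n=5)
    print(reconstruct(shares[:3]))    # any 3 of the 5 shares recover 123456789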

C-31

CLOUD COMPUTING

P. Karthick1,*, P. Jeeva2
1,2III Year, Department of Electronics and Communication Science, Alpha Arts and Science College, Porur, Chennai, TN
*Corresponding author: ajukarthu@gmail.com

ABSTRACT
This paper is about cloud computing, a rapidly developing and outstandingly promising technology that has attracted the attention of the computer society of the whole world. Cloud computing is network-based computing whereby shared data, resources and software are provided to terminals and portable devices on demand, like the energy grid. Cloud computing has come of age since 2006. It provides on-request services to its clients, and data storage is one of the key service areas it delivers. Companies do not need to buy a set of software or software licences for every server, and servers and digital storage devices take up space.

1.1 INTRODUCTION
Cloud computing gives us a means to access applications and utilities over the Internet. It allows us to create, configure and customize applications online. Cloud computing is an emerging computing technology that provides shared data communication and stored databases.

1.2 CLOUD
The word cloud refers to a network or the Internet. In other words, we can say that the cloud is something which exists at a remote location. The cloud can deliver services over a wide network area, i.e., on public or private networks such as WAN, LAN or VPN. Applications such as web conferencing, e-mail and customer relationship management (CRM) all run in the cloud.

1.3 CLOUD COMPUTING


Cloud computing refers to deploying, constructing and retrieving applications online. It offers online data storage, infrastructure and applications. Cloud computing is a mixture of software- and hardware-based computing resources delivered as a network service.
Keywords: Cloud Computing, Cloud, Cloud database.

1.4 Basic concepts of cloud computing

There are certain services and models working behind the scenes that make cloud computing practical and accessible to end users. The following are the working models for cloud computing:
1. Deployment Models
2. Service Models

1.4.1 Deployment Models:

Deployment models define the type of access to the cloud. A cloud can have any of four types of access: public, private, hybrid and community.
PUBLIC CLOUD:
The public cloud allows systems and services to be easily accessible to the general public. A public cloud may be less secure because of its openness, e.g., e-mail.
PRIVATE CLOUD:
The private cloud allows systems and services to be accessible within an organization. It offers increased privacy because of its private environment.
COMMUNITY CLOUD:
The community cloud allows systems and services to be accessible to a group of organizations.
HYBRID CLOUD:
The hybrid cloud is a mixture of public and private clouds, so critical activities are performed using the private cloud while non-critical activities are performed using the public cloud.
Keywords: Models of cloud, Deployment models, Access of clouds.
1.4.1 Service Models:
Service Models are the architectural models on which the Cloud Computing is created. These
can be considered into three basic service models as listed below
NCRTABAS-2020 135

1. Infrastructure as a Service (IaaS)


2. Platform as a Service (PaaS)
3. Software as a Service (SaaS)
Keywords: Service models, IaaS, PaaS, SaaS.
Infrastructure as a Service (IaaS):
IaaS is the delivery of technology infrastructure as an on-demand, scalable service. IaaS
offers access to fundamental resources such as physical machines, virtual machines and
virtual storage.
• Usually billed based on usage
• Usually a multi-tenant virtualized environment
Platform as a Service (PaaS):
PaaS provides the runtime environment for applications, along with development and
deployment tools. It provides all of the facilities required to support the complete life
cycle of building and delivering web applications and services entirely over the Internet.
• Multi-tenant environment
• Highly scalable multi-tier architecture
Software as a Service (SaaS):
The SaaS model allows end users to use software applications as a service. SaaS is a
software delivery model that provides licensed, multi-tenant access to software and its
functions remotely as a web-based service.
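To make the IaaS idea more concrete, the following is a minimal sketch (not taken from this paper) of provisioning a single virtual machine programmatically with the AWS boto3 SDK in Python. The region, AMI ID and instance type are placeholder assumptions, and configured AWS credentials are assumed to be available on the machine running the script.

import boto3

# Create an EC2 client; the region is an assumed placeholder and
# AWS credentials are expected to be configured in the environment.
ec2 = boto3.client("ec2", region_name="us-east-1")

# Request one small virtual machine (IaaS: raw compute on demand).
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID, not a real image
    InstanceType="t2.micro",          # placeholder instance size
    MinCount=1,
    MaxCount=1,
)

print("Launched instance:", response["Instances"][0]["InstanceId"])

Such an instance is billed only while it exists, which is the usage-based billing property noted above, and it can be released again with a single terminate_instances call.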

References:
[1] https://azure.microsoft.com/en-in/overview/what-is-cloud-computing/
[2] https://www.sciencedirect.com/topics/computer-science/cloud-deployment-model
[3] https://www.ibm.com/in-en/cloud/learn/iaas-paas-saas

C-32
AN OVERVIEW OF IoT APPLICATIONS

Mrs. S. Vijayalakshmi,1 Ms. R. Rama,2 Ms. S. Karthika3

1Assistant Professor, JBAS College for Women, Chennai, TN, India
2,3II Year, JBAS College for Women, Chennai, TN, India
Corresponding author: vijivarshini2006@gmail.com

ABSTRACT

This paper introduces the basic concept of the Internet of Things (IoT) and gives a glimpse
of its applications in everyday life.

INTERNET OF THINGS (IOT)


IoT is a platform where embedded devices are connected to the Internet so that they can
collect and exchange data with one another. It enables devices to interact,
collaborate and learn from each other's experiences, much as humans do.

IoT applications promise to bring immense value into our lives. With newer wireless
networks, superior sensors and revolutionary computing capabilities, the Internet of Things
could be the next frontier in the race for a share of the consumer's wallet. IoT applications
are expected to equip billions of everyday objects with connectivity and intelligence, and
IoT is already being deployed extensively in various domains, namely:
● Wearables
● Smart home applications
● Health care
● Smart cities
● Agriculture
● Industrial Automation
1. Wearables
Wearable IoT devices, such as smart watches and fitness trackers, are among the most
conspicuous examples of IoT. Wearables are used primarily for specific functions such as
checking the time and tracking exercise.
2. Smart Home Applications
When we mention IoT applications, smart homes are probably the first thing that comes to
mind. A well-known example is Jarvis, the AI home-automation system built by Mark
Zuckerberg. There is also Allen Pan's home-automation system, in which functions within the
house are actuated by a string of musical notes.
3. Health Care
IoT applications can turn reactive, medical-based systems into proactive, wellness-based
systems. The resources that current medical research uses lack critical real-world
information; research mostly relies on leftover data, controlled environments and volunteers
for clinical testing. IoT opens the way to a sea of valuable data through analysis,
real-time field data and testing. The Internet of Things also improves existing devices in
power, precision and availability.
4. Smart Cities
The smart city concept is very specific to each city. The problems faced in Mumbai are very
different from those in Delhi, and the problems in Hong Kong are different from those in New
York. Even global issues, such as finite clean drinking water, deteriorating air quality and
increasing urban density, occur at different intensities across cities and hence affect each
city differently. Governments and engineers can use IoT to analyse the often-complex factors
of city planning specific to each city. IoT applications can aid in areas such as water
management, waste control and emergencies.
5. Agriculture
Statistics estimate that the ever-growing world population will reach nearly 10 billion by
the year 2050. There are numerous possibilities for IoT in this field. One of them is the
smart greenhouse. Greenhouse farming enhances the yield of crops by controlling
environmental parameters; however, manual handling leads to production loss, energy loss and
labour cost, making the method less effective. A greenhouse with embedded devices is not
only easier to monitor but also enables us to regulate the climate inside it.
Sensors measure different parameters according to the plant's requirements and send them to
the cloud, which processes the data and applies a control action.
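As a rough illustration of that telemetry path (not part of the paper itself), the sketch below shows a greenhouse node publishing sensor readings to a cloud MQTT broker using the paho-mqtt library with its 1.x-style constructor. The broker address, topic name and the randomly generated readings are all placeholder assumptions standing in for real hardware.

import json
import random
import time

import paho.mqtt.client as mqtt  # assumes the paho-mqtt package is installed

BROKER = "broker.example.com"            # placeholder broker address
TOPIC = "greenhouse/zone1/telemetry"     # placeholder topic name

client = mqtt.Client()                   # paho-mqtt 1.x style constructor
client.connect(BROKER, 1883, keepalive=60)
client.loop_start()

for _ in range(3):                       # a few sample readings for illustration
    reading = {                          # random stand-ins for real sensor values
        "temperature_c": round(random.uniform(18, 32), 1),
        "humidity_pct": round(random.uniform(40, 90), 1),
        "soil_moisture": round(random.uniform(0.2, 0.8), 2),
    }
    client.publish(TOPIC, json.dumps(reading))
    time.sleep(1)

client.loop_stop()
client.disconnect()

On the cloud side, a rule engine would compare these readings with the plant's set points and send back a control action, for example opening a vent or starting irrigation.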
6. Industrial Automation
This is one of the fields in which both faster development and the quality of products are
critical factors for a higher return on investment. With IoT applications, one could even
re-engineer products and their packaging to deliver better performance in both cost and
customer experience.

LATEST APPLICATION
Driverless Cars
One of the most futuristic applications of IoT is the autonomous car. These cars, which seem
like a product from the near future, actually exist today, although mostly in development or
prototype stages. They have no drivers and are intelligent enough to take you to your
destination on their own. Equipped with numerous devices such as sensors, gyroscopes, cloud
architecture and an Internet connection, these cars sense huge amounts of data on traffic,
pedestrians and road conditions such as speed breakers, potholes, corners and sharp turns,
and process them at rapid speeds. This information is passed to the controller, which takes
the corresponding driving decisions.
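The sense-process-act loop just described can be pictured with a small, purely illustrative sketch; every function name and threshold below is a hypothetical stand-in and not part of any real autonomous-driving stack.

import time

def read_sensors():
    # Stand-in for lidar, camera and gyroscope readings.
    return {"obstacle_distance_m": 12.0, "speed_kmph": 40.0, "pothole_ahead": False}

def decide(state):
    # Trivial rule-based controller, for illustration only.
    if state["obstacle_distance_m"] < 5.0 or state["pothole_ahead"]:
        return "brake"
    if state["speed_kmph"] < 50.0:
        return "accelerate"
    return "maintain"

def act(command):
    # A real vehicle would drive actuators here; we only print the command.
    print(f"Controller command: {command}")

if __name__ == "__main__":
    for _ in range(3):   # a few iterations of the sense-decide-act loop
        act(decide(read_sensors()))
        time.sleep(0.1)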

CONCLUSION
The Internet of Things (IoT) amalgamates hardware, software and the web to make a better
world.
REFERENCES
[1] Arshdeep Bahga and Vijay Madisetti, Internet of Things (A Hands-on Approach), Orient
Blackswan Private Limited, New Delhi.
[2] www.iotforall.com

C-33
IMPACT OF MICROCANTILEVERS BASED MICROSENSOR FOR THE
DETECTION OF TOXIC GAS MOLECULES IN THE ATMOSPHERE

J. Jayachandiran, N. Mahalakshmi and D. Nedumaran*


Central Instrumentation and Service Laboratory, University of Madras, Guindy
Campus, Chennai 600 025, Tamil Nadu, India.
*Corresponding author : dnmaran@gmail.com

In recent decades, sensor technologies have been developing at a fast rate in order to
make reliable, compact, versatile and economical sensors. Atmospheric air contains numerous
chemical species, such as ozone (O3), carbon monoxide (CO), carbon dioxide (CO2), hydrogen
sulfide (H2S), ammonia (NH3), arsine (AsH3) and phosphine (PH3), which are produced both
naturally and artificially and are harmful to humans and other living things. Therefore, it
is necessary to develop reliable gas sensors to sense the above-mentioned harmful chemical
species in the environment.

A gas sensor should possess two basic functions, i.e., a function to recognize a
particular gas species (receptor function) and another to transduce the gas recognition into
a sensing signal (transducer function). In many cases, the gas recognition is carried out
through gas-solid interactions such as adsorption, chemical reactions and electrochemical
reactions. On the other hand, the mode of transduction is heavily dependent on the materials
used for the gas recognition. The receptor and transducer functions are not always separated
so explicitly in sensors such as those using semiconducting oxides or solid electrolytes.
Nevertheless, the two functions are governed by different factors, so it is possible to
modify or improve each function separately. This provides the basis for designing gas
sensors: good sensing characteristics are obtained only when both functions are promoted
sufficiently. Promoting the receptor function is especially important for increasing the
selectivity to a particular gas, while promoting the transducer function is important for
increasing the sensitivity.
Similarly, cantilever-based sensor designs are gaining popularity in the field of MEMS
due to their small size, simple design and economical fabrication. In a cantilever-based gas
sensor, the active sensing material is deposited on the cantilever surface, where it
interacts with the targeted gas molecules and changes the amplitude of vibration of the
cantilever. The change in vibration depends mostly on the adsorption/absorption of the gas
molecules on the cantilever surface, and the resulting change in the resonance frequency of
the cantilever structure gives the gas concentration in the surrounding atmosphere. In
another configuration, a piezoelectric material is placed at the fixed end of the cantilever
structure; the piezoelectric material then produces charge carriers in response to the
stress and strain, so that charge formation is induced by the adsorption/absorption of gas
molecules.
Therefore, developing a microcantilever-based microsensor model is a reliable way to
trace toxic gas molecules in the atmosphere. The present work focuses on an energy harvester
as well as a targeted gas-molecule sensor. To meet the need for various gas sensors, the
proposed model allows the sensing layer on the surface of the sensor to be changed according
to the type of (targeted) gas molecules to be detected.
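As a rough numerical illustration of the resonance-frequency transduction described above (not part of the authors' reported work), the lumped harmonic-oscillator model f = (1/2π)·sqrt(k/m_eff) relates the cantilever's resonance frequency to its effective mass, so a downward frequency shift can be converted into an estimate of the adsorbed mass. The spring constant and frequencies below are invented example values, not measured data.

import math

def adsorbed_mass(k_spring, f_before, f_after):
    # Effective mass from f = (1/2*pi) * sqrt(k/m)  =>  m = k / (2*pi*f)**2
    # k_spring in N/m, frequencies in Hz, result in kg.
    m_before = k_spring / (2 * math.pi * f_before) ** 2
    m_after = k_spring / (2 * math.pi * f_after) ** 2
    return m_after - m_before

# Invented example: a 1 N/m cantilever whose resonance drops from
# 75.000 kHz to 74.950 kHz after gas exposure (roughly 6 pg adsorbed).
dm = adsorbed_mass(1.0, 75.000e3, 74.950e3)
print(f"Estimated adsorbed mass: {dm * 1e15:.1f} pg")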

AUTHOR INDEX

Abiya Jose 58 Habibunnisha N 105


Acchutharaman K R 33 Hannah Ruben 38
Agnes Robeena Quincy 29 Harinesenthil 124
Akash P 132 Heerah D 106,118
Akshhaya S Hemanth D S 91
Ancy 100 Hemapriya K 99
Andrews Juben Ratchanyaraj 100
Aneeshkumar A.S. 97 Indhu K 105
Angeline T 83 Ishwarya M 100
Anoop M 107
Anubama R 38 Jasmin H 65,79,126
Aparyay Kumar 119 Jayachandiran J 137
Ayesha H B 65 Jayanthi Margret D 10
Jayanthi S 60
Bagavathi Lakshmi R 111 Jeeva P 133
Bala Samuvel J 35 Jeevitha M 81
Benita Merlin B 54 Jemimah A 127
Beula D 73 John Paul C 119
Biruntha K 32
Bright A 28 Kamalesh T 26,35
Kamarajan M 97
Kannadhasan S 27
Chandhru M 17
Chandrakala N 85 Karthick M 91
Christina Nancy A 29 Karthick P 133
Karthika S 135
Karuppasamy P 26,35
Daniel Dias 68
Devipriya E 98 Keerthana M 126
Dharmarajan K 79 Keshav G 88
Dharshinipriya N 100 Kiran Kumari C 101
Divya K 85 Kiruthika R 115
Divya S 90,131 Kowsalya D 132
Divyanshi Dubey 45 Kutti Rani S 17

Ekambram K 52 Lakshmi S 101


Eunice Jerusha S 24
Mahalakshmi N 137
Felcia Bel H.J 96 Maharasan.K.S 124
Manikandan B 34
Gayathri T.V 73 Manimegalai E 28
Geetha S K 12 Manju D 100
Gomathi M 107 Manoj Kumar U 50
Gopinath A 64 Mohamed Sathik M 21
Gowthaman P 69 Murali K R 34
Muthu M 31
Muthu Senthil Pandian 16,18,26,27,29,31,33,35,36 Shantha S 89,91
Muthulakshmi N 85 Shreya Hembrom 48
Shrinitha S 98
Nanthini S 107
Nedumaran D 105,137 Shyam Sundar R 104
Niranjana E 101 Simon Ebinezer J 91
Nivedha S 41 Sivasubramani V 16
Srinivasan M 33
Parthasarathy S 111 Sruthivennela P 130
Pounraj P 31 Suba Sree M 132
Prabakaran C 97 Subashini R 105
Prabakaran M P 132 Sudha P 128
Prasanth P V 93 Suganthi D 117
Praveen N 104,132 Sunil Verma 26,35
Praveenkumar R 97 Surya C 107
Pravin S 92 Sumathi A 107
Prema V 119 Suvalakshmi V 98
Priyadharshini G 131 Syedabuthahir J 91,104
Pushpamanjari V 19 Synthiya Judith Gnanaselvi E 21

Raj Mohan R 68 Tamilarasi S 90


Raja A 31 Thirumala Rao G 30
Rajani B 50 Thiruvengadam R 98
Rajarajan G 60 Thiyagarajan S 69
Rajeshwer U 17 Tirumala Rao G 20
Rama R 135 Tony Santhosh G 100
Ramachandran K 31
Ramanunni O R 68 Uma Maheswari A 55
Ramasamy P 16,18,27,29,31,33,35,36 Usha G 32
Ravikumar R.V.S.S.N. 19,20,30 Uthrakumar R 10
Rengarajan A 128
Renuga Devi T S 41 Vajubunnisa Begum R 65,79,90,126
Richa Suman Sharma 99 Vasanth S 93
Rita John 17,34,37,38,54 Vasimalai N 17
Vijayalakshmi S 135
Sailaja B 19,20,30 Vimal Raj M 132
Sandhiya Malathi M 17 Vinitha G 61
Santhosh N 29,33,36 Vinolyn Vijaykumar 115
Sarala S 12 Vinoth Kumar S 10
Saranya A 125 Vishali D 37
Saravanasundar V 60 Vytheeswaran A S 52
Sarath Santhosh 54
Saseedharan S 100 William Carry M 18
Savithri V 89,91
Selva Kumar M 99 Yuvasri G 43
Selvarasi A 81
Selvi 83
Shahilkirupavathy 24
Shantha Kumar T 79
With Best Compliments

TTK HEALTHCARE LIMITED

6, Cathedral Road
Chennai 600 086
Phone : 28116106
Fax : 044-28116387
With Best Compliments

KAMAKSHI INTERNATIONAL TRADE LLP


No. 127, Adinath Trade Complex, Adinath Nagar,
200 Feet Inner Ring Road, Madhavaram, Chennai,
Tamil Nadu 600 060

Phone: +91-44-6888 8799, E-mail: kamakshitrade@gmail.com

With Best Compliments

No 46/7C, NH5, Sricity, Tada,


Andhra Pradesh 524401
With Best Compliments

A ONE NETCAFE & XEROX


No. 15, Vembuli Amman Kovil Street, West K.K. Nagar,
Chennai – 600078
Phone : 044 – 4266 3337
Cell : + 91 – 88704 06360
With Best Compliments
