
Mini Project Report

on

RFID-based Elevator Access Control System

Submitted by

Akansha Yadav, (01)


Aniket Masram, (34)
Mayank Jain, (44)
Anushka Sahu, (72)

Guided by
Prof. Sushil Chavan (Guide)

Prof. Rita Bhawalkar (Co-Guide)

Department of Information Technology


Yeshwantrao Chavan College of Engineering, Nagpur
(An Autonomous Institute, Affiliated to RTM Nagpur University, Nagpur.)
Session: 2022-2023
Yeshwantrao Chavan College of Engineering, Nagpur
Department of Information Technology

Session: 2022-23

Certificate
This is to certify that Ms. Akansha Yadav, Mr. Aniket Masram, Mr. Mayank Jain, and
Ms. Anushka Sahu have completed a Mini Project course titled “RFID-based Elevator
Access Control” towards the partial fulfillment of the requirements for the seventh
semester of the B.E. in Information Technology.

Submitted by

Akansha Yadav
Aniket Masram
Mayank Jain
Anushka Sahu

Prof. Rita Bhawalkar Prof. Sushil Chavan

Project Co-Guide Project Guide

Asst. Prof. Priyanka G. Jaiswal Dr. R. C. Dharmik

Project Coordinator (IT) HOD (IT)


ACKNOWLEDGEMENT

We would like to thank our guide Prof. Sushil Chavan and co-guide Prof. Rita Bhawalkar for their thorough
guidance on the project. We are extremely grateful and indebted to them for their expert, sincere, and valuable
guidance and encouragement, which was of immense help to us.

We would like to express our sincere gratitude to Dr. R. C. Dharmik, Head, Department of Information
Technology, for his constant encouragement towards the successful completion of our work.

We wish to express our sincere thanks to Dr. U. P. Waghe, the Principal of our college, for providing us with
all the necessary facilities and infrastructure, without which we would not have been able to complete our
project successfully.

We would also like to again thank our Project Coordinator Asst. Prof. Priyanka G. Jaiswal for her continuous
guidance owing to which the project could take shape.

Last, but not the least, we would like to thank all the faculty members and non-teaching staff members who
helped us despite their busy schedule.
ABSTRACT

This work focuses on building an Elevator Access Control System for use in the department within
the college premises. The objective of the system is to prohibit the entry of unauthorized users into
the lift by making use of RFID technology and NodeMCU. RFID is a communication technology
widely associated with electronic tags: radio signals are used to identify a particular target, and the
related data can be read and written without direct contact. The aim of the system is to increase
security in the department by prohibiting access by unauthorized users; the data of entered users is
recorded in a database along with their access time for future use. The measured system response
time is 1.20 seconds per person, which is faster than other available access control systems.

Keywords— RFID, NodeMCU, Security


TABLE OF CONTENTS

Title Page No.

1. Introduction 1

2. Aim and Objectives 5

 2.1 Aim 5

 2.2 Objectives 5

3. Literature Review 6

4. Dataset Description 8

 4.1 Color Image Dataset 8

 4.2 OPG (Orthopantomography) Dataset 9

 4.3 Analysis of Dataset 9

5. Proposed Methodology 11

 5.1 Steps of Pre-Processing 11

 5.2 Model Construction 12

 5.3 Transfer Learning 13

 5.4 Benefits of Transfer Learning 15

 5.5 Fine Tuning 15

6. Results and Discussion 17

 6.1 VGG16 and Datasets 17

 6.2 MobileNet and Datasets 18

 6.3 ResNet50 and Datasets 18

7. Analysis of Result 19

8. Conclusion and Future Scope 20

 8.1 Conclusion 20

 8.2 Future Scope 20

9. References 21

Social Utility 23
LIST OF FIGURES

Figure Number Figure Name Page No

1 Caries (Color image) 8

2 Non-caries (Color image) 8

3 Caries (Panoramic) 9

4 Non-caries (Panoramic) 9

5 Training dataset (Color Images) 10

6 Testing dataset (Color Images) 10

7 Transfer Learning 14

8 Fine Tuning 16
LIST OF TABLES

Table Number Table Name Page No

I Model Summary 13

II VGG16 17

III MobileNet 18

IV ResNet50 18

1. INTRODUCTION

Lifts are available everywhere: government as well as private offices, malls,
residential buildings, hospitals, educational institutions, etc. They are easily
accessible to everyone, from a building's residents or occupants to regular visitors to
complete strangers who enter the premises. Of these groups, there is a high chance
that the last one can prove to be a security threat, as its members are unknown. To
address this problem at a basic level, elevator access control is required.
Elevator Access Control (EAC) is a recent addition to modern security technology,
developed to limit the entry or access of individuals. In simple terms, elevator access
control works like a virtual liftman: it manages access for the lift's passengers, makes
sure that strangers do not board the lift without authenticating their identity, and keeps
a record of who used the lift. Furthermore, if adopted properly, elevator access control
helps reduce the cost of manpower, offering a solution that scales to meet future needs.
RFID is an extremely popular solution for access control systems. Radio Frequency
Identification (RFID) utilizes electromagnetic fields to automatically identify and track
objects. Most RFID systems include a microchip with an antenna (the tag), a reader
with an antenna, and an access control server. An RFID system cross-references the
data stored on the tag with its own database; if they match, access is granted.
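The cross-referencing step described above can be sketched in a few lines of Python. This is an illustration only: the tag UIDs, names, and the in-memory dictionary standing in for the access control server's database are all hypothetical.

```python
from datetime import datetime

# Hypothetical stand-in for the access control server's database,
# mapping tag UIDs to authorized users. Values are illustrative.
AUTHORIZED_TAGS = {
    "04A1B2C3": "Akansha Yadav",
    "04D4E5F6": "Mayank Jain",
}
access_log = []  # (uid, name, timestamp) records kept for future use


def check_access(uid: str) -> bool:
    """Grant access only if the scanned UID exists in the database."""
    name = AUTHORIZED_TAGS.get(uid)
    if name is None:
        return False  # unknown tag: deny, do not operate the lift
    access_log.append((uid, name, datetime.now()))  # record access time
    return True


print(check_access("04A1B2C3"))  # known tag -> True
print(check_access("FFFFFFFF"))  # unknown tag -> False
```

In the actual system the lookup would run on the NodeMCU or the server, but the grant/deny-and-log logic is the same.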
RFID Frequencies
Frequency is the rate at which the radio waves used to communicate between system
elements oscillate. RFID technology operates at various frequency levels that determine
the solution's reading range; the lower the frequency, the shorter the reader's range.
Below are some of the most common frequency bands:
• Low Frequency (LF): 120–150 kHz
• High Frequency (HF): 13.56 MHz and higher
• Ultra-High Frequency (UHF): 860–980 MHz
• Microwave Frequency: 2.45 GHz and higher
Understanding the frequencies at which RFID systems operate will help you to choose
the right solution for your application. Radio waves operate differently at each of these
frequencies and there are pros and cons associated with using each frequency band.
An RFID access control system with a lower frequency can read better near liquid or
metal surfaces; however, it has a slower data read rate. Higher-frequency systems have
faster data transfer rates and longer read ranges but are more sensitive to radio wave
interference caused by liquids and metals in the environment.
That said, rapid technological advancements in recent years have made it possible
to use ultra-high-frequency RFID systems around liquids and metals.
RFID Tags
RFID tags consist of three key elements: a microchip, an antenna, and a substrate. The
microchip within the tag stores and processes information and modulates and
demodulates radio-frequency signals. The antenna enables the tag to receive and
transmit the signal. RFID tags can be differentiated by their power supply: there are
two main types of tags used in the access control industry, passive and active tags.

based on deep CNN architectures on dental x-ray images. Also, applications on dental
caries detection using deep CNN architectures are even more limited in quantity and
inadequate in this area.
This study presented the VGG-16 convolutional neural network to detect dental caries
by means of transfer learning. Although there are studies on dental caries in the
literature, there is no publicly available dataset. In this paper, a new dataset consisting
of 182 images (caries and non-caries) was presented.
The purpose of the present study was to determine the ability of deep convolutional
neural networks to assist dentists in the automatic diagnosis of dental caries based on
panoramic images.
Transfer learning is a machine learning technique where a model trained on one task is
re-purposed for a second, related task. Conventional machine learning and deep learning
algorithms have traditionally been designed to work in isolation: they are trained to
solve specific tasks, and the models have to be rebuilt from scratch once the
feature-space distribution changes. Transfer learning overcomes this isolated learning
paradigm by utilizing knowledge acquired for one task to solve related ones. It is
particularly popular in deep learning right now, since it allows deep neural networks to
be trained with a small amount of data. This is especially valuable in practice, as most
real-world situations do not provide millions of labelled data points for training
complicated models. In transfer learning, a machine uses the knowledge learned from a
prior task to improve predictions on a new, closely related task.
A human tooth is mainly made up of two parts: the crown, which is clinically visible,
and the root, which is not clinically visible but embedded in the jaw. The effect of
disease on a tooth can be identified by analysing X-ray images. In particular, three
diseases, namely dental caries, periapical infection, and periodontitis, were considered
for the classification task. Healthy teeth could be treated as another class, but this was
not done here because of data insufficiency. Dental caries is one of the most common
dental diseases worldwide; it is the medical terminology for the common dental cavity
or tooth decay. There can be different stages of dental caries, but the aim here is to
classify the disease, not the advancement of its stage.
This work presents a new dental caries classification model based on MI-DCNN. The
major contributions of the proposed model are:
• Data selection, data pre-processing, and augmentation were performed on the
dataset to enhance the images. Images were rotated by about 6 degrees up or down,
but not more, and their brightness was increased to highlight the caries area in
particular.
• Although there are studies on dental caries in the literature, there is no publicly
available dataset. In this study, a new dataset totalling 182 images, with 78 colour
images, was gathered from different dental clinics.
• The pre-trained VGG16 architecture, based on the transfer learning approach, was
adapted for dental caries classification. Instead of training a model from scratch and
spending time on training, the features of a pre-trained model were used to classify
the images.
• The constructed model achieved notable accuracy on both datasets using the
pre-trained VGG16 model. The basic idea was to apply different models to these two
datasets and find which one is best.
The basic purpose of this study is to build a system that helps the dentist classify any
image as caries or non-caries using deep convolutional layers.

In our approach, we used the above pre-trained models and measured accuracy on the
coloured dataset as well as the Orthopantomogram (OPG) image dataset. To our
knowledge, this is the first model designed to work on colour images as well as OPG
images.

During the diagnosis and treatment of oral diseases, dentists need to interpret
panoramic radiographs and record specific symptoms of diseased teeth in the medical
records. New dentists require extensive training and time to perform accurate X-ray
film interpretations. An X-ray analysis showed that more experienced dentists are
almost four times more likely to make a correct assessment of caries lesions than less
experienced dentists. Therefore, considerable attention has been given to interpreting
panoramic X-rays with dental caries automatically. In recent decades, scientists have
tried to deploy machine learning techniques to detect dental diseases. In the
conventional method, operators or experts perform lesion detection and evaluation on
radiographs manually and subjectively; this task is tedious when facing large amounts
of image data and may lead to misinterpretations. Previous efforts have successfully
applied convolutional neural network (CNN)-based deep learning models in computer
vision. Deep learning methods do not depend on well-designed manual features and
have high generalization capabilities. These models have achieved high accuracy and
sensitivity and represent the most advanced technology for a wide range of
applications. The increased interest in deep learning methods has also led to their
application in medical imaging interpretation and in diagnostic assistance systems, for
instance, Helicobacter pylori infection detection in gastrointestinal endoscopy, skin
cancer screening, and coronavirus disease 2019 (COVID-19) detection in computed
tomography images.
2. AIM AND OBJECTIVES

2.1 AIM

To perform comparative analysis of various pre-trained transfer learning models on
teeth datasets containing caries and non-caries images. Furthermore, to perform
comparative analysis based on the type of image dataset used, i.e., the colour image
dataset and the OPG (orthopantomography) dataset, and to compare and measure
accuracy between the periapical image dataset and the panoramic image dataset of
teeth.

2.2 OBJECTIVES

1. To get results faster, in order to save doctors' time in detecting caries.
2. To increase the efficiency of checking for caries.
3. To check for caries between teeth, where doctors are not able to see it manually.
4. To use advanced techniques that save human effort.
5. To allow the backend and code of this application to be reused for the diagnosis
of various other diseases.

3. LITERATURE REVIEW

This section summarizes past research papers related to the topic and evaluates the
available literature.

In the introduction, general studies on dental disease detection were mentioned. Few
studies on the detection of dental caries are available in the literature, and in these
studies, panoramic and periapical imaging systems have commonly been used. In one
such study, new views resulting from a few statistical feature extractions were
classified by a back-propagation neural network, and a success rate of 97.1% was
achieved in caries detection using 105 periapical dental X-ray images.

Andac Imak used a multi-input CNN ensemble (MI-DCNNE) method to detect dental
caries [1]. In this paper, a multi-layer deep CNN was used to increase the accuracy of
dental caries detection.
Hongbing Yu proposed a novel caries detection and assessment (UCDA) framework
[2] to achieve fully automated diagnosis of dental caries on children's first permanent
molars.
Tsung-Yi Chen proposed a paper for caries detection [3] that used methods to separate
a single tooth from a bitewing film and then used a CNN model to detect caries.
Lian Luya conducted a study on the detection of caries using deep learning models
[4], but it gave lower accuracy. The dentist and the neural network seemed to have
similar performance, though the neural network might have better sensitivity and
accuracy in classifying caries extensions in the outer dentin.
Sarena Talpur, in a systematic review [5], examined how deep learning has been
applied to images of teeth to diagnose dental caries of three types (proximal, occlusal,
and root caries).
In the study by Grace F. Olsen [6], image-processing methods are used: digital colour
images of teeth are taken as input, features are extracted, and detection is performed,
but the accuracy is only 85%.
Stefano Clanetti et al. [7] proposed a review to estimate the prevalence and severity of
two dental pathologies, caries and periodontal disease, in different classes of
socio-economically disadvantaged subjects, and to understand which of them are most
affected.
Marcus HT [8] provides an updated literature review of ECC; the etiology, clinical
features, caries prevalence in recent literature, consequences of caries infection, and
management of ECC are discussed. Tian hing and lifa Michel conducted a study based
on an ensemble approach, where the knowledge of the proposed model was taken from
models such as VGG16, ResNet50, and MobileNet to build a new model for the
classification of dental images as well as colour images; the accuracy of their model
was 75%, which is moderately good.
Sergeyi Lawaro and Sergey Brin conducted a study on the VGG16 convolutional
neural network in particular [9] to classify images. The accuracy of their model was
80%, because augmentation was absent; training was performed from the first layer
onwards, so the knowledge acquired by the model was its own, but that is not transfer
learning, it is conventional machine learning, where the model is trained from scratch.
In transfer learning, we basically transfer the knowledge of one system to another
system. This is possible through approaches such as fine tuning the layers. In most
cases, researchers use transfer learning by updating only the classifier, the last layer of
the model, which classifies the images or output, and freezing all the bottom layers,
which extract the features. When fine tuning is applied, the model is trained further
because the dataset at hand differs from the dataset on which the model was originally
trained; the earlier knowledge is still present in the model, and as it absorbs the
knowledge of the new dataset it becomes more accurate and can classify dental images
well. In the proposed MI-DCNN model [10], there is a comparison across datasets and
pre-trained models of accuracy and data loss. This study provides information on
which model gives the highest accuracy on which type of dataset. Basically, the model
with VGG16 provides the highest accuracy on colour images, but lower accuracy on
panoramic images.

4. DATASET DESCRIPTION
As the dataset is a valuable asset in deep learning, the whole work is designed around
training, testing, and validating the data; the dataset should be clean and accurate,
with similar characteristics, to support analysis. The dataset used in this study is of
two types: 1] colour (periapical) images and 2] OPG (orthopantomography, panoramic)
images, taken from Jaiswal Dental Laboratory. The colour image dataset contains 55
caries images and 19 non-caries images, i.e., 74 images in total. The OPG
(orthopantomography) dataset contains a total of 183 images, of which 71 are
non-caries and 112 are caries. All images were taken under the supervision of a
dentist. Figures 1 to 4 show sample images from both datasets.
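The stated class counts can be tallied to confirm the totals quoted above; a minimal sanity check:

```python
# Per-class image counts as stated in the dataset description.
color = {"caries": 55, "non_caries": 19}   # periapical (colour) set
opg = {"caries": 112, "non_caries": 71}    # panoramic (OPG) set

# The class counts should add up to the quoted totals of 74 and 183.
assert sum(color.values()) == 74
assert sum(opg.values()) == 183
print(sum(color.values()), sum(opg.values()))
```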

4.1 Color Image dataset:

Fig. 1: Image with Caries

Fig. 2: Image with Non-Caries

4.2 OPG (orthopantomography) dataset:

Fig. 3: Caries (Panoramic)

Fig. 4: Non-caries(panoramic)

4.3 ANALYSIS OF DATASET

Data can be visualized in the form of pie-charts and in the form of graphs. It will be
more useful to show the data summary to other researchers. Data visualization is
helpful to determine what % of data is taken for training and what % of data has been
taken for testing or validation purpose. The graphical representation of data is shown
in fig.6.

Fig.5: Training dataset (Color Images)

Fig.6: Testing dataset (Color Images)

5. PROPOSED METHODOLOGY

This study builds a new model from the knowledge layers of a pre-trained model, i.e.,
VGG16: the feature extraction layers at the bottom are frozen to retain the pre-trained
knowledge, while the classifier (predictor) layer at the top is trained. This research
provides a comparative analysis of results across datasets and pre-trained models; to
our knowledge, it is the first study to analyse these parameters together. Three
pre-trained models are used: 1] VGG16, 2] MobileNet, 3] ResNet50. The suffix
indicates the number of layers in the model, e.g., VGG16 has 16 layers, consisting of
convolution, pooling, hidden, dense, dropout, flatten layers, and so on. It is the
researcher's responsibility to decide which layers to add to the model. Basically, a new
empty model is created, the feature extraction layers are taken from the pre-trained
model, and training is performed on our dataset; this fusion of a classifier with the
extracted feature layers is MI-DCNN, i.e., the multi-input deep convolutional neural
network. The following subsections explain the proposed methodology in detail.

5.1 STEPS OF PRE-PROCESSING

In pre-processing of the data, data augmentation is performed on specific images.
Since the data consists of dental caries and non-caries images, it must be taken into
consideration that flipping and rotating images around the plane at arbitrary angles can
harm model accuracy; it is therefore recommended to restrict rotation to 6 degrees
above and below the plane.

Contrast adjustment increases the clarity of the image, and shearing also helps the data;
these are some of the pre-processing steps performed. For transfer learning or deep
learning, label data is conveniently kept in comma-separated values, i.e., in .csv
format, in tabular form. For augmentation, the albumentations library in Python is
helpful.
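The rotation limit described above can be expressed as a small parameter-sampling policy. This is a pure-Python sketch of the policy only, not the actual albumentations pipeline; the brightness and shear ranges are assumed for illustration, since the report fixes only the ±6 degree rotation bound.

```python
import random

# Rotation is clamped to +/- 6 degrees, per the pre-processing rule above.
MAX_ROTATION_DEG = 6.0


def sample_augmentation(rng: random.Random) -> dict:
    """Draw one set of augmentation parameters respecting the limits."""
    return {
        "rotation_deg": rng.uniform(-MAX_ROTATION_DEG, MAX_ROTATION_DEG),
        "brightness": rng.uniform(1.0, 1.3),  # assumed brightening range (>1 highlights the caries area)
        "shear": rng.uniform(-0.1, 0.1),      # assumed mild shear range
    }


rng = random.Random(0)
params = [sample_augmentation(rng) for _ in range(100)]
# Every sampled rotation stays within the +/- 6 degree bound.
assert all(abs(p["rotation_deg"]) <= MAX_ROTATION_DEG for p in params)
print(len(params), "parameter sets sampled within bounds")
```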

5.2 MODEL CONSTRUCTION

Transfer learning is the process of transferring the knowledge of a pre-trained model
into a new model. In this study, the VGG16, ResNet50, and MobileNet pre-trained
models have been used. VGG16 was used first to measure accuracy, and it achieved
the highest accuracy, 75%.

The feature extraction layers were frozen and the classifier (predictor) layer was
removed from the model; the feature extraction layers were then connected to the new
proposed model, training was performed on the colour images and the OPG images,
and the new model was ready for classification. The included layers are vgg16
(functional) with 14,714,688 parameters, flatten with 0 parameters, dense with
12,845,568 parameters, dropout with 0 parameters, dense_1 with 131,328 parameters,
dropout_1 with 0 parameters, and dense_2 with 514 parameters. Hence the total
parameter count of the proposed model is 27,692,098. The model summary is shown
below.

Table I: Model Summary

Total Parameters 27,692,098

Trainable Parameters 27,692,098

Non-trainable Parameters 0
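The counts in Table I can be cross-checked by arithmetic. Assuming the standard 224×224×3 VGG16 input, the flattened convolutional output has 7×7×512 = 25,088 features, and the quoted dense-layer counts are consistent with heads of 512, 256, and 2 units; note these unit sizes are inferred from the parameter counts, not stated in the report.

```python
# Verify the parameter counts quoted for the proposed model.
# A dense layer mapping n inputs to m units has n*m weights + m biases.
def dense_params(n_in: int, units: int) -> int:
    return n_in * units + units


vgg16_base = 14_714_688        # VGG16 convolutional base (no top)
flat = 7 * 7 * 512             # flattened conv output = 25,088 features

d0 = dense_params(flat, 512)   # matches the quoted "dense" count
d1 = dense_params(512, 256)    # matches the quoted "dense_1" count
d2 = dense_params(256, 2)      # matches the quoted "dense_2" count

total = vgg16_base + d0 + d1 + d2
print(d0, d1, d2, total)  # total agrees with the 27,692,098 in Table I
```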

5.3 TRANSFER LEARNING

Transfer learning sits between deep learning and machine learning: it takes the
important knowledge from a pre-trained model, transfers it intelligently to another
model, and trains that model on the new dataset. When the user has a small dataset,
the feature extraction layers of a pre-trained model can be reused for the user's
purpose.

E.g., if we have knowledge of riding a bicycle, we can use it for riding a motorbike,
because both have two wheels and balancing is somewhat similar. The restriction is
that a transfer learning model is best used when the available dataset is small and the
user does not have enough time.

We use three models in our project, which are as follows:

• VGG16

• MobileNet

• ResNet50

These models are widely used for transfer learning both because of their performance
and because they introduced specific architectural innovations, namely consistent and
repeating structures (VGG16), depthwise separable convolutions (MobileNet), and
residual modules (ResNet50).

Keras provides access to a number of top-performing pre-trained models that were
developed for image recognition tasks. They are available via the Applications API,
which includes functions to load a model with or without the pre-trained weights and
to prepare data in the way a given model expects, in terms of image size and
pixel-value scaling.

The first time a pre-trained model is loaded, Keras downloads the required model
weights, which may take some time depending on the speed of your internet
connection. Weights are stored in the ~/.keras/models/ directory and will be loaded
from this location the next time they are used.

Fig.7: Transfer Learning

5.4 BENEFITS OF TRANSFER LEARNING

Transfer learning is used to speed up training and can increase the accuracy of image
classification. It also supports fine tuning when the dataset is large; it can save us from
training the model from scratch, is easy to implement, and saves time.

Transfer learning builds on open-source models you can leverage in different fields.
Instead of creating new source models, these models can be more reliable (in terms of
model architecture), help you save time when building your target model, and prevent
you from facing new problems.

Although transfer learning improves the performance of machine learning models, it
might not give the desired impact, especially on tasks with larger datasets. Traditional
learning starts with randomized weights and tunes them until they converge; transfer
learning begins with a pre-trained model, but larger datasets also lead to more
iterations, making the initial weights less important.

5.5 FINE TUNING

In transfer learning, fine tuning is the process of training the deep layers (such as
vgg16, flatten, dense, dropout, dense_1, dropout_1, dense_2) from the start. The
difference is that we are not building a deep learning model where the network is
trained from scratch: here we have the knowledge of the pre-trained model, and on top
of it we give extra knowledge and information to the new convolutional neural
network model for more accuracy. This is usually done when we have a large dataset
that is somewhat different from the dataset on which the pre-trained models were
trained.

This is the overall introduction to fine tuning in transfer learning. If fine tuning had
been applied to this model, all layers would have had to be trained from the start,
which would definitely take a lot of time on the new dataset. The overall process is
illustrated in figure 8 below.

Fig.8: Fine Tuning
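The freezing policy contrasted above (plain transfer learning versus fine tuning) can be sketched without any deep learning framework, using a plain list of layer records with the names from the model summary. This is an illustration of the policy only, not the actual Keras code.

```python
# Layer records standing in for a real model; names mirror Section 5.2.
layers = [
    {"name": "vgg16", "trainable": True},
    {"name": "flatten", "trainable": True},
    {"name": "dense", "trainable": True},
    {"name": "dropout", "trainable": True},
    {"name": "dense_1", "trainable": True},
    {"name": "dropout_1", "trainable": True},
    {"name": "dense_2", "trainable": True},
]


def freeze_feature_extractor(model, classifier_names):
    """Plain transfer learning: freeze everything except the classifier
    head. Fine tuning would instead leave more (or all) layers trainable."""
    for layer in model:
        layer["trainable"] = layer["name"] in classifier_names


freeze_feature_extractor(layers, {"dense", "dense_1", "dense_2"})
trainable = [l["name"] for l in layers if l["trainable"]]
print(trainable)  # only the classifier head remains trainable
```

In Keras, the equivalent step is setting each layer's `trainable` attribute before compiling the model.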

5.6 MULTI-INPUT DEEP CONVOLUTIONAL NEURAL NETWORK

A neural network can be formed for feature extraction and for classification purposes;
it can be multi-layer or otherwise, according to the developers' needs. On a large
dataset, the neural network has to be trained in a manner that gives the best accuracy,
so accuracy is one of the parameters for which the deep neural network is designed.

In this study, we developed a multi-input deep convolutional neural network: the
feature extraction layers of the pre-trained VGG16 model were frozen, the last
prediction layer was trained on our new dataset, and the previous knowledge plus the
knowledge just gained by that last layer were combined into a new network, which we
have named the MULTI-INPUT DEEP CONVOLUTIONAL NEURAL NETWORK
[MI-DCNN]. The new model is then ready to use, to produce results, determine the
accuracy, and draw conclusions.

6. RESULTS AND DISCUSSION

The experimental work was carried out on a Lenovo IdeaPad with 1 TB storage and
8 GB RAM. Jupyter Notebook was used for all coding. The dental images, both colour
and panoramic, were collected from a private dental lab in Nagpur. All ethical issues
were taken into consideration while making this project. In this study, the novel
multi-input deep convolutional neural network was applied to dental images for dental
caries classification. The weights of the pre-trained VGG16 model were used, with
multiple entries of images into the model. The frozen feature extraction layers and the
last layer trained on the new dataset were then combined to form the new proposed
model, called the multi-input deep convolutional network. This study compares two
parameters: first the dataset, and second the pre-trained models. Three models were
used: 1] VGG16, 2] MobileNet, 3] ResNet50. Two types of dataset were used, viz. the
colour dataset and the panoramic [OPG] images.

6.1 VGG16 and DATASETS

VGG16 performs well on colour images: the accuracy is about 75% after 50 epochs,
while on panoramic images about 60% was recorded.

Table II: VGG16

MODEL NAME DATASET ACCURACY

VGG16 COLOR IMAGE 75%

PANORAMIC 60%

6.2 MobileNet and DATASETS

MobileNet performs well on colour images, giving an accuracy of 73%, which is
higher than on panoramic images, at 59%.

Table III: MobileNet

MODEL NAME DATASET ACCURACY

MobileNet COLOR IMAGE 73%

PANORAMIC 59%

6.3 ResNet50 and DATASETS


ResNet50 performs well on colour images, with an accuracy of 71.67%, which is
higher than on panoramic images, at 60.94%.

Table IV: ResNet50

MODEL NAME DATASET ACCURACY

ResNet50 COLOR IMAGE 71.67%

PANORAMIC 60.94%

This concludes the comparative study of the two datasets and the three pre-trained models.
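Collecting Tables II to IV into one structure makes the comparison explicit; note that while VGG16 leads on the colour images, ResNet50's 60.94% is marginally the best panoramic figure.

```python
# Reported accuracies (%) from Tables II-IV, gathered for comparison.
results = {
    "VGG16":     {"color": 75.0,  "panoramic": 60.0},
    "MobileNet": {"color": 73.0,  "panoramic": 59.0},
    "ResNet50":  {"color": 71.67, "panoramic": 60.94},
}

# Pick the best-performing model per dataset type.
for dataset in ("color", "panoramic"):
    best = max(results, key=lambda m: results[m][dataset])
    print(dataset, "->", best, results[best][dataset])
```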

7. ANALYSIS OF RESULT

Analysis is when any pretrained model is applied on the color images the accuracy is
comparatively high rather than the accuracy which is given panoramic images.
Somehow it is because of the panoramic images is the total area coverage of the
images which contain some king of noise the area which is not important to give so for
that the detection of particular area is required which is the future scope for this study
now. The dataset of color images is periapical dataset means the image has only
portion which is important to learn and trained the predictor layer. Added to it there is
concept of the layer freezing in starting of model construction which is important for
analysis means for all pre-trained network the feature extraction layers should be
carefully add to achieve the level of accuracy

Image augmentation is another important factor while training the predictor layer. The
augmentation pipeline applies scaling, shearing, and flipping of the images, but
rotation is restricted to at most 6 degrees upward and 6 degrees downward, no more.
When augmentation is applied to panoramic images the image quality degrades, whereas on
the periapical color images it has little adverse effect, since those images contain
only the relevant part of the data. Augmentation should therefore be applied to the
images with care.
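The rotation constraint described above can be enforced when sampling augmentation parameters. This is an illustrative sketch: the function and parameter names are assumptions for exposition, not the project's actual augmentation configuration.

```python
import random

def sample_augmentation(max_rotation_deg=6.0, max_shear=0.2, max_zoom=0.2):
    """Draw one random augmentation; rotation is capped at +/- max_rotation_deg
    (6 degrees here) so tooth orientation stays realistic."""
    return {
        "rotation_deg": random.uniform(-max_rotation_deg, max_rotation_deg),
        "shear": random.uniform(0.0, max_shear),
        "zoom": 1.0 + random.uniform(-max_zoom, max_zoom),
        "horizontal_flip": random.random() < 0.5,
    }

params = sample_augmentation()
print(params)
```

Keeping the rotation range this narrow avoids generating anatomically implausible tooth orientations while still diversifying the small training set.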

8. CONCLUSION AND FUTURE SCOPE

8.1 Conclusion

Dental imaging improves day by day with advances in technology, and such tools are
highly efficient in supporting the professional skill of dentists. Artificial
intelligence techniques have been applied to dental imaging for decades to develop
decision support systems for dentists. This study used panoramic and color image
datasets to detect carious teeth. The transfer learning methodology made it possible to
reuse the pre-trained knowledge of existing models to classify the images and increase
accuracy. The proposed approach is based on the VGG-16 convolutional neural network
model. A total of 183 panoramic images and 74 color images were used to train the final
predictor layer of the model. Performance evaluation showed that accuracy on the color
images is higher than on the panoramic images, for the reasons explained in the
analysis section. The future scope is to increase the accuracy on both datasets by
localizing the caries region and removing noisy data from the images. An ensemble
approach combining the knowledge of all three models, viz. VGG16, ResNet50, and
MobileNet, could also be used to increase the accuracy.
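One simple way to realize the suggested ensemble is probability averaging across the three fine-tuned models. The per-model outputs below are hypothetical stand-ins for real predictions, used only to illustrate the combination step.

```python
def ensemble_predict(per_model_probs):
    """Average per-class probabilities across models and return (argmax class, averages)."""
    n_models = len(per_model_probs)
    n_classes = len(per_model_probs[0])
    avg = [sum(probs[c] for probs in per_model_probs) / n_models
           for c in range(n_classes)]
    return max(range(n_classes), key=lambda c: avg[c]), avg

# Classes: 0 = healthy, 1 = caries; these outputs are hypothetical examples.
vgg16_probs     = [0.30, 0.70]
resnet50_probs  = [0.45, 0.55]
mobilenet_probs = [0.20, 0.80]

label, avg = ensemble_predict([vgg16_probs, resnet50_probs, mobilenet_probs])
print(label, avg)
```

Averaging tends to smooth out the individual models' errors, which is why an ensemble can exceed the accuracy of any single member.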

8.2 Future scope

Since the domain of this project is oral health and its prime aim is to develop a
decision support system for dentists, its functionality can be extended beyond
detecting dental caries (tooth decay) to: detecting gum disease (the inflammation and
infection that occur when the gums are not properly cared for); detecting oral herpes
(a common infection of the mouth caused by herpesvirus type 1, which also affects
adults); detecting oral cancer (to which unhygienic food, tobacco consumption, and
alcohol abuse can lead); detecting oral tumors (unwanted growth of cells or tissue in
the mouth); and detecting premolar teeth in order to flag wrongly erupting teeth.

9. REFERENCES

[1] A. Imak, A. Celebi, K. Siddique, M. Turkoglu, A. Sengur and I. Salam, "Dental


Caries Detection Using Score-Based Multi-Input Deep Convolutional Neural
Network," in IEEE Access, vol. 10, pp. 18320-18329, 2022, doi:
10.1109/ACCESS.2022.3150358.
[2] H. Yu, Z. Lin, Y. Liu, J. Su, B. Chen and G. Lu, "A New Technique for Diagnosis
of Dental Caries on the Children’s First Permanent Molar," in IEEE Access, vol. 8, pp.
185776-185785, 2020, doi: 10.1109/ACCESS.2020.3029454.
[3] Lian, Luya & Zhu, Tianer & Zhu, Fudong & Zhu, Haihua. (2021). Deep Learning
for Caries Detection and Classification. Diagnostics. 11. 1672.
10.3390/diagnostics11091672.
[4] G. F. Olsen, S. S. Brilliant, D. Primeaux and K. Najarian, "An image-processing
enabled dental caries detection system," 2009 ICME International Conference on
Complex Medical Engineering, 2009, pp. 1-8, doi: 10.1109/ICCME.2009.4906674.
[5] M.A. Hafeez Khan, Prasad S. Giri, J. Angel Arul Jothi, "Detection of Cavities from
Oral Images using Convolutional Neural Networks", 2022 International Conference on
Electrical, Computer and Energy Technologies (ICECET), pp.1-6, 2022.
[6] Mao, Y.-C.; Chen, T.-Y.; Chou, H.-S.; Lin, S.-Y.; Liu, S.-Y.; Chen, Y.-A.; Liu,
Y.-L.; Chen, C.-A.; Huang, Y.-C.; Chen, S.-L.; et al. Caries and Restoration Detection
Using Bitewing Film Based on Transfer Learning with CNNs. Sensors 2021, 21, 4613.
[7] Lian, L.; Zhu, T.; Zhu, F.; Zhu, H. Deep Learning for Caries Detection and
Classification. Diagnostics 2021, 11, 1672.
[8] Fung MHT, Wong MCM, Lo ECM, CH Chu (2013) Early Childhood Caries: A
Literature Review. Oral Hyg Health 1: 107.
[9] R. Obuchowicz, K. Nurzynska, B. Obuchowicz, “Caries detection enhancement using
texture feature maps of intraoral radiographs,” Oral Radiol., vol. 36, no. 3, pp.
275–287, Jul. 2020, doi: 10.1007/s1182-018-0354-8.
[10] L. Megalan Leo and T. Kalapalatha Reddy, ‘‘Learning compact and
discriminative hybrid neural network for dental caries classification,’’ Microprocessors
Microsyst., vol. 82, Apr. 2021, Art. no. 103836, doi: 10.1016/j.micpro.2021.103836
[11] H. Yang, E. Jo, H. J. Kim, I.-H. Cha, Y.-S. Jung, W. Nam, J.- Y. Kim, J.-K. Kim,
Y. H. Kim, T. G. Oh, S.-S. Han, H. Kim, and D. Kim, ‘‘Deep learning for automated
detection of cyst and tumors of the jaw in panoramic radiographs,’’ J. Clin. Med., vol.
9, no. 6, p. 1839, Jun. 2020, doi: 10.3390/jcm9061839
[12] O. Kwon, T. H. Yong, S. R. Kang, J. E. Kim, K. H. Huh, M. S. Heo, S. S. Lee, S.
C. Choi, and W. J. Yi, ‘‘Automatic diagnosis for cysts and tumors of both jaws on
panoramic radiographs using a deep convolution neural network,’’ Dentomaxillofacial
Radiol., vol. 49, no. 8, Jul. 2020, Art. no. 20200185, doi: 10.1259/dmfr.20200185
[13] Y. P. Huang and S. Y. Lee, ‘‘Deep learning for caries detection using optical
coherence tomography,’’ medRxiv, early access, doi: 10.1101/2021.05.04.21256502.
[14] J. Naam, J. Harlan, S. Madenda, and E. P. Wibowo, ‘‘Image processing of panoramic
dental X-ray for identifying proximal caries,’’ Indonesian J. Elect. Eng. Comput. Sci.
(Telkomnika), vol. 5, no. 2, pp. 702–708, Jun. 2017, doi:
10.12928/TELKOMNIKA.v15i2.4622. [Online]. Available:
https://pdfs.semanticscholar.org/7cd9/d2e1ff9afbe0f84a40dc32ef77c91eeff0be.pdf
23
[15] S. Oprea, C. Marinescu, I. Lita, M. Jurianu, D. A. Visan, and I. B. Cioc, ‘‘Image
processing techniques used for dental X-ray image analysis,’’ in Proc. 31st Int. Spring
Seminar Electron. Technol., May 2008, pp. 125–129, doi: 10.1109/ISSE.2008.5276424

[16] S. K. Khare and V. Bajaj, ‘‘Time–frequency representation and convolutional
neural network-based emotion recognition,’’ IEEE Trans. Neural Netw. Learn. Syst.,
vol. 32, no. 7, pp. 2901–2909, Jul. 2021, doi: 10.1109/TNNLS.2020.3008938.

[17] P. Singh and P. Sehgal, ‘‘Automated caries detection based on radon
transformation and DCT,’’ in Proc. 8th Int. Conf. Comput., Commun. Netw. Technol.
(ICCCNT), Jul. 2017, pp. 1–6, doi: 10.1109/ICCCNT.2017.8204030.

[18] O. E. Langland, R. P. Langlais, and J. W. Preece, Principles of Dental Imaging.
Philadelphia, PA, USA: Lippincott Williams & Wilkins, 2002; S. C. White and M. J.
Pharoah, Oral Radiology E-Book: Principles and Interpretation. Amsterdam, The
Netherlands: Elsevier, 2014.

[19] K. He, X. Zhang, S. Ren, and J. Sun, ‘‘Deep residual learning for image
recognition,’’ in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Las Vegas,
NV, USA, Jun. 2016, pp. 770–778.

[20] K. Simonyan and A. Zisserman, ‘‘Very deep convolutional networks for large-scale
image recognition,’’ 2014, arXiv:1409.1556.

Social Utility

We have developed a general-purpose classifier that can classify various objects,
depending on the dataset and the fine-tuning provided to it. With the appropriate
changes to the dataset and the classification head, the same system can be reused for
other tasks.

The developed system can classify whether dental caries is present in a patient's
teeth with good accuracy. It can be used by dentists and medical practitioners to save
time, energy, and money: the doctor need not check for caries manually, since the
proposed system classifies caries with ease, and can therefore estimate a proper
recovery time and give a prescription more accurately.

Furthermore, the developed system can be applied to many other domains, particularly
medical ones. For instance, it could help classify cancer cells and indicate whether a
patient has cancer. For that, only some layers of the system need to change: because
the convolutional base was trained on the ImageNet dataset of roughly 1000 classes, it
already encodes rich information about the features of diverse images, so little
additional training is needed for such cases. We would remove the last layers of the
system, keep the convolutional base frozen, and develop and train a new fully
connected layer on the target dataset. By doing so, the system can also be used for
cancer detection and classification.

The system can be applied to other medical domains because neural networks are trained
on large image datasets to recognize objects; however, training on such huge datasets
is a time-consuming and tedious process. Using the developed system saves valuable
time by starting from a model pre-trained on ImageNet, which contains millions of
images across many categories. Image classification with this approach is already seen
in the medical imaging field: for example, a convolutional neural network pre-trained
on ImageNet can be fine-tuned to identify kidney problems in ultrasound images, and a
model trained on MRI scans can be used effectively as the primary model for analyzing
CT scans.
