Republic of Iraq

Ministry of Higher Education and Scientific Research


University of Al-Qadisiyah
College of Dentistry

Artificial Intelligence In Dentistry

A graduation project submitted to the College of Dentistry in partial fulfillment of the requirements for the Degree of Bachelor in Dental and Oral Surgery (B.D.S.)

BY
Zahraa Salih Hadi
Abbas Hamza Yasir
Khayrat Ahmed Zaghir
Ruqayya Ahmed Abdul-Amir
Sura Nazar

Supervised By
Dr. Ghassan Majid Jasim
2023-2024
"But those firm in knowledge say: 'We believe in it; all of it is from our Lord.' And none will take heed except those of understanding."

God Almighty has spoken the truth.

[Surah Aal-Imran, verse 7]
Supervisor Certificate

I certify that this project, entitled:

Artificial Intelligence In Dentistry

was prepared by (Zahraa Salih Hadi, Abbas Hamza Yasir, Khayrat Ahmed Zaghir, Ruqayya Ahmed Abdul-Amir, Sura Nazar) under my supervision at the University of Al-Qadisiyah, College of Dentistry, in partial fulfillment of the requirements for a Bachelor's degree in Oral and Dental Surgery.

Signature :
Name: Ghassan Majid Jasim (Supervisor)
Date: /04/2024
DEDICATION

To those for whom my heart beats with longing and gratitude,

To the two bright stars in the sky of my life:

To my mother, the spirit of compassion and warmth, who shone like the morning sun in my life, whose tenderness and sacrifice made every moment a unique experience.

And to my father, the rock of steadfastness and generosity, who inspired me with the courage and determination to continue my path despite the harshness of life.

You have all my respect and appreciation, for you are the compass by which I direct my steps towards success and excellence.
ACKNOWLEDGEMENT

First of all, praise be to God Almighty, and we thank Him for His showers of blessings throughout our research work, which enabled us to complete this research successfully.

We would like to express our deep and sincere gratitude to Dr. Ghassan Majid Al-Jubouri, University of Al-Qadisiyah, College of Dentistry, for giving us the opportunity to conduct this research and for providing invaluable guidance throughout. We have been inspired by his dynamism, vision, sincerity and motivation. He taught us the methodology of conducting research and of presenting research work as clearly as possible. It was a great honor and privilege to work and study under his supervision, and we are very grateful for what he gave us. We would also like to thank him for his friendship, empathy and sense of humor. We are deeply grateful to our parents for their love, prayers, care, and the sacrifices they made to educate us and prepare us for our future. Special thanks go to our friends for the keen interest they have shown in the successful completion of this research.
List of Contents

Holy Quran
Supervisor Certificate
Dedication
Acknowledgement
List of Contents
List of Figures
List of Tables
List of Abbreviations
Abstract

CHAPTER ONE
Introduction
1. Introduction

CHAPTER TWO
Literature Review
2. Literature Review
2.1 Classification of AI
2.1.1 ANN
2.1.2 CNN
2.2 AI application in health care
1. Image Analysis
2. Data Synthesis and Prediction
3. Patient Interaction
2.3 The function of artificial intelligence in certain dental specializations
2.3.1 AI in Orthodontics
1. Cephalometric Analysis
2. Application of AI for determining need for orthodontic extractions
2.3.2 AI in Operative Dentistry
Caries Detection
2.3.3 AI in Endodontics
1. Discovering Root Fracture
2. Root Morphology
3. Verification of Working Length and Tracing the Apical Foramen
4. Retreatment Predictions
5. Pulpal Diagnosis
6. Automated Endodontic/Peri-apical Lesion Detection
7. Microrobots
8. Other Uses of AI in Endodontics
2.3.4 AI in Oral Surgery
1. Cancers of the Oral Cavity
2. In Implant Dentistry
3. In Robotic Surgery
2.3.5 AI in Prosthodontics
1. Application of AI in Prosthodontics
2. Accuracy of Intra-oral Scanners Versus Traditional Impressions

CHAPTER THREE
Discussion
Discussion

CHAPTER FOUR
Conclusion and Recommendation
Conclusion and Recommendation
References
LIST OF FIGURES

Figure (1-2), Schematic diagram of the relationship between AI, strong AI, weak AI, expert-based systems, machine learning, deep learning and neural network (NN).
Figure (2-2), Schematic diagram of deep learning.
Figure (3-2), AI and data-driven applications are situated all along the patient's dental journey.
Figure (4-2), An example of an analyzed OPG and the outcomes given by the software.
Figure (5-2), (A) The classification (marked yes or no) of the lesion on the periapical radiograph. (B) The segmentation (orange staining) of the lesion. (C) The detection (blue box) of the lesion.
Figure (6-2), The areas of orthodontics in which artificial intelligence has been used.
Figure (7-2), Cephalometric tracing done using Dolphin Imaging technology.
Figure (8-2), AI-enhanced image showing models applied for caries identification and quantification (yellow) and enamel (red).
Figure (9-2), Example of how AI technology can illuminate a black, white and grey radiograph with quantified, precision outputs for doctor communication and patient education.
Figure (10-2), AI-enabled caries detections and tooth part segmentations, provided by a real-time chair-side AI radiology interface.
Figure (11-2), Screenshot of a periapical radiograph that includes an endodontically treated tooth and has been analyzed by AI algorithms.
LIST OF TABLES

Table (1-2), A comparison of supervised learning, semi-supervised learning, and unsupervised learning.
Table (2-2), Example of AI applications in Operative Dentistry.

LIST OF ABBREVIATIONS

AI Artificial Intelligence

EBD Evidence-based dentistry

CNN Convolutional Neural Networks

ANN Artificial Neural Networks

LSTM Long Short-Term Memory

DL Deep Learning

DNN Deep Neural Networks

ML Machine Learning

NLP Natural Language Processing

NN Neural Network

CBR case-based reasoning

PRs Pattern Recognition

Abstract
Artificial Intelligence (AI) has emerged as a transformative force in dentistry, revolutionizing various aspects of patient care, diagnostics, and treatment planning. This integration of AI technologies in dentistry offers efficient solutions, ranging from image analysis for radiographic interpretation to personalized treatment recommendations.

The utilization of machine learning algorithms, such as convolutional neural networks (CNNs) and deep learning models, enhances the accuracy of diagnostics and enables predictive analytics for oral health outcomes. In addition to improving clinical workflows, AI applications in dentistry contribute to enhanced patient experiences and more personalized healthcare interventions.
CHAPTER ONE
INTRODUCTION
1. Introduction
Artificial Intelligence, often referred to as machine intelligence, is the
ability of machines to exhibit intelligence, as opposed to the natural intelli-
gence exhibited by humans and other animals.[1] AI, or artificial intelligence,
can be defined as the study of computing models that are capable of logical
thinking and behavior. AI research is commonly characterized as the investi-
gation of intelligent agents, which are devices capable of perceiving their sur-
roundings and taking actions that optimize their likelihood of successfully ac-
complishing their objectives.[2]

The origins of AI can be traced back to around 400 BC, when Plato conceptualized a rudimentary model of brain activity. Since then, the field has seen numerous advancements as technology was introduced to develop models that can replicate the functioning of the human brain [3]. This continuous pursuit of knowledge has led to the emergence of Artificial Intelligence (AI), a sophisticated system that can replicate the cognitive abilities of the human brain. The unparalleled complexity of the human brain has perpetually intrigued academics and scientists throughout history [4].

Artificial Intelligence is the capacity of a system to extract information from external data, acquire knowledge from it, and use that knowledge to achieve aims and goals through flexible adaptation. Machine learning is a branch of artificial intelligence that enables computer systems to carry out certain tasks by relying on patterns and mathematical models, without the need for explicit instructions, in order to approach human cognition [5].

In his work, Turing suggested establishing a test to determine whether a machine can attain intelligence at the same level as a person. The name given to this test is the Turing Test [6]. It proceeds as follows: a human evaluator judges natural-language communications between a human test taker and a machine, knowing that the communication is taking place between a human and a machine; the evaluator, the human test taker, and the machine are physically isolated from one another [7]. The interaction between the human test taker and the machine is restricted to written text, specifically keyboard input, rather than spoken language. The test assesses only the machine's logical question-answering capabilities, excluding any evaluation of its speech interpretation abilities [8]. If the human evaluator is unable to differentiate between the human test taker and the machine, the machine is said to have passed the Turing Test and to possess "machine intelligence" [9].

In 1955, the term AI was first used in connection with a two-month workshop called the Dartmouth Summer Research Project on Artificial Intelligence [10]. This workshop was organized by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon. Nevertheless, the idea remained purely theoretical. Researchers in the 1950s were impeded by specific limitations that prevented them from creating genuine AI machines. Prior to 1949, computers lacked a crucial requirement for AI work: without a storage function they could not store programs, limiting their execution to immediate tasks. Furthermore, computers were expensive during that period, and funding sources exhibited cautious and traditional perspectives towards this emerging field [11].

Between 1957 and 1974, the field of AI experienced rapid expansion due to the increasing availability and computational capabilities of computers, as well as advancements in AI algorithms. One example is ELIZA, a computer program capable of interpreting natural language and carrying on problem-solving conversations through text [12].

Two periods of decline in the field of artificial intelligence, known as


"AI Winters," occurred following the initial phase of progress. These winters
were caused by a lack of practical applications and a decrease in research
funding in the mid-1970s and late 1980s [13].

Nevertheless, AI continued to develop during the intervening period, even if progress was modest. During the 1980s, two distinct approaches emerged in the field of artificial intelligence: machine learning (ML) and expert systems, two contrasting approaches based on different theoretical foundations [14]. Machine learning enables computers to acquire knowledge through experience, whereas expert systems replicate the decision-making abilities of human specialists. To clarify, machine learning autonomously acquires and distills knowledge, whereas expert systems rely on human specialists to predefine all potential scenarios and corresponding solutions. Since then, expert systems have predominantly been utilized in several industries [15]. One example is the R1 (Xcon) program, an expert system consisting of approximately 2,500 rules. This program was created and utilized by DEC, a computer manufacturer, to aid in the selection of components for computer assembly [16].

Significant milestones in the field of computer vision occurred in 2012 and 2017. In 2012, a deep learning network with eight layers, implemented using a graphics processing unit (GPU), was created. This network won the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) with a top-5 classification error rate of 15.3%, more than 10.8 percentage points lower than that of the second-place finisher [17]. In 2017, SE-NET achieved a top-5 error rate of 2.25%, lower than the human top-5 error rate of 5.1% [18].

Notable examples of AI include Deep Blue, a chess-playing expert system that defeated chess champion Garry Kasparov in 1997. In 2017, Google's AlphaGo, a deep learning program, defeated the world's top-ranked Go player Ke Jie [19]. More recently, in late 2022, OpenAI introduced ChatGPT (Chat Generative Pre-trained Transformer), a text-generation model that can produce human-like responses based on input text, and the model has generated significant discussion since its launch. These instances employed several AI methodologies to function [20].

The utilization of Artificial Intelligence is extensive, and there is a


growing focus on optimizing the advantages through increased study.[21]

Over the past two decades, there has been a growing interest in the appli-
cation of digital data processing technology in the domains of medicine and
dentistry.[22]

The application of digital technology, particularly artificial intelligence


(AI) technology, can effectively decrease the expenses and time required for
treatment, minimize reliance on human expertise, and limit the occurrence of
medical errors. This approach also holds immense potential for revolutioniz-
ing public health settings in impoverished countries. [23]

Artificial intelligence technology has been applied extensively in the


dentistry sector, ranging from differential diagnosis and radiographic interpretation to restorative treatment [24].

Dental management software is available on the market that uses artificial intelligence (AI) to collect and store patient data. At present, artificial intelligence can be employed to create comprehensive virtual databases that are readily accessible [25].

CAD software is commonly programmed to save the stereolithographic


file of the particular abutment, which is designed by the technician, in a desig-
nated folder, and then retrieve it automatically when necessary.[26]

Interactive and speech recognition interfaces facilitate the efficient com-


pletion of intricate tasks for dental clinicians. Software utilizing AI technol-
ogy can expedite the documentation of essential data and transmit it to the
physician with greater speed and efficiency compared to human counterparts
(Kannan, 2017). AI possesses a distinctive capacity for learning, which enables it to be trained for various tasks [27].

It can be integrated into dental imaging systems to detect even the most minute aberrations that are imperceptible to the human eye. Due to this exceptional capability, it can readily be utilized for precise identification of cephalometric landmarks [28].

Many studies on AI applications in dentistry are ongoing, and some have even been put into practice in aspects such as diagnosis, decision-making, treatment planning, prediction of treatment outcome, and disease prognosis. Many reviews regarding dental AI have been published; this review aims to narrate the development of AI from its incipient stages to the present, describe the classifications of AI, summarise the current advances of AI research in dentistry, and discuss the relationship between Evidence-based dentistry (EBD) and AI. The limitations of current AI development in dentistry are also discussed.

CHAPTER TWO
LITERATURE REVIEW

2. Literature Review

2.1 Definitions and Classification of AI

Algorithm: a finite set of rules followed by a computer system to complete some task. It is a type of program, specifically one that ends at some point rather than operating indefinitely.
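As a concrete illustration of this definition, consider Euclid's method for the greatest common divisor, a classic algorithm: a finite set of rules guaranteed to terminate. (This sketch is added purely for illustration; Python is used here only as an example language.)

```python
# A minimal example of an algorithm in the sense defined above: a finite
# set of rules that always terminates (Euclid's GCD method).

def gcd(a, b):
    while b:                 # each step strictly shrinks b, so the loop ends
        a, b = b, a % b
    return a

print(gcd(48, 18))           # 6
```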

Artificial intelligence (AI): human- or animal-like intelligence exhibited by machines: for example, the ability to perceive, understand, and reason about the world in the same way as people do.

An AI detector: a tool designed to detect when a piece of text (or sometimes an image or video) was created by generative AI tools (e.g., ChatGPT, DALL-E). These tools aren't 100% reliable, but they can give an indication of the likelihood that a text is AI-generated.

Big data: refers to datasets so large or complex that they can't be handled by normal data-processing software. This is key to the power of technologies like LLMs, whose training data consists of enormous quantities of text.

Deep learning: a type of machine learning approach. Deep learning models are based on neural networks. They consist of a series of layers (which is why they're called "deep") that gradually transform raw data (e.g., pixels) into a more abstract form (e.g., an image of a face), allowing the model to understand it on a higher level.

Machine learning (ML): a field of research focused on how machines can "learn" (i.e., become capable of more and more advanced tasks). It involves the use of algorithms interacting with training data to make autonomous decisions and predictions.
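The idea of an algorithm interacting with training data can be sketched in a few lines. The toy data and learning rate below are illustrative assumptions, not clinical data: the program is never told the rule y = 2x + 1, yet it recovers it from labelled examples by gradient descent.

```python
# A minimal sketch of "learning from data": fitting a line y = w*x + b
# by gradient descent, with no explicit rule programmed in.

def fit_line(xs, ys, lr=0.01, steps=5000):
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # Gradients of the mean squared error with respect to w and b.
        dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * dw
        b -= lr * db
    return w, b

xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]             # generated by y = 2x + 1
w, b = fit_line(xs, ys)
print(round(w, 2), round(b, 2))  # close to 2.0 and 1.0
```

The same loop of "predict, measure the error, adjust" underlies the far larger models discussed later in this review.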

Classification of AI:
There are many approaches whereby AI can be achieved; different types
of AI can achieve different tasks, and researchers have created different AI
classification methods.
AI is a generic term for all non-human intelligence. As Figure (1-2 )
shows, AI can be further classified as weak AI and strong AI. Weak AI, also
called narrow AI, uses a program trained to solve single or specific tasks. The
AI of today is mostly weak AI. Examples include reinforcement learning, e.g.,
AlphaGo, and automated manipulation robots; natural language processing,
e.g., Google translation, and Amazon chat robot; computer vision, e.g., Tesla
Autopilot, and face recognition; data mining, e.g., market customer analysis
and personalised content recommendation on social media (29).

Figure (1-2), Schematic diagram of the relationship between AI, strong AI,
weak AI, expert-based systems, machine learning, deep learning and neural
network (NN). [30]

Strong AI, also known as Artificial General Intelligence (AGI), encom-
passes AI systems that possess the same level of intelligence and capabilities
as humans. These systems have their own consciousness, awareness, and be-
havior, which are as adaptable and flexible as those of humans (30). The ob-
jective of Strong AI is to develop a versatile algorithm capable of making
judgments across several domains. Research on strong artificial intelligence
must exercise extreme caution due to the ethical concerns and the inherent risks it poses. Therefore, there have been no significant applications of strong artificial intelligence thus far.

ML and expert systems are distinct subsets of weak AI. Table (1-2)
demonstrates that machine learning (ML) can be categorized into supervised,
semi-supervised, and unsupervised learning approaches, according to the un-
derlying theory. Supervised learning involves the use of labeled datasets to
train algorithms, with these labeled datasets serving as the "supervisor" for the
algorithm. The algorithm acquires knowledge from the labeled input and ana-
lyzes and recognizes the shared characteristics of the labeled input in order to
make predictions about unlabeled input. Supervised learning encompasses
several algorithms such as k-nearest neighbors, logistic regression, random
forest, and support-vector machine [31]. In contrast, unsupervised learning autonomously identifies the different characteristics of unlabeled data. Semi-supervised learning is a method that falls between supervised and unsupervised learning: it involves using a small set of labeled data together with a larger set of unlabeled data for training purposes (32).
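The contrast between the two settings can be sketched on a toy dataset (the feature values and labels below are hypothetical placeholders, not clinical data): the supervised learner consults the labels, while the unsupervised learner groups the very same points without ever seeing them.

```python
# Supervised: 1-nearest-neighbour classification with labelled examples.
labelled = [([1.0, 1.0], "caries"), ([1.2, 0.9], "caries"),
            ([5.0, 5.1], "sound"), ([4.8, 5.2], "sound")]

def nearest_label(point):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(labelled, key=lambda ex: dist(ex[0], point))[1]

print(nearest_label([1.1, 1.0]))            # "caries"

# Unsupervised: 2-means clustering on the same points, labels ignored.
points = [ex[0] for ex in labelled]
centres = [points[0], points[2]]            # initial guesses
for _ in range(10):
    groups = [[], []]
    for p in points:
        d = [sum((x - c) ** 2 for x, c in zip(p, ctr)) for ctr in centres]
        groups[d.index(min(d))].append(p)
    centres = [[sum(col) / len(g) for col in zip(*g)] for g in groups]
print(centres)                              # one centre near each group
```

The clustering step recovers the two groups without labels, which is why unsupervised accuracy is typically lower: the groups it finds need not correspond to the categories a clinician cares about.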

Weakly supervised learning, a new method, has gained popularity in the


field of AI as a way to reduce the costs associated with labeling. Specifically,
the task of object segmentation relies solely on image-level labels, rather than
utilizing information about object boundaries or locations during training
(33).
Items | Supervised learning | Semi-supervised learning | Unsupervised learning
Input type | Labelled data | A mixture of labelled and unlabelled data | Unlabelled data
Accuracy | High | Mid | Low
Complexity of the algorithm | Low | Mid | High
Types of algorithm | Regression and classification | Regression, classification, clustering, and association | Clustering and association

Table (1-2), A comparison of supervised learning, semi-supervised learning, and unsupervised learning [30].

2.1.1 ANN

ANN comprises a group of neurons and layers, as illustrated in Figure (2-2). As mentioned above, this model is a basic model for deep learning, consisting of a minimum of three layers. The inputs are processed only in the forward direction. Input neurons extract features of the input data from the input layer and send them to the hidden layers, and the data passes through all the hidden layers successively. Finally, the results are summarised and shown in the output layer. All the hidden layers in an ANN can weigh the data received from previous layers and make adjustments before sending the data to the next layer. Each hidden layer acts as both an input and an output layer, allowing the ANN to understand more complex features (34).

Figure (2-2), Schematic diagram of deep learning [30].
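The forward pass just described can be sketched as follows. The layer sizes, random weights and ReLU activation are illustrative assumptions; in a trained network the weights would be learned from data rather than drawn at random.

```python
import numpy as np

# Sketch of a feed-forward pass: input layer -> two hidden layers -> output.
rng = np.random.default_rng(0)

def layer(x, w, b):
    # Each layer weighs its inputs, adds a bias, and applies a
    # non-linearity before passing the result forward.
    return np.maximum(0.0, w @ x + b)           # ReLU activation

x = rng.normal(size=4)                          # 4 input features
w1, b1 = rng.normal(size=(8, 4)), np.zeros(8)   # hidden layer 1
w2, b2 = rng.normal(size=(8, 8)), np.zeros(8)   # hidden layer 2
w3, b3 = rng.normal(size=(2, 8)), np.zeros(2)   # output layer

h1 = layer(x, w1, b1)       # hidden layer 1 output feeds layer 2
h2 = layer(h1, w2, b2)      # each hidden layer is both "output" and "input"
out = w3 @ h2 + b3          # raw scores in the output layer
print(out.shape)            # (2,)
```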

2.1.2 CNN
CNN is a type of deep learning model mainly used for image recognition and generation. The main difference between an ANN and a CNN is that a CNN contains convolution layers, in addition to the pooling layers and the fully connected layer among its hidden layers. Convolution layers generate feature maps of the input data using convolution kernels, each of which is slid across the entire input image. Because the kernel weights are shared across the image, convolution reduces the complexity of the model. A pooling layer usually follows each group of convolution layers and reduces the dimension of the feature maps for further feature extraction. The fully connected layer comes after the convolution and pooling layers. As the name indicates, it connects to all activated neurons in the previous layer and transforms the 2D feature maps into a 1D feature vector.
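The two core CNN operations described above can be sketched directly. The 6x6 "image" and the edge-detecting kernel are illustrative placeholders, not real radiograph data; real CNN layers also learn many kernels rather than using one fixed kernel.

```python
import numpy as np

def convolve(img, kernel):
    kh, kw = kernel.shape
    h, w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            # The same (shared) kernel weights slide over the whole image.
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    h, w = fmap.shape[0] // size, fmap.shape[1] // size
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = fmap[i*size:(i+1)*size, j*size:(j+1)*size].max()
    return out

img = np.arange(36, dtype=float).reshape(6, 6)        # toy 6x6 "image"
edge_kernel = np.array([[1.0, -1.0], [1.0, -1.0]])    # vertical-edge detector
fmap = convolve(img, edge_kernel)   # convolution layer: 5x5 feature map
pooled = max_pool(fmap)             # pooling layer: reduced to 2x2
flat = pooled.ravel()               # fully connected input: 2D -> 1D vector
print(fmap.shape, pooled.shape, flat.shape)
```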

2.2 AI application in health care


Helping clinicians provide better care on a consistent basis is one of the key goals of integrating AI into clinical dentistry. Data management and AI applications are applied at all stages of the patient journey (Figure 3-2) and in all dental specialty areas. Four areas are highlighted: image analysis, data synthesis and prediction, evidence-supported treatment planning and conduct, and patient interaction, particularly during supportive care.

Figure (3-2), AI and data-driven applications are situated all along the patient's dental journey.

1. Image Analysis

AI is currently widely used in clinical dentistry for image interpretation in dental radiology. AI can detect common pathologies such as caries, apical lesions, periodontal bone loss, cysts, and bony fractures on both 2D and 3D images. AI can aid dentists in landmark recognition on cephalometric radiographs, as demonstrated by Schwendicke et al. in their studies from 2021 and 2019 [29]. Presently, AI-assisted radiograph interpretation is equally or, in certain instances, even more precise than interpretation by professionals. Furthermore, it reduces the time required for the evaluation process and simplifies the production of thorough and organized reports.
Image analysis has been extensively explored for non-radiographic data,
including 3D point cloud data from intraoral scans, photographs, or near in-
frared transillumination images . This analysis is used to assist with smile de-
sign, orthodontic planning, detection of preparation margins, restoration de-
sign, and identification of pathologies such as caries or oral mucosa lesions
Figure (4-2) .

Figure (4-2), An example of an analyzed OPG and the outcomes given by the software [30].
2. Data synthesis and prediction
The field of clinical dentistry produces substantial volumes of data on a
daily basis. In addition to pictures, a wide range of clinical data, historical
data, claims, treatment data, and other diagnostic test results are also accessi-
ble. Due to the frequent and regular visits of numerous patients to dentists,
there exists a substantial amount of longitudinal data on a wide segment of the
population, particularly in many affluent nations. Presently, this data is frequently gathered in separate repositories, remaining concealed from collective utilization and proving challenging to evaluate collectively when assessing a patient's symptoms or predicting future oral and dental diseases.
Utilizing this data and harnessing it to gain a deeper comprehension of
the patient, including their risk profile and requirements, represents another
potential benefit of AI. Contemporary methodologies for accessing and con-
solidating data, such as through dashboards, strive to present practitioners
with the available data in a more comprehensive and practical manner. En-
abling a comprehensive perspective of each patient, taking into account their
unique medical history and potential dangers, is anticipated to enhance both
the quality and efficiency of healthcare, hence diminishing the necessity for
repetitive and expensive evaluations.
Utilizing data collected through multiple visits and over a span of years,
such as images, clinical evaluations, historical records, medication informa-
tion, and information on systemic conditions, can help to address the current
approach of treating all patients in the same way. This will facilitate a shift to-
wards a more individualized, accurate, and proactive approach to healthcare.
In addition, there will be a growing utilization of data provided by patients,
such as information on their eating habits or dental brushing behavior, which
will be collected through apps and mobile devices. Patients will also play a
more active role in the care process, not only as data contributors but also
through the use of applications and constant support in the "virtual" practice.
Therefore, AI will also facilitate a more inclusive dental care experience. The
outcome is referred to as P4 dentistry, which encompasses a more individual-
ized, accurate, proactive, and collaborative approach to dental care (Flores et
al., 2013)[30].
Another area of untapped data aggregation is the utilization of implanted
and wearable sensors. The advancing biomedical sensor technology allows for
the utilization of small and nano-sized sensors, which can be employed to
monitor saliva and indirectly assess mouth health or detect diseases.
Furthermore, saliva diagnostics and the associated data can also be utilized to evaluate various human ailments, due to the strong correlation between saliva and general well-being (Cui et al., 2022) [31].

Figure (5-2) , (A) The classification (mark as yes or no) of the lesion on
the periapical radiograph is schematized. (B) The segmentation (orange stain-
ing) of the lesion on the periapical radiograph is schematized. (C) The detec-
tion (blue box) of the lesion on the periapical radiograph is schematized [32].
3. Interacting with patients
Since the integration of data is a primary objective of data-driven health-
care and artificial intelligence (AI) tools for health, it is probable that there
will be an increased opportunity for ongoing engagement with patients. Devi-
ating from the commonly used "one-stop" method in dental care could enable
the direct treatment of oral diseases at their source, which is inside individu-
als' daily lives.

Several potential examples spring to mind:

A. Depending on the available standards, a sagittal evaluation of hard and soft tissues in the head and face is performed.

B. Changes identified during the reinforcement and treatment procedures.

C. Development and growth as a factor in determining changes.

In the future, it is anticipated that patients will possess a higher level of knowledge and awareness upon their arrival at the medical facility.

In contrast, dentists possess a greater understanding of their patients'


daily habits and the associated hazards. Furthermore, numerous dental services, particularly those related to guidance and supportive treatment, will be moved to the virtual setting, thus enhancing the effectiveness of care and promoting its long-term viability.
The primary contributor to the dental carbon footprint is the transportation of patients to and from the dental office (Duane et al., 2014) [32].

2.3 The function of artificial intelligence in certain dental specializations
2.3.1 Artificial Intelligence in the field of Orthodontics
There has been a substantial rise in the application of artificial intelligence technologies in orthodontics in recent years. Artificial intelligence can be applied to nearly every aspect of the orthodontic workflow. Artificial intelligence (AI) assists in the diagnosis and planning of orthodontic treatments [33]. The data is input into the system, where advanced algorithms and AI software are utilized to forecast tooth movements and anticipate the eventual results of the treatment [16]. AI implementation decreases expenses, expedites the process of diagnosing and treating patients, and potentially removes the necessity for human labor (Figure 6-2). Artificial intelligence [34] offers "a way to get sharper prediction from data" [35],[36] by simultaneously analysing all the different variables present in a malocclusion. This capacity offers the potential to assist the practitioner in obtaining the most favourable outcome when treating a malocclusion [37].

Figure (6-2), The areas of orthodontics in which artificial intelligence was used.

1. Cephalometric Analysis

Cephalometric analysis concerns craniofacial proportions and measurements. The cephalometric radiographic assessment of sagittal and vertical skeletal features, originally introduced by Broadbent, continues to be utilized in orthodontic treatment planning. The analysis of cephalometric radiographs involves the identification of radiographic landmarks, followed by the measurement of different lengths, angles, and ratios [38]. Cephalometric analysis is mostly utilized for three purposes [39].

A. A sagittal assessment of the hard and soft tissues of the head and face is conducted based on the existing guidelines.

B. Modifications are identified throughout the reinforcement and treatment procedures.

C. Development and growth are important factors influencing changes.

Cephalometric investigations can be conducted with either manual tracing of landmarks or AI methods (Figure (7-2)). Manual tracing, a long-standing and extensively employed technique, is both time-consuming and prone to errors. Depending on the orthodontist's expertise, the quality of the cephalogram, and the number of parameters evaluated, manual tracing can take 15-20 minutes to complete [40]. After landmarks have been identified, an automated cephalometric analysis transmits them to a digitizer connected to a computer; the cephalometric analysis is then carried out by software that measures distances and angles based on the traced landmarks. Cephalometric analyses aided by artificial intelligence reduce analysis time and improve diagnostic accuracy by minimizing subjective errors [41, 42]. The lack of consistency in defining anatomical landmarks is a significant cause of random error [43].
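The measurement step described above is, at its core, simple geometry on landmark coordinates. The following is a minimal sketch, assuming hypothetical (x, y) landmark positions (the Sella, Nasion, and A-point coordinates below are invented for illustration, not taken from any cited study): once landmarks are located, an angle such as SNA is computed directly from them.

```python
import math

# Sketch of the measurement step in automated cephalometric analysis:
# once landmarks have been located (manually or by an AI model), angles
# such as SNA are just geometry on the landmark coordinates.
# The (x, y) positions below are invented for illustration.

def angle_at(vertex, p1, p2):
    """Angle (in degrees) at `vertex` formed by the rays toward p1 and p2."""
    v1 = (p1[0] - vertex[0], p1[1] - vertex[1])
    v2 = (p2[0] - vertex[0], p2[1] - vertex[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(dot / norm))

sella = (60.0, 120.0)    # S  (hypothetical coordinates)
nasion = (130.0, 125.0)  # N
a_point = (125.0, 60.0)  # A
sna = angle_at(nasion, sella, a_point)  # SNA: the angle S-N-A
print(f"SNA = {sna:.1f} degrees")
```

Distances and ratios follow the same pattern, which is why AI effort in this area concentrates on the landmark-detection step rather than the measurements themselves.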

Figure (7-2): Cephalometric tracing done using Dolphin Imaging tech-
nology [32].

2. Application of AI for Determining the Need for Orthodontic Extractions

The planning phase is the most significant and critical part of orthodontic treatment. Extractions should be carefully planned due to their irreversible nature. Clinicians reach the decision to extract after combining patient data derived from clinical evaluations, diagnostic photographs, dental models, and radiographs with their clinical expertise. Although practitioners with less experience can learn from the decisions of their more experienced colleagues, the lack of a standard assessment method for the decision-making process calls for a different approach. Neural networks have been used to mimic the human decision-making process for orthodontic extractions. Sagittal, vertical, and molar relationships, tooth inclinations, overjet, overbite, protrusion index, soft-tissue characteristics, and patient complaints were given as input. The artificial intelligence system can then guide the clinician in the extraction decision based on analysis of these inputs. Studies showed that artificial intelligence systems can assist clinicians by preventing errors in the decision step and can provide 80 to 90% accuracy when making an orthodontic extraction decision (Jung and Kim, 2016; Xie et al., 2010) [44].
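To make the idea concrete, the forward pass of such a network can be sketched in a few lines. This is a hypothetical illustration, not the model from the cited studies: the feature names mirror the inputs listed above, but the weights are invented placeholders rather than trained values.

```python
import math

# Hypothetical sketch of a small neural network mapping orthodontic findings
# to an extraction probability. Weights are illustrative, NOT trained values.

FEATURES = ["sagittal_rel", "vertical_rel", "molar_rel", "incisor_inclination",
            "overjet_mm", "overbite_mm", "protrusion_index", "soft_tissue_score"]

def forward(x, w_hidden, b_hidden, w_out, b_out):
    """One hidden layer with tanh units and a sigmoid output unit."""
    hidden = [math.tanh(sum(wi * xi for wi, xi in zip(row, x)) + b)
              for row, b in zip(w_hidden, b_hidden)]
    z = sum(wo * h for wo, h in zip(w_out, hidden)) + b_out
    return 1.0 / (1.0 + math.exp(-z))  # probability that extraction is indicated

# Illustrative weights for 2 hidden units over the 8 features above
w_hidden = [[0.3] * 8, [-0.2] * 8]
b_hidden = [0.1, -0.1]
w_out, b_out = [1.2, -0.8], -0.3

patient = [1.0, 0.0, 0.5, 0.7, 6.0, 3.0, 0.8, 0.4]  # hypothetical encoded findings
p = forward(patient, w_hidden, b_hidden, w_out, b_out)
print(f"extraction probability: {p:.2f}")
```

A real system would learn the weights from hundreds of documented extraction decisions; the reported 80-90% accuracy figures come from exactly that kind of supervised training.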

2.3.2 AI in Operative Dentistry

Traditionally, dentists diagnose caries by visual and tactile examination or by radiographic examination according to detailed criteria. However, detecting early-stage lesions is challenging when deep fissures, tight interproximal contacts, and secondary lesions are present. As a result, many lesions are detected only in the advanced stages of dental caries, leading to more complicated treatment, i.e., a dental crown, root canal therapy, or even an implant. Although dental radiography (panoramic, periapical, or bitewing views) and the explorer (dental probe) have been widely used and regarded as highly reliable diagnostic tools for detecting dental caries, much of the screening and final diagnosis tends to rely on the dentist's experience.

In operative dentistry, there has been research on the detection of dental caries, vertical root fractures, and apical lesions, pulp-space volumetric assessment, and evaluation of tooth wear [45-50] (Table (2-2)). In a two-dimensional (2D) radiograph, each pixel of the grayscale image has an intensity, i.e., brightness, which represents the density of the object. By learning from these characteristics, an AI algorithm can learn the pattern and give predictions to segment the tooth, detect caries, and so on. For example, Lee et al. [51] developed a CNN algorithm to detect dental caries on periapical radiographs. Kühnisch et al. [52] proposed a CNN algorithm to detect caries on intraoral images. Schwendicke et al. [53] compared the cost-effectiveness of AI-based caries detection.

For each study, the table lists the type of data, the algorithm, the dataset size (training/testing), and the reported performance measures:

- Vertical root fracture detection (panoramic radiography; CNN; dataset 240/60): sensitivity 0.75; precision 0.93; F1 score 0.83.
- Apical lesion detection (CBCT images; CNN; dataset 16/4): sensitivity 0.93; specificity 0.88; PPV 0.87; NPV 0.93.
- Tooth wear evaluation (patient information, oral conditions, and intraoral optical images; SVM and KNN; 245 cases in total): accuracy SVM 0.69, KNN 0.48.
- Dental caries detection (periapical radiography; CNN; dataset 2400/600): accuracy 0.82-0.89; sensitivity 0.81-0.92; specificity 0.83-0.94; AUC 0.845-0.917.
- Dental caries detection (intraoral optical images; CNN; dataset 1891/479): accuracy 92.5-93.3%; sensitivity, specificity, and AUC values reported in the range 0.815-0.964.

Table (2-2): Examples of AI applications in Operative Dentistry [30].

Several of the studies mentioned above showed that AI has promising results in early lesion detection, with accuracy equal to or even better than that of dentists. This achievement requires interdisciplinary cooperation between computer scientists and clinicians: the clinicians manually label the radiographic images with the location of caries, while the computer scientists prepare the dataset and the ML algorithm.
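The intensity-based reasoning described above can be sketched in miniature. The following is a hypothetical, greatly simplified illustration (not any of the cited CNN systems): it scans a synthetic grayscale patch with a sliding window and flags windows whose mean intensity falls below a fixed threshold, mimicking how a radiolucent (dark) region marks a candidate carious area. Real systems learn convolutional filters from thousands of labelled images rather than using a fixed threshold.

```python
# Simplified sketch: flag radiolucent (dark) windows in a grayscale image.
# The 6x6 "radiograph" below is synthetic; values are 0 (black) to 255 (white).

def dark_windows(image, size=2, threshold=80):
    """Return (row, col) of every size x size window whose mean intensity
    is below the threshold, i.e., every markedly radiolucent region."""
    hits = []
    rows, cols = len(image), len(image[0])
    for r in range(rows - size + 1):
        for c in range(cols - size + 1):
            window = [image[r + i][c + j] for i in range(size) for j in range(size)]
            if sum(window) / len(window) < threshold:
                hits.append((r, c))
    return hits

radiograph = [
    [200, 200, 210, 205, 200, 198],
    [201, 190, 180, 200, 202, 199],
    [200,  60,  55, 190, 201, 200],
    [199,  58,  52, 188, 200, 202],
    [200, 195, 185, 200, 199, 201],
    [202, 200, 198, 199, 200, 200],
]
print(dark_windows(radiograph))  # flags the dark patch around rows 2-3, cols 1-2
```

The clinician-labelled datasets mentioned above play the role that the fixed threshold plays here: they teach the network which intensity patterns actually correspond to caries.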

Caries Detection

Smart microchips enable the assessment of patients' food intake, dietary activity, and oral pH levels, whether in juvenile, adult, or geriatric cases. Consequently, they aid in evaluating the activity of dental decay and recognizing early or incipient stages of tooth decay, and then allow us to manage cases by modifying dietary habits or preventing further tooth decay [54],[55],[56].

The utilization of miniature intelligent microchips in the detection of tooth decay entails the integration of small electronic chips into dental diagnostic instruments, thereby augmenting the capacity to identify and track tooth decay and to pinpoint areas of tooth deterioration.

A. Early detection: This can be achieved by integrating smart chips into devices such as intraoral cameras or diagnostic tools utilized by dentists. These chips can employ sophisticated imaging techniques, such as near-infrared imaging or fluorescence imaging, to identify initial indications of tooth decay that may not be easily detectable by conventional approaches.

B. Data analysis: Tiny microchips gather data on tooth composition, enamel condition, and the likelihood of decay. Machine learning algorithms can be embedded in the chips to assess this data, offering valuable information on the severity and progression of tooth decay.

C. Real-time monitoring: Smart chips enable constant monitoring, facilitating prompt evaluation of any alterations in tooth condition. Dentists can obtain immediate feedback, allowing them to take preventive action and implement procedures.

D. Patient involvement: Smart chip technology has the potential to enhance patient involvement by displaying visual representations or sending notifications related to dental health. Patients can thus become more aware of the state of their teeth, encouraging the adoption of oral hygiene measures.

E. Treatment planning: The data obtained by these chips can help dentists develop customized treatment plans based on the specific characteristics of decay lesions. This focused strategy has the potential to result in effective and preemptive therapy.

F. Integration with digital records: Smart chips can merge seamlessly with digital patient records, creating a full dental history that aids in the ongoing monitoring and management of patients.

G. Minimizing subjectivity: Utilizing technology diminishes the inherent subjectivity of visual examinations, resulting in a more precise and impartial evaluation of oral health.

Integrating smart chips into tooth decay detection is consistent with the
broader trend of utilizing technology to enhance dental diagnosis, optimize
treatment results, and encourage oral hygiene.
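Points A-C above can be illustrated with a toy decision rule. Everything in the sketch below is hypothetical: no real chip exposes these fields, and the thresholds are invented for illustration. It simply shows how continuous telemetry (oral pH samples and sugar-exposure events) might be reduced to a caries-activity flag.

```python
# Hypothetical sketch of turning chip telemetry into a caries-activity flag.
# Sensor fields and thresholds are invented; real devices and algorithms differ.

CRITICAL_PH = 5.5  # enamel demineralization is commonly said to begin near pH 5.5

def caries_risk(readings, max_sugar_events=4):
    """readings: list of (oral_pH, sugar_event) samples over one day.
    Returns 'high' if the mouth spends a large fraction of the day below
    the critical pH, or sugar exposures are frequent; otherwise 'low'."""
    acidic = sum(1 for ph, _ in readings if ph < CRITICAL_PH)
    sugar_events = sum(1 for _, sugar in readings if sugar)
    if acidic / len(readings) > 0.3 or sugar_events > max_sugar_events:
        return "high"
    return "low"

day = [(6.8, False), (5.2, True), (5.3, False), (6.9, False), (5.1, True), (6.7, False)]
print(caries_risk(day))
```

An embedded ML model (point B) would replace this hand-written rule with one learned from outcome data, but the input-to-flag pipeline is the same shape.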

Figure (8-2): AI-enhanced image showing models applied for caries identification and quantification (yellow) and enamel (red) [32].

The ability to identify the dentino-enamel junction (DEJ) allows carious lesions to be classified, and their progression or resolution represented, in accordance with the evidence-based literature.

2.3.3 AI In Endodontics

The objective of endodontic therapy is to provide exceptional care in order to maintain the tooth's functionality and prevent additional issues. Successful chemo-mechanical preparation and proper filling of the root canal system in endodontic practice are strongly linked to a comprehensive understanding of root canal morphology.

Inadequate management of all canals results in suboptimal endodontic outcomes and diminishes treatment efficacy.

AI has exhibited high levels of accuracy and precision in detecting, diagnosing, and predicting diseases in the field of endodontics. Artificial intelligence (AI) has the potential to enhance the accuracy and effectiveness of diagnosing and treating endodontic conditions, hence improving the overall success rates of endodontic treatments.

AI models, including convolutional neural networks and artificial neural networks, have shown diverse applications in endodontics. These include analyzing the anatomy of the root canal system, identifying periapical lesions and root fractures, determining accurate working-length measurements, predicting the viability of dental pulp stem cells, and forecasting the success of retreatment procedures.

Additionally, several studies have demonstrated the application of this innovative method in diagnosing and planning therapy for endodontic diseases. Berdouses et al. created a computer-aided automated system (ACDS) to identify occlusal caries lesions in posterior permanent teeth using color dental photographs; this approach follows the International Caries Detection and Assessment System (ICDAS II).

Hiraiwa et al. employed deep learning systems (AlexNet and GoogLeNet) to identify the distal root form of the mandibular first molar on panoramic dental radiographs.

Both deep learning algorithms outperformed radiologists with substantial diagnostic expertise, albeit by a small margin. A deep convolutional neural network (CNN)-based artificial intelligence (AI) diagnosis model was created for CBCT imaging; it utilizes a random-walk segmentation technique to accurately identify and locate periapical lesions and quantify their volume.

Advanced three-dimensional (3D) imaging is widely utilized, by around 80% of endodontists, to effectively identify and treat root canal diseases.

CBCT scanning has enhanced the accuracy of detecting periapical disease compared with 2D periapical radiography [57].

1. Discovering Root Fractures

Vertical root fractures (VRFs) are uncommon occurrences in teeth that have undergone root canal treatment. VRFs can be deceptive, as they typically manifest with subtle symptoms or may even be asymptomatic [58]. VRF occurs in 3.7-30.8% of teeth that have undergone root canal treatment, with the mandibular premolars and molars being the most frequently affected [59].

Treating teeth with a VRF is a challenging task, and the majority of these teeth are either removed or treated using hemisection or root separation methods [60].

Resection of affected roots as part of early treatment can significantly increase the survival duration of the remaining roots; after five and ten years, the survival rates are 94% and 64%, respectively [61].

An early diagnosis of a vertical root fracture (VRF) helps prevent extensive damage to the tissues. The diagnosis of a VRF is made by evaluating the patient's clinical signs and symptoms, and by identifying a fracture line on radiographic examination.

Attempts have been made to enhance the detection capability of radiographic techniques by transitioning from conventional radiography to digital imaging and digital image enhancement [62].

X-ray and CBCT image analysis assist in the detection of a VRF, a diagnosis that can be difficult to establish; inadequate diagnosis can lead to unnecessary surgical operations or tooth extraction.

The clinician often faces a diagnostic challenge due to the clinical presentation and the low sensitivity of diagnostic imaging in detecting vertical root fractures [63]. CBCT imaging has shown superior efficacy in detecting VRFs in teeth without fillings, whereas radiography exhibited somewhat higher accuracy in identifying VRFs in teeth that had undergone root canal treatment.

Researchers have been invited to study novel approaches for enhancing the diagnosis of VRFs, as conventional techniques are unable to detect them accurately [64].

AI applications such as machine learning (ML), convolutional neural networks (CNN), and probabilistic neural networks (PNN) are employed for the detection of VRFs [65].

2. Root Morphology

In order to perform successful root canal therapy, a dentist must possess a thorough comprehension of root canal morphology. If a canal is overlooked and not treated properly, it can harbour microorganisms and ultimately lead to the failure of root canal therapy. In light of these circumstances, a dentist seeks both comprehensive knowledge of root morphologies and an efficient diagnostic tool for their identification [66].

The successful outcome of nonsurgical endodontic treatment relies heavily on the ability to accurately identify the various root canal systems. In the past, this diagnosis was typically made by periapical X-rays and CBCT image analysis [67].

Cone beam computed tomography (CBCT) can now be used precisely in dental clinics for purposes such as analyzing variations in root and canal morphology [68]. While conventional radiography is often utilized and has a crucial function in root canal pathology, diagnosis, and treatment planning, CBCT offers far superior 3D images; as a result, limitations of conventional radiographs, such as distortion and superimposition of bone and dental structures, no longer apply [69]. The DL systems of AI have shown exceptional performance in accurately detecting root canal anatomy [70]. The DL method has the potential to assist diagnostics by accurately categorizing images, which can benefit inexperienced clinicians in interpreting medical images [71].

A DL algorithm built using AI and data interpretation was shown to be capable of evaluating root canal morphologies and their three-dimensional changes following instrumentation [72].

3. Verification of Working Length and Tracing the Apical Foramen

The accuracy of determining the working length is crucial to the success of endodontic treatment [73]. Dental practitioners can master working-length assessment using several different guidelines and techniques, with routine success when different techniques are used [74]. Endodontic treatment necessitates the precise determination of root canal length and the apical foramen. The tactile (hand sensation) method, radiographic determination, and use of an electronic apex locator are the three methods for measuring root canal length [75].

CBCT and electronic apex locators have recently been used as modern tools for detecting the apical foramen [76]. The electronic apex locator, the device most frequently used in clinics to measure root canal length, was developed over time using multiple techniques. The prognosis of root canal treatment can only be assured when instrumentation ends at the apical constriction of the root [77,78].

The ANN diagnostic method helps improve diagnosis and results in better radiographic determination of working length. Further, ANNs are used as decision-support systems in a wide range of clinical circumstances [79].

Few studies have applied artificial intelligence to locate the apical foramen and determine the root canal's working length.

Figure (9-2): Example of how AI technology can illuminate a black, white, and grey radiograph with quantified, precise outputs for doctor communication and patient education [79].

4. Retreatment Predictions

The success rate of endodontic therapy in dentistry is 90%, and the failure rate 10%. Consequently, a dentist would highly appreciate the capability to use AI techniques to examine and identify the cases that fall within this 10%, and to choose whether extraction or retreatment is more favorable. Campo et al. introduced the case-based reasoning (CBR) paradigm to forecast the outcomes and evaluate the advantages and disadvantages of nonsurgical endodontic retreatment [80].

Essentially, the system assessed whether further treatment was required. It integrates data from various domains, including achievement, remembrance, and analytical probability.

The system's strength lies in its ability to predict the outcome of retreatment with a high degree of accuracy; its effectiveness, however, was solely dependent on the quality of the data obtained, which posed a constraint. Case-based reasoning (CBR) is the systematic process of generating solutions to problems based on previous experience with similar challenges; by retrieving comparable cases, crucial knowledge and information can be assimilated. Variations and the presence of diverse procedures can result in system heterogeneity [81]. In order to attain greater responsiveness, selectivity, and precision, future research should take into account the diversity of human approaches and may need to increase sample sizes [82].
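The retrieval step at the heart of CBR can be sketched very simply. The following is a hypothetical illustration (not the Campo et al. system): a new retreatment case is compared with a small base of past cases, and the outcome of the most similar ones is used as the prediction. The case features and records below are invented.

```python
# Hypothetical sketch of CBR retrieval: predict a retreatment outcome from
# the recorded outcomes of the most similar past cases. All data is invented.

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def predict_outcome(new_case, case_base, k=3):
    """Majority outcome among the k most similar past cases."""
    ranked = sorted(case_base, key=lambda case: distance(case[0], new_case))
    outcomes = [outcome for _, outcome in ranked[:k]]
    return max(set(outcomes), key=outcomes.count)

# (lesion_size_mm, fill_quality_0to1, symptomatic_0or1) -> recorded outcome
case_base = [
    ((1.0, 0.9, 0), "success"),
    ((1.5, 0.8, 0), "success"),
    ((4.0, 0.3, 1), "failure"),
    ((5.0, 0.2, 1), "failure"),
    ((2.0, 0.7, 0), "success"),
]
print(predict_outcome((4.5, 0.25, 1), case_base))
```

The heterogeneity problem noted above shows up directly here: if past cases were recorded with different procedures or feature definitions, the distance comparison stops being meaningful.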

5. Pulpal Diagnosis

The incorporation of AI has substantially enhanced the field of pulpal diagnostics, as evidenced by numerous studies. For instance, Tumbelaka et al. [83] employed an artificial neural network (ANN) to distinguish between healthy pulp, pulpitis, and necrotic pulp by relying on pattern recognition. Their research, which utilized a sample of 20 teeth (10 molars and 10 canines), demonstrated that digitizing direct-reading radiography might improve the accuracy of pulpal diagnoses. In a more recent study, Zheng et al. [84] examined the identification of deep caries and pulpitis on panoramic radiographs (PRs). While AI has demonstrated potential in accurately differentiating various pulpal disorders through the use of radiographs, it is crucial to acknowledge the constraints of relying on radiographic evaluation alone. It is essential to highlight the mutually supportive roles of clinical and radiographic investigations in conjunction with additional diagnostic tools, such as pulp and periapical testing. This comprehensive diagnostic approach guarantees a meticulous assessment, hence improving the precision and dependability of pulpal diagnoses in clinical settings.
Figure (10-2): AI-enabled caries detections and tooth part segmentations, provided by a real-time chair-side AI radiology interface [84].

6. Automated endodontic/periapical lesion detection

The application of artificial intelligence (AI) in the automated identification of endodontic and periapical diseases has significantly transformed dental diagnostics. Several studies have proven the practicability of utilizing machine learning algorithms to detect and categorize abnormalities, resulting in expedited diagnosis and precise treatment planning. Okada et al. [85] utilized computer-aided detection (CAD) to differentiate between periapical lesions on cone beam computed tomography (CBCT) scans. A Deep Learning Algorithm (DLA) was developed specifically for detecting periapical diseases; using 2902 orthopantomograms (OPGs) and incorporating assessments by oral and maxillofacial surgeons, the DLA demonstrated potential in aiding the identification of periapical lesions, highlighting the possible collaboration between artificial intelligence (AI) and clinical expertise. Orhan et al. [86] examined the diagnostic accuracy of AI in detecting periapical pathosis using deep convolutional neural networks (CNNs). By examining 153 lesions in 109 individuals using CBCT, it was determined that AI-based deep learning systems (DLS) are effective in identifying apical pathosis, highlighting the potential of AI to improve diagnostic accuracy.

In summary, these investigations showcase the adaptability and revolutionary capacity of AI in automating the identification and categorization of endodontic and periapical diseases, offering clinicians significant resources to aid their diagnostic and treatment-planning efforts.

Figure (11-2): Screenshot of a periapical radiograph that includes an endodontically treated tooth and has been analyzed by AI algorithms [86].

Figure (12-2): Periapical radiolucency on an image without AI definition [86].

7. Micro-robots

Endodontic micro-robots provide an opportunity to increase the quality of endodontic treatment and to reduce errors during the procedure. For example, the Advanced Endodontic Technology Project uses micro-machine technology (Figure (13-2)), in which a micro-robot is mounted on the tooth in need of treatment and is computer-controlled and monitored as it performs the root canal procedure [87].

Figure (13-2): Micro-robots [87].

In short, the micro-machine provides automated precision probing, drilling, cleaning, and filling to assist the clinician in providing error-free therapy.

Figure (14-2): Endodontic micro-robots used to increase the quality of endodontic treatment and to reduce errors during the procedure [87].

8. Other uses of AI in endodontics

Artificial intelligence (AI) has a broad spectrum of applications in the field of endodontics that go beyond conventional diagnostic duties. It has the ability to tackle diverse difficulties in clinical practice, enhance procedural results, and offer significant insights for full evaluations of endodontic procedures. For example, Albitar et al. conducted a study using a convolutional neural network (CNN) with U-Net architecture to identify both filled and unfilled canals, while Buyuk et al. developed a CNN model with Gabor filtering and Long Short-Term Memory (LSTM) networks to detect broken endodontic instruments on OPGs. Kong et al. [87] employed a multilayer perceptron to distinguish between stress-induced and Electric Pulp Tester (EPT)-induced electrodermal activity data. Similarly, Ramezanzade et al. [88] presented a multi-path neural network to forecast the risk of pulp exposure on bitewing radiographs. Choi et al. [89] created an interactive software system to analyze access cavities using three-dimensional models, while Suárez et al. [90] assessed the reliability and precision of ChatGPT, an AI chatbot, in the field of endodontics. Furthermore, it has been observed that ChatGPT possesses vulnerabilities and constraints in comprehending clinical circumstances and formulating treatment-planning decisions [91]. These studies emphasize the revolutionary capacity of AI in endodontics and its ability to alter the benchmarks of dental practice. The subsequent subsections offer an in-depth examination of each subject domain, emphasizing significant methodology, results, and the consequences of these investigations for the wider field of endodontics.

2.3.4 AI In Oral Surgery

Oral surgery relies heavily on the utilization of convolutional neural networks, which enable the identification and categorization of features on panoramic radiographs (PR) and cone-beam computed tomography (CBCT) [92].

The most extensively researched features among these are radiolucent bone lesions of the mandible and maxilla [93]. These cystic or tumorous lesions, which frequently originate from odontogenic sources, present a distinct diagnostic and therapeutic challenge: their radiographic appearance often varies, without any specific sign that definitively indicates a particular diagnosis, and the clinical history of affected patients tends to be broadly similar. While many lesions can be treated with a simple surgical procedure called enucleation and curettage, more complex cases such as ameloblastomas may necessitate more aggressive treatment to minimize the chances of tissue damage, malignant transformation, and recurrence [94]. The methods employed vary in design and enable automated detection, segmentation, feature extraction, and classification of lesions; these algorithms can be applied to either PR or CBCT imaging. Two algorithms, trained on databases containing 500 and 1602 panoramic images respectively, showed performance comparable to that of experts; consequently, such algorithms are expected to be incorporated into non-specialist dental office screening [95].

Enhancing the classification's effectiveness would likely enable these models to serve as diagnostic instruments, aiding experts in their therapeutic decisions. Extracting impacted third molars is a procedure frequently performed by maxillofacial surgeons, oral surgeons, and dentists.

Algorithms have been suggested to enhance the efficiency of several management phases.

In many circumstances, deciding whether or not to extract a tooth can be contentious. However, this decision can be made easier by utilizing a predictive model that measures the angulation of the third molars on PR scans to assess their potential for eruption. Although complications, particularly neurological ones, are still common, there are techniques available that can help guide the surgical process by anticipating the level of difficulty or by identifying contact with the inferior alveolar canal on the PR [96]. Ultimately, by training an artificial neural network using 15 clinical factors, a predictive score for postoperative oedema can be obtained [97].
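An angulation-based eruption predictor of the kind mentioned above can be sketched as a simple logistic model. This is a hypothetical illustration only: the two predictors (angulation and retromolar space) are plausible inputs, but the coefficients are invented placeholders, not values from the cited study.

```python
import math

# Hypothetical sketch of an angulation-based eruption predictor for impacted
# third molars. Coefficients are illustrative placeholders, not fitted values.

def eruption_probability(angulation_deg, retromolar_space_mm):
    """Logistic model: more upright molars (lower angulation) with more
    retromolar space get a higher probability of successful eruption."""
    score = 2.0 - 0.08 * angulation_deg + 0.25 * retromolar_space_mm
    return 1.0 / (1.0 + math.exp(-score))

upright = eruption_probability(angulation_deg=10, retromolar_space_mm=12)
tilted = eruption_probability(angulation_deg=70, retromolar_space_mm=4)
print(f"upright: {upright:.2f}, mesially tilted: {tilted:.2f}")
```

A trained model would estimate the coefficients from follow-up data on erupted versus impacted molars; the neural-network oedema score cited above generalizes the same idea to 15 clinical inputs.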

In the field of implantology, artificial intelligence (AI) has been utilized to identify the type of implant on panoramic radiographs (PR), to forecast osteointegration success or implant survival from various input data, and to enhance the design of dental implants by optimizing their porosity, length, and diameter so as to reduce the stress at the interface between the implant and the bone [98].

The proliferation of automated detection methods for PR will soon result in comprehensive commercial software for the automated analysis of this imaging examination. The interpretation of PR is frequently unclear to most physicians and is seldom carried out by a radiologist; it would therefore be feasible to integrate these detection technologies with classification or text-generation algorithms, enabling the automated creation of a radiological report.

1. Cancers of the oral cavity

Oral cavity carcinomas are the most prevalent tumors of the upper aerodigestive tract, and their incidence is rising. Early intervention typically leads to few surgical complications, but delayed diagnosis can result in a lower survival rate and significant functional and cosmetic consequences [99]. Identifying pre-malignant lesions at an early stage helps prevent them from developing into cancer, but this task is challenging for individuals without specialized knowledge. Certain machine learning tools enable identification using autofluorescence measurement [100] or photography [101].

Fu et al. developed a convolutional neural network (CNN) specifically designed to detect advanced oral cavity carcinoma lesions in photographs; the CNN was trained on 6176 images and achieved performance comparable to that of experts. Also of note is the initial research by van de Goor et al., who trained a neural network to identify carcinomas of the upper aerodigestive tract using data from an "electronic nose" that analyzes specific volatile compounds in breath [102].

The introduction of these technologies into clinical practice will make it easier to screen for and manage these lesions in primary care. Metastatic lymph nodes can be detected using other algorithms trained on digital data obtained from imaging, including texture, grey levels, and the relationships between voxels; this technique is referred to as radiomic analysis or computational imaging [103]. Machine learning algorithms can establish a connection between these complex parameters and diagnostic, clinical, and prognostic aspects, and can thus serve as reliable radiological biomarkers that provide insight into the characteristics and environment of tumor lesions [104].
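The "texture and grey levels" feeding a radiomic model are ultimately just statistics over voxel intensities. The sketch below is a deliberately minimal illustration, assuming a synthetic, flattened region of interest: it computes three first-order features (mean, variance, and Shannon entropy). Real radiomic pipelines add texture matrices (e.g., grey-level co-occurrence) and 3D neighbourhoods.

```python
import math

# Minimal sketch of first-order radiomic feature extraction: statistics over
# the grey levels of a (here flattened) region of interest. Values are synthetic.

def first_order_features(voxels):
    """Return mean, variance, and Shannon entropy of a grey-level region."""
    n = len(voxels)
    mean = sum(voxels) / n
    variance = sum((v - mean) ** 2 for v in voxels) / n
    counts = {}
    for v in voxels:
        counts[v] = counts.get(v, 0) + 1
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return {"mean": mean, "variance": variance, "entropy": entropy}

roi = [52, 54, 55, 55, 60, 61, 61, 61, 70, 71]
print(first_order_features(roi))
```

A downstream classifier then links vectors of such features to outcomes such as nodal metastasis, which is what lets them act as the radiological biomarkers described above.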

One challenge faced by practitioners in managing cancer patients is predicting the progression of the disease from clinical and paraclinical data. Reliable risk stratification enables the selection of additional investigations, the planning of appropriate therapies, and suitable surveillance. Typically, this assessment is facilitated by scores that have been validated through statistical cohort studies, incorporating clinical, paraclinical, or genomic factors.

However, applying these scores at the individual level is challenging due to the vast, uncontrolled amount of patient- and tumor-related data. Consequently, multiple algorithms have been created to forecast survival, recurrence risk, and postoperative complication risk; these rely primarily on clinico-biological data extracted from medical records. This highlights the importance of maintaining accessible prospective registers and of developing text-mining tools that can automatically and systematically extract data from electronic medical records [105-107].

A noteworthy study by Pan et al. trained a model using 12 textural features extracted from CT images of tongue cancer; its performance in predicting survival outcomes was comparable to that of experts. Training prediction models on radiomic data can help bypass the challenge of organizing and arranging textual and numerical data from medical records.

2. In implant dentistry

AI in implant dentistry has been steadily advancing, offering innovative solutions to enhance treatment planning, precision, and patient outcomes. Here are some key areas where AI is making an impact:

A. Treatment Planning: AI algorithms can analyze patient data, includ-


ing CT scans and digital impressions, to assist dentists in planning im-
plant placement with greater accuracy. By analyzing bone density,
anatomical structures, and adjacent teeth positions, AI can help deter-
mine the optimal location and angle for implant placement.
B. Prosthesis Design: AI-powered software can aid in designing custom-
ized implant prosthetics that fit the patient's unique oral anatomy. This
includes designing crowns, bridges, and dentures that are aesthetically
pleasing and functional.
C. Image Analysis: AI algorithms can analyze radiographic images to
detect abnormalities, such as bone defects or sinus involvement, that
may impact the success of implant placement. This assists dentists in
identifying potential complications before surgery and planning ap-
propriate interventions.
D. Predictive Modeling: Machine learning algorithms can analyze large
datasets of implant outcomes to identify patterns and predict the likeli-
hood of success or failure for specific patients or implant configura-
tions. This helps dentists make more informed decisions and personal-
ize treatment plans based on individual risk factors.
E. Surgical Guidance: AI-powered navigation systems provide real-
time guidance during implant surgery, helping dentists precisely posi-
tion implants according to the pre-planned treatment strategy. This im-
proves the accuracy of implant placement and reduces the risk of com-
plications.
F. Patient Education: AI-driven simulations and virtual reality tools can
help patients better understand the implant procedure and expected
outcomes. By visualizing the treatment process, patients can make
more informed decisions and feel more confident about undergoing
implant surgery.
G. Post-operative Monitoring: AI algorithms can analyze post-operative
data, such as healing progress and implant stability, to monitor patient
recovery and identify any signs of complications early on. This en-
ables timely interventions to ensure successful long-term outcomes.
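As a toy illustration of the predictive-modelling idea in point D, the sketch below predicts implant success for a new case by majority vote among its most similar historical cases (k-nearest neighbours). The feature set (bone density, smoking status, implant length, all normalised to 0-1) and the records are hypothetical assumptions, not a validated risk model.

```python
import math

def predict_implant_success(history, case, k=3):
    """Predict implant success for a new case as the fraction of the
    k most similar historical cases that succeeded (toy k-NN model).

    history: list of (features, outcome) pairs, outcome 1 = success.
    """
    dists = sorted(
        (math.dist(feat, case), outcome) for feat, outcome in history
    )
    votes = [outcome for _, outcome in dists[:k]]
    return sum(votes) / k

# Hypothetical records: (bone_density, smoker, implant_length), outcome.
history = [
    ((0.90, 0.0, 0.80), 1), ((0.80, 0.0, 0.70), 1), ((0.85, 0.1, 0.90), 1),
    ((0.30, 1.0, 0.50), 0), ((0.20, 1.0, 0.40), 0), ((0.40, 0.9, 0.60), 0),
]
good_case = (0.88, 0.0, 0.75)   # dense bone, non-smoker
risky_case = (0.25, 1.0, 0.45)  # poor bone, smoker
```

A production system would instead train on thousands of records with many more risk factors, but the pattern — learning from historical implant outcomes to score a new case — is the same.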

Overall, AI has the potential to revolutionize implant dentistry by improving


treatment planning, enhancing surgical precision, and optimizing patient care.
As technology continues to evolve, we can expect further advancements that
will continue to benefit both dentists and patients in the field of implant den-
tistry [108].

Figure (15-2), using artificial intelligence in implant dentistry to analyze post-operative data, such as healing progress and implant stability, monitoring patient recovery and identifying signs of complications early [108].
3- In robotic surgery

AI in dental robotic surgery is a cutting-edge field that combines artifi-


cial intelligence with robotics to enhance the precision, efficiency, and safety
of dental procedures [109].

Here's how AI is being utilized in dental robotic surgery:

A. Robotic Implant Placement: AI-powered robotic systems assist den-


tists in precisely placing dental implants by analyzing patient-specific
anatomy from pre-operative imaging. These systems can autonomously
drill implant sites according to the treatment plan, ensuring optimal po-
sitioning and angulation for long-term success.

B. Endodontic Robotic Systems: AI-driven robotic systems aid in per-


forming root canal therapy by navigating within the root canal space,
removing infected tissue, and shaping the canal to prepare it for filling.
These systems use real-time imaging feedback to ensure thorough
cleaning and shaping while minimizing the risk of procedural errors.

C. Gum Surgery Assistance: AI-powered robotic systems assist peri-


odontists in performing gum surgery procedures, such as gingivectomy
or crown lengthening. These systems can precisely remove excess gum
tissue and reshape the gumline according to the treatment plan, improv-
ing aesthetics and periodontal health [110].

D. Orthognathic Surgery: AI-driven robotic systems assist oral and max-


illofacial surgeons in performing orthognathic surgery to correct jaw
misalignments. These systems can accurately reposition the jawbones
based on pre-operative virtual surgical planning, resulting in improved
facial harmony and occlusion.

E. Soft Tissue Surgery: AI-powered robotic systems aid in soft tissue
surgeries, such as frenectomy or soft tissue grafting. These systems can
precisely manipulate delicate soft tissues while minimizing trauma and
accelerating healing, leading to improved patient comfort and out-
comes.

F. Intraoperative Navigation: AI algorithms integrated into robotic sys-


tems provide intraoperative navigation guidance by overlaying pre-op-
erative imaging data onto the surgical field in real-time. This aug-
mented reality visualization assists dentists in accurately executing the
treatment plan and ensures the desired surgical outcomes [111].

G. Robot-Assisted Biopsies: AI-driven robotic systems facilitate precise


tissue sampling for biopsies in cases of oral lesions or suspected oral
cancer. These systems can autonomously guide biopsy instruments to
the target area while avoiding critical structures, improving diagnostic
accuracy and reducing patient discomfort.
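The navigation idea in point F rests on registering pre-operative image coordinates to the patient's anatomy in the surgical field. The sketch below shows the core arithmetic for a 2D rigid transform (rotation plus translation) and its inverse; a real system estimates a 3D transform from fiducials or surface matching, so the parameters and coordinates here are purely illustrative.

```python
import math

def rigid_transform(point, theta, tx, ty):
    """Map a 2D point from pre-operative image space into the surgical
    field: rotate by theta, then translate by (tx, ty)."""
    x, y = point
    return (
        x * math.cos(theta) - y * math.sin(theta) + tx,
        x * math.sin(theta) + y * math.cos(theta) + ty,
    )

def inverse_rigid_transform(point, theta, tx, ty):
    """Invert the transform: undo the translation, then rotate back."""
    x, y = point[0] - tx, point[1] - ty
    return (
        x * math.cos(-theta) - y * math.sin(-theta),
        x * math.sin(-theta) + y * math.cos(-theta),
    )

# Planned implant site in image coordinates (mm); in a real navigation
# system the transform parameters come from fiducial registration.
planned = (12.0, 5.0)
theta, tx, ty = math.radians(30), 4.0, -2.0
in_surgical_field = rigid_transform(planned, theta, tx, ty)
round_trip = inverse_rigid_transform(in_surgical_field, theta, tx, ty)
```

The round-trip check (map into the field, then back) is the same consistency test a navigation system runs against its tracked fiducials to report registration error.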

Figure (16-2), robotic surgery in which human body motion and human intelligence are simulated [112].

Figure (17-2), with the assistance of robotics and navigational surgery, clinicians can achieve more successful implant placement procedures and lower the risk of failure [112].

2.3.5 AI in prosthodontics

The integration of artificial intelligence with design software enables


dentists to create optimal and visually pleasing prostheses by taking into ac-
count various factors such as facial measurements, anthropological calcula-
tions, ethnicity, and patient preferences. Additionally, a significant advance-
ment in this domain is the utilization of CAD/CAM technology, which allows
for the creation of 2D and 3D models [113,114].

Virtual Reality Simulation (VRS) can be utilized to replicate facial features during therapy, using personalized AI-powered devices that would be more readily embraced by the younger population.
AI-enabled restorative dentistry, utilizing computer-aided design and manufacturing technology, has already been successfully used in adult dental practice. It has the potential to greatly benefit dental restorations and improve aesthetics. Takahashi et al. introduced a deep Convolutional Neural Network (CNN) pre-trained with ImageNet weights and used it to diagnose partially edentulous arches from oral photographs, achieving an accuracy of 99.5% for maxillary arches and 99.7% for mandibular arches. The study demonstrated the effectiveness of deep learning in designing removable partial dentures. To accurately segment upper and lower teeth while preserving their boundaries, Xu et al. proposed a two-level hierarchical CNN approach using 3D dental model images [115].
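The transfer-learning pattern used by Takahashi et al. — a backbone pre-trained elsewhere, with only a small classification head trained on the dental data — can be sketched without a deep-learning framework. Below, a fixed ("frozen") linear feature map stands in for the pre-trained CNN backbone, and only a logistic head is trained; the frozen weights and the toy inputs standing in for photographs of the two arch classes are invented for illustration.

```python
import math

IN_DIM, FEAT_DIM = 8, 3
# "Frozen" feature extractor: fixed weights standing in for a CNN
# backbone pre-trained on ImageNet (these are NOT real learned weights).
FROZEN_W = [
    [1, 1, 1, 1, 0, 0, 0, 0],      # responds to the front half of the input
    [0, 0, 0, 0, 1, 1, 1, 1],      # responds to the back half
    [1, -1, 1, -1, 1, -1, 1, -1],  # responds to alternation
]

def extract_features(x):
    """Non-trainable feature map (the 'frozen' backbone)."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in FROZEN_W]

def train_head(samples, lr=0.05, epochs=200):
    """Train only the classification head on top of the frozen features --
    the essence of transfer learning from pre-trained weights."""
    w, b = [0.0] * FEAT_DIM, 0.0
    for _ in range(epochs):
        for x, y in samples:
            f = extract_features(x)
            z = sum(wj * fj for wj, fj in zip(w, f)) + b
            p = 1.0 / (1.0 + math.exp(-z))  # sigmoid output
            err = p - y                     # log-loss gradient w.r.t. z
            w = [wj - lr * err * fj for wj, fj in zip(w, f)]
            b -= lr * err
    return w, b

def classify(w, b, x):
    f = extract_features(x)
    return 1 if sum(wj * fj for wj, fj in zip(w, f)) + b > 0 else 0

# Toy stand-ins for photographs of the two arch classes (label 1 vs 0).
samples = [([0.9] * 4 + [0.1] * 4, 1)] * 5 + [([0.1] * 4 + [0.9] * 4, 0)] * 5
w, b = train_head(samples)
```

Freezing the backbone is what lets a small dental dataset suffice: only the handful of head parameters is fitted, while the general-purpose features come from the much larger source dataset.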

2.2.5. Application of AI in Prosthodontics

Artificial intelligence (AI) employs machine-learning models to repli-


cate human intelligence and behavior. These models are developed by statisti-
cal analysis of historical data and trained using previously collected data
[116].

The exponential growth of digital data enables the training of AI systems


to generate more accurate outcomes. The integration of artificial intelligence
in prosthodontics has brought significant changes in its use for automated di-
agnoses, predictive measurements, and classification or diagnostic tools.
Prosthodontics utilizes various aspects of modern dental technology, such as
the replacement of traditional impression-making methods with digital im-
pressions using intraoral scanners.

Intraoral scanners are highly dependable for routine applications, particu-


larly in the fabrication of single crowns or short-span FPDs [117,118]. Never-
theless, progress in the scanning domain has resulted in its application in the
production of full dentures and intraoral scans for maxillofacial purposes.
Margin identification in fixed prosthodontics was accomplished through the
utilization of artificial intelligence after an intraoral scan. CAD/CAM, an ab-
breviation for "computer-aided design/computer-aided manufacturing", is em-
ployed in the fabrication of both permanent and removable dental prostheses
[119,120] . This technique utilizes data from numerous real crowns to offer
an optimal crown design suitable for various situations. In modern times, vari-
ous branches of dentistry have employed digital tools to assist patients in
achieving the aesthetically pleasing smiles they desire. These tools encompass
3D face tracking and cost-effective virtual 3D data hybrids, such as frag-
mented cone beam computed tomography (CBCT), intraoral scans, and face
scans. The virtualization of a patient's anatomy is a necessary step in any ther-
apeutic action aimed at altering their smile. The early smile designs were pro-
duced by creating basic sheet drawings from two-dimensional printed images
of patients [121,122]. The integration of AI in prosthetic dentistry has resulted
in numerous innovative possibilities. This includes the ability to generate oc-
clusal morphology in crown contemplation, even in cases of wear or fracture,
as well as programmed teeth setting for dentures and automatic framework
designs for removable dental prostheses. Additionally, AI serves as an educa-
tional tool, providing guidance to students at various levels of education, from
new students to postgraduates. It also offers support to less-experienced un-
dergraduate students in their professional development [123,124].

Figure (18-2), CAD/CAM and AI employed in the fabrication of both permanent and removable dental prostheses [123].

- Accuracy of Intraoral Scanners Versus Traditional Impressions

The use of digital technologies in medicine and dentistry has allowed


clinicians to enhance the efficiency, quality, and patient experience of diag-
nostic and intervention procedures. In the field of dentistry, traditional meth-
ods of taking dental impressions involved placing impression material in a
tray and inserting it into the mouth until it hardened, creating a negative
replica of the desired structures. However, the introduction of intraoral scan-
ners (IOS) for digital impressions has the potential to improve the accuracy
and precision of oral rehabilitations. Unlike the conventional approach, IOSs
generate dental impressions by combining multiple three-dimensional images
to create a complete three-dimensional object.

Recommendations have been made for improving digital impressions.


It may be interesting to understand the current use of IOS technology among
dental clinicians who are late adopters or resistant to new technology.

Figure (19-2), virtual reality simulation (VRS) technology can be used to simulate facial profiles post-treatment [124].

The PRISMA-A (Preferred Reporting Items for Systematic Reviews and Meta-Analyses for Abstracts) checklist was introduced in 2013 to help authors create abstracts that can be quickly evaluated for review validity, provide a clear description of findings, undergo peer review before publication or conference collection, and be easily retrieved after an electronic search. The checklist guides authors in condensing a systematic review into the essential information for an abstract that will appeal to a broad audience. It is crucial to quickly evaluate and compare the precision and clinical reliability of IOS and conventional impressions based on recent systematic reviews and meta-analyses, in order to make informed decisions about whether to adopt newer technologies. Additionally, it is important to clarify the differences between these two approaches, highlighting the advantages and disadvantages of each, and to evaluate and compare the level of reporting quality in the summaries of these secondary reports [125].

Therefore, this study had two objectives:


1. To assess the accuracy and reliability of IOS technology in dentistry using up-to-date secondary sources.

2. To evaluate the level of reporting quality in the titles and abstracts of


the literature that has been included.

A dental impression is a negative imprint or positive digital image that


captures the intraoral anatomy. It is used to create a three-dimensional replica
of the anatomical structure for permanent record or to produce a dental
restoration or prosthesis. Acquiring an accurate impression is crucial for suc-
cessful dental restorations. The most commonly used materials for impression
acquisition are irreversible hydrocolloids, vinyl siloxane ether, vinyl
polysiloxane, and polyether. These materials are biocompatible and offer high
accuracy for conventional impression-acquisition methods.

Nevertheless, the lack of a standardized operating protocol for the impression-taking procedure and the distortion of the impression or plaster cast
material have a negative impact on the accuracy of the model. This, in turn,
affects the precision of the three-dimensional (3D) model data and the pro-
duction of fixed prostheses [126]. Additionally, the traditional impression-
making procedure is frequently described as uncomfortable and unpleasant
due to its tendency to trigger the gag reflex and the unpleasant taste of the im-
pression material [127].

Advancements in computer science have led to significant changes in


fabrication protocols. Computer-aided design (CAD) and computer-aided
manufacturing (CAM), along with digital intraoral scanners, are increasingly
used for creating dental prostheses and models [128]. Compared to traditional
methods, using an intraoral scanner to obtain a digital impression offers sev-
eral advantages, including easy repetition, direct visualization of the model,
improved time efficiency, and the ability to acquire data for CAD/CAM pros-
thetics directly at the patient's chairside. In clinical settings, patients also pre-
fer digital impression acquisition over traditional methods [129]. Addition-
ally, the digital technique is valuable for treatment planning and simulation in
the aesthetic area. Utilizing direct model visualization and a preview of the fi-
nal occlusion and smile design can improve the overall treatment outcome.
Moreover, incorporating morphometric software can assist in evaluating
the accuracy of the prosthesis, particularly in terms of marginal adaptations
for crowns and abutments [130].

From a technical standpoint, digital impressions can be combined with


other digital datasets, such as cone-beam computed tomography (CBCT) and
optical facial scans [131]. This integration is crucial for ensuring the accuracy
of intraoral scan data when reproducing different oral environments. Recent
studies have consistently found that using direct intraoral scanners is a satis-
factory substitute for conventional methods, as concluded in comparisons be-
tween the two approaches [132-135].

The increasing popularity of directly digitalized intraoral impressions has


resulted in a corresponding increase in research studies that investigate the
technical reliability of various intraoral scanners and indirect digitization pro-
tocols using a desktop scanner. In their comprehensive research, Guth et al.
[136] evaluated the scan quality produced by five different sets of direct digi-
tizing scanners. The researchers examined spatial disparities and noted sub-
stantial variations across direct digital scanners, as well as with indirect cast
digitization.

Despite using a conventional metal-based (titanium) reference, the au-


thors were unable to definitively endorse the use of powder before scanning
and suggested additional evaluation.

Tomita et al. [137] conducted a study that was similar to previous work.
They used a standard epoxy model and compared the digitalization of plaster
models created from alginate and silicone-based impressions, which are com-
monly used in dentistry. They used a unique coordinate system-based ap-
proach and their findings supported the findings of previous studies. They
also found that intraoral scanning processes tended to result in smaller mea-
surements compared to the set reference, indicating negative deviations. In the
same study, indirect digitization showed positive deviations.

Nevertheless, they proposed that the intraoral scanning procedure might


offer greater precision compared to traditional plaster model techniques when
assessing digital linear distance.
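The deviation analysis described above reduces to two standard quantities: trueness (closeness of the measurements to the reference) and precision (agreement among repeated measurements). The minimal sketch below computes both for repeated linear-distance measurements; the reference value and the scan readings are invented, chosen only to mimic the negative (intraoral) and positive (indirect) deviations reported.

```python
from itertools import combinations

REFERENCE_MM = 35.000  # known linear distance on a reference model

def trueness(measurements, reference):
    """Mean signed deviation from the reference: negative values mean the
    scans under-measure the reference (as reported for intraoral scanning
    in the Tomita et al. study)."""
    return sum(m - reference for m in measurements) / len(measurements)

def precision(measurements):
    """Mean absolute pairwise difference among repeated measurements:
    smaller values indicate more repeatable scans."""
    pairs = list(combinations(measurements, 2))
    return sum(abs(a - b) for a, b in pairs) / len(pairs)

# Hypothetical repeated digital (IOS) and indirect (plaster-cast) readings.
ios_scans = [34.96, 34.95, 34.97, 34.94]   # slight negative deviations
cast_scans = [35.06, 35.09, 35.03, 35.12]  # slight positive deviations
```

With these invented numbers the IOS workflow shows negative trueness and tighter precision than the indirect workflow, matching the qualitative pattern the text describes; real studies compute the same quantities over full 3D surface deviations rather than single distances.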

Nevertheless, the accuracy of results obtained from intraoral scanners


has been compromised by disparities in restorative materials and variations in
tissue surfaces caused by uneven light scattering [138]. Li et al. [139] found
that scanning highly translucent materials with an intraoral scanner led to re-
duced scan accuracy and morphological variations. Bocklet et al. [140] also
observed that both the type of restoration material and the type of intraoral
scanner influenced the accuracy of the scan data.

In order to address certain difficulties encountered in direct digital scan-


ning, the application of powder before scanning has been suggested. How-
ever, there is currently no established procedure for this, and the outcomes
vary depending on the scanning method used.

When manufacturing a prosthesis, it is necessary to consider both infor-


mation about the abutment teeth and the relationships with the surrounding
teeth. In a recent typodont study, Son et al. [141] evaluated the impact of in-
terproximal distance on scan accuracy to record tooth preparations, conclud-
ing that trueness of a digitalized impression improved with the interproximal
distance (1.0 > 0.6 mm). These observations were attributed to the scanner light's access to the contour of the adjacent tooth, where sharp, angled surfaces produce a false bridging-like effect. While the underlying surface geometry is significant, the discrepancy in optical interaction could also play an important role and might warrant an algorithmic adjustment. Clinically, patients can present with translucent or opaque dental materials depending on their treatment history. A recent systematic review concluded that the
accuracy of digital workflow for a fixed dental prosthesis was similar to con-
ventional protocols. It was emphasized that the quality of the scanned data
was critical for determining the accuracy of the additively manufactured casts
[142]. Therefore, the validity of a digital impression can be questioned, partic-
ularly in cases with multiple types of restorations and prosthetics.

The data obtained may vary in quality depending on the shape and sur-
face features of the prosthetic materials found in the neighboring and oppos-
ing teeth. In addition, a recent study examined the internal fitting of a prosthe-
sis using an intraoral scanner [143]. However, there is a lack of research in the
current literature regarding the impact of the relationship between external fit-
ting, such as proximal and occlusal contact, and the surrounding material
(tooth or restoration) from a clinical standpoint. This gap in knowledge leaves
us uncertain about the potential effects of material in the immediate vicinity.

To tackle this problem, we formulated a hypothesis that suggests the ac-


curacy of the final prosthesis is influenced by the material properties of
nearby or opposing prosthetic restorative material when using an intraoral
scanner. Consequently, our objective was to investigate how different restora-
tive materials impact the replication of the outer surfaces.

The objective of this study was to assess the precision of digital impressions of crowns made from different materials (polymethyl methacrylate, zirconia, gold, and cobalt-chromium alloy) utilizing either an intraoral scanner or a traditional impression-acquisition approach.

The subsequent null hypotheses were examined:

A. There are no differences in accuracy or trueness between dental crowns made from different restorative materials when captured using a digital impression.
B. There are no notable disparities between traditional and digital
techniques for capturing impressions using various materials.
C. The scan quality remains consistent regardless of whether pow-
der is used or not.
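Null hypothesis A would typically be tested with a one-way analysis of variance across the material groups. The sketch below computes the ANOVA F statistic from first principles on hypothetical per-material scan deviations; the numbers are illustrative, and a real analysis would compare F against the critical value for the relevant degrees of freedom.

```python
def one_way_anova_f(groups):
    """One-way ANOVA F statistic: between-group mean square divided by
    within-group mean square. A large F suggests the group means (here,
    mean scan deviation per crown material) genuinely differ."""
    k = len(groups)                     # number of material groups
    n = sum(len(g) for g in groups)     # total number of measurements
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(
        len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups
    )
    ss_within = sum(
        sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups
    )
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical scan deviations (micrometres) for crowns of four materials.
pmma = [18.0, 20.0, 19.0, 21.0]
zirconia = [25.0, 27.0, 26.0, 28.0]
gold = [30.0, 29.0, 31.0, 32.0]
cocr = [24.0, 23.0, 25.0, 26.0]
f_stat = one_way_anova_f([pmma, zirconia, gold, cocr])
```

If F exceeds the critical value (for 3 and 12 degrees of freedom here), hypothesis A would be rejected; when the groups are statistically indistinguishable, F stays near zero.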

CHAPTER THREE
DISCUSSION

Discussion

This chapter considers the current state of AI research in dentistry, focusing on algorithm development for dental image analysis. A central question in this area is how to improve the accuracy and reliability of AI algorithms in detecting dental conditions across various imaging modalities.

One approach is to explore deep learning techniques for feature extrac-


tion and classification in dental images. However, challenges such as limited
annotated data and class imbalance need to be addressed to train robust mod-
els capable of generalizing across different patient populations and imaging
settings. Additionally, we need to investigate the interpretability of AI algorithms in dental diagnostics: how can we ensure that AI systems provide transparent and clinically meaningful explanations for their outputs?
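One common remedy for the class-imbalance problem noted above is to weight each class inversely to its frequency during training, so that scarce positive examples (e.g., images showing lesions) are not drowned out by healthy images. The sketch below computes such "balanced" weights using the familiar heuristic w_c = n_samples / (n_classes * n_c); the label counts are hypothetical.

```python
from collections import Counter

def balanced_class_weights(labels):
    """Weight each class inversely to its frequency so that rare classes
    contribute as much to the training loss as common ones:
    w_c = n_samples / (n_classes * n_c)."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {cls: n / (k * c) for cls, c in counts.items()}

# Hypothetical annotation set: 90 healthy images, 10 showing caries.
labels = ["healthy"] * 90 + ["caries"] * 10
weights = balanced_class_weights(labels)
```

The resulting per-class weights are then multiplied into the loss term of each training example, a technique supported by most machine-learning frameworks.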

Longitudinal studies comparing AI-assisted diagnosis to traditional


methods could provide valuable insights into the effectiveness and efficiency
of AI in improving patient outcomes. Exploring the use of AI in orthodontic
treatment planning or prosthetic design could yield significant advancements
in patient-centric dentistry.

AI research in dentistry holds tremendous promise for improving diag-


nostic accuracy, treatment outcomes, and patient experiences. By addressing
key research questions and collaborating across disciplines, we can harness
the full potential of AI to transform oral healthcare delivery and enhance
overall well-being.

CHAPTER FOUR
CONCLUSION AND
RECOMMENDATION

Conclusion and Recommendation

Smart, new dental technologies offer a way to improve consistency significantly and, as a result, patient health. In the future, dental research should strengthen the relationship between oral and general health and concentrate on individualized treatment with patient-centred outcomes.

Robotic assistance in dentistry has become possible thanks to technologi-


cal advancements. Arguably, “augmented intelligence” has been embraced somewhat prematurely. Nevertheless, digital applications will complement human talents and capabilities to provide the best and most cost-effective healthcare to patients. Augmented intelligence based on big data can significantly reduce the number of misdiagnoses and provide more insightful information quickly, accurately, and efficiently. AI can schedule a patient list that incorporates each patient's ongoing requirements and health information; it may predict patient-specific drug complications if patient records are made available; and it could assist with diagnosis and staging as well as predict outcomes.
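The scheduling idea can be made concrete with a toy prioritisation rule: order the recall list by a risk score, breaking ties by time since the last visit. The fields and scores below are invented placeholders; a deployed system would derive them from the patient record and a validated model.

```python
def schedule_patients(patients):
    """Order a recall list so that higher-risk patients come first;
    among equal risks, the patient seen longest ago comes first."""
    return sorted(
        patients,
        key=lambda p: (-p["risk"], -p["months_since_visit"]),
    )

# Hypothetical recall list with model-derived risk scores.
patients = [
    {"name": "A", "risk": 0.2, "months_since_visit": 6},
    {"name": "B", "risk": 0.9, "months_since_visit": 3},
    {"name": "C", "risk": 0.9, "months_since_visit": 12},
]
order = [p["name"] for p in schedule_patients(patients)]
```

Here the two high-risk patients are scheduled first, with the one overdue longest taking priority.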

Therefore, artificial intelligence facilitates achieving the required preci-


sion in specialized fields of dentistry, saving time and effort while minimizing
potential errors or mitigating their impact.

We Recommend:

1. Continuing Education: Develop comprehensive educational programs


on AI in dentistry, covering its various applications from diagnostics to
treatment planning.
2. Collaboration: Foster interdisciplinary collaborations between dentists
and AI experts, encouraging joint research projects and knowledge
sharing.
3. Professional Associations: Partner with dental organizations to host
regular seminars, workshops, and conferences dedicated to AI advance-
ments in dentistry.
4. Case Studies: Showcase diverse case studies and success stories illus-
trating the practical benefits of integrating AI technologies into dental
practices.

REFERENCES

References

1. Deshmukh S. Artificial intelligence in dentistry [Internet]. Vol. 10,


Journal of the International Clinical Dental Research Organization.
2018. p. 47. Available from: http://dx.doi.org/10.4103/jicdro.
jicdro_17_18.
2. Russell S, Norvig P. Artificial Intelligence: A Modern Approach [In-
ternet]. Createspace Independent Publishing Platform; 2016. 626 p.
Available from: https://books.google.com/books/about/Artificial_ Intel-
ligence.html?hl=&id=PQI7vgAACAAJ.
3. Brickley MR, Shepherd JP, Armstrong RA. Neural networks: a new
technique for development of decision support systems in dentistry [In-
ternet]. Vol. 26, Journal of Dentistry. 1998. p. 305–9. Available from:
http://dx.doi.org/10.1016/s03005712(97)00027-4.
4. Kalappanavar A, Sneha S, Annigeri RG. Artificial intelligence: A den-
tist’s perspective [Internet]. Vol. 5, Journal of Medicine, Radiology,
Pathology and Surgery. 2018. p. 2–4. Available from: http://
dx.doi.org/10.15713/ins.jmrps.123.
5. Bell G, Hey T, Szalay A. 2009. Beyond the data deluge. Science.
323(5919):1297.
6. Turing AM, Haugeland J. Computing machinery and intelligence. MA:
MIT Press Cambridge (1950).
7. McCarthy J, Minsky M, Rochester N, Shannon CE. A proposal for the
dartmouth summer research project on artificial intelligence. AI maga-
zine. (2006) 27(4):12–14.
http://jmc.stanford.edu/articles/dartmouth/dartmouth.pdf
8. Tatnall A. History of computers: hardware and software development.
In: Encyclopedia of Life Support Systems (EOLSS), Developed under
the Auspices of the UNESCO. Paris, France: Eolss. (2012).
https://www.eolss.net
9. Weizenbaum J. ELIZA—a computer program for the study of natural
language communication between man and machine. Commun ACM.
(1966) 9(1):36–45. doi: 10.1145/365153.365168
10.Hendler J. Avoiding another AI winter. IEEE Intell Syst. (2008)
23(02):2–4. doi: 10.1109/MIS.2008.20
11. Schmidhuber J. Deep learning. Scholarpedia. (2015) 10(11):32832.
doi: 10.4249/ scholarpedia.32832
12. Liebowitz J. Expert systems: a short introduction. Eng Fract Mech.
(1995) 50(5–6):601–7. doi: 10.1016/0013-7944(94)E0047-K
13. McDermott JP. R1: an expert in the computer systems domain. AAAI
Conference on Artificial Intelligence (1980).
14.Krizhevsky A, Sutskever I, Hinton GE. Imagenet classification with
deep convolutional neural networks. Commun ACM. (2017) 60(6):84–
90. doi: 10.1145/3065386
15.Russakovsky O, Deng J, Su H, Krause J, Satheesh S, Ma S, et al. Ima-
genet large scale visual recognition challenge. Int J Comput Vis. (2015)
115(3):211–52. doi: 10. 1007/s11263-015-0816-y
16. Campbell M, Hoane Jr AJ, Hsu F-h. Deep blue. Artif Intell. (2002)
134(1–2):57–83. doi: 10.1016/S0004-3702(01)00129-1
17.Chao X, Kou G, Li T, Peng Y. Jie ke versus AlphaGo: a ranking ap-
proach using decision making method for large-scale data with incom-
plete information. Eur J Oper Res. (2018) 265(1):239–47. doi:
10.1016/j.ejor.2017.07.030
18. OpenAI. ChatGPT: optimizing language models for dialogue. Avail-
able at: https://openai.com/blog/chatgpt/ (accessed on 7 February
2023).

19.Fang G, Chow MC, Ho JD, He Z, Wang K, Ng T, et al. Soft robotic
manipulator for intraoperative MRI-guided transoral laser micro-
surgery. Sci Robot. (2021) 6(57): eabg5575. doi: 10.1126/scirobotic-
s.abg5575
20. Flowers JC. Strong and weak AI: deweyan considerations. AAAI
Spring symposium: towards conscious AI systems (2019).
21.Hastie T, Tibshirani R, Friedman J. Overview of supervised learning.
In: Hastie T, Tibshirani R, Friedman J, editors. The elements of statisti-
cal learning. New York, NY, USA: Springer (2009). p. 9–41. doi:
10.1007/978-0-387-84858-7_2
22. Ray S. A quick review of machine learning algorithms. International
conference on machine learning, big data, cloud and parallel computing
(COMITCon); 2019 14–16 Feb (2019).
23.Hastie T, Tibshirani R, Friedman J. Unsupervised learning. In: Hastie
T, Tibshirani R, Friedman J, editors The elements of statistical learn-
ing. New York, NY, USA: Springer (2009). p. 485–585.
https://doi.org/10.1007/978-0-387-84858-7_14
24.Zhu X, Goldberg AB. Introduction to semi-supervised learning. Synth
Lect Artif Intell Mach Learn. (2009) 3(1):1–130. doi: 10.1007/978-3-
031-01548-9
25.Zhou Z-H. A brief introduction to weakly supervised learning. Natl Sci
Rev. (2017) 5(1):44–53. doi: 10.1093/nsr/nwx106
26.Agatonovic-Kustrin S, Beresford R. Basic concepts of artificial neural
network (ANN) modeling and its application in pharmaceutical re-
search. J Pharm Biomed Anal. (2000) 22(5):717–27. doi: 10.1016/
S0731-7085(99)00272-1
27.LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. (2015)
521(7553):436–44. doi: 10.1038/nature14539
28.Nam CS. Neuroergonomics: Principles and practice. Gewerbestrasse,
Switzerland: Springer Nature (2020). doi: 10.1007/978-3-030-34784-0
29.Kosan E, Krois J, Wingenfeld K, Deuter CE, Gaudin R, Schwendicke
F. 2022. Patients' perspectives on artificial intelligence in dentistry: A
controlled study. J Clin Med. 11(8).
30. Flores M, Glusman G, Brogaard K, Price ND, Hood L. 2013. P4
medicine: How systems medicine will transform the healthcare sector
and society. Personalized medicine. 10(6):565-576.
31. Cui Y, Yang M, Zhu J, Zhang H, Duan Z, Wang S, Liao Z, Liu W.
2022. Developments in diagnostic applications of saliva in human or-
gan diseases. Medicine in Novel Technology and Devices. 13:100115.
32. Duane B, Taylor T, Stahl-Timmins W, Hyland J, Mackie P, Pollard A.
2014. Carbon mitigation, patient choice and cost reduction-triple bot-
tom line optimisation for health care planning. Public Health.
128(10):920-924.
33.Xie X, Wang L, Wang A. Artificial Neural Network Modeling for De-
ciding if Extractions Are Necessary Prior to Orthodontic Treatment [In-
ternet]. Vol. 80, The Angle Orthodontist. 2010. p. 262–6. Available
from: http://dx.doi.org/10.2319/111608-588.1.
34. Mackin N, Sims-Williams JH, Stephens CD. Artificial intelligence in
the dental surgery: an orthodontic expert system, a dental tool of tomor-
row. Dent Update [Internet]. 1991 Oct;18(8):341–3. Available from:
https://www.ncbi.nlm.nih.gov/ pubmed/1810794.
35. Gelfand AE, Hills SE, Racine-Poon A, Smith AF. Illustration of
bayesian inference in normal data models using gibbs sampling. J Am
Stat Assoc 1990;85:972-85.
36. Rohrer B. End-to-end Machine Learning Library; 2020.
37. Croskerry P. The importance of cognitive errors in diagnosis and
strategies to minimize them. Acad Med 2003;78:775-80.
38. Thanathornwong B. Bayesian-based decision support system for as-
sessing the needs for orthodontic treatment. Healthc Inform Res
2018;24:22-8.
39. S. Ö. Arik, B. Ibragimov, and L. Xing, “Fully automated quantitative
cephalometry using convolutional neural networks,” Journal of Medical
Imaging, vol. 4, no. 1, p. 014501, 2017.
40. M. K. Alam and A. A. Alfawzan, “Evaluation of Sella Turcica bridg-
ing and morphology in different types of cleft patients,” Frontiers in
Cell and Developmental Biology, vol. 8, p. 656, 2020.
41. K. J. Dreyer and J. R. Geis, “When machines think: radiology’s next
frontier,” Radiology, vol. 285, no. 3, pp. 713–718, 2017.
42.R. Leonardi, D. Giordano, F. Maiorana, and C. Spampinato, “Auto-
matic cephalometric analysis,” The Angle Orthodontist, vol. 78, no. 1,
pp. 145–151, 2008.
43. A. Richardson, “A comparison of traditional and computerized meth-
ods of cephalometric analysis,” The European Journal of Orthodontics,
vol. 3, no. 1, pp. 15–20, 1981.
44. Miller, R., Dijkman, D., Riolo, M., Moyers, R. Graphic computeriza-
tion of cephalometric data.1971.
45.Huang Y-P, Lee S-Y. An Effective and Reliable Methodology for Deep
Machine Learning Application in Caries Detection. medRxiv (2021).
46.Fukuda M, Inamoto K, Shibata N, Ariji Y, Yanashita Y, Kutsuna S, et
al. Evaluation of an artificial intelligence system for detecting vertical
root fracture on panoramic radiography. Oral Radiol. (2020)
36(4):337–43. doi: 10.1007/s11282-019-00409-x
47.Vadlamani R. Application of machine learning technologies for detec-
tion of proximal lesions in intraoral digital images: in vitro study.
Louisville, Kentucky, USA: University of Louisville (2020). doi:
10.18297/etd/3519
48. Setzer FC, Shi KJ, Zhang Z, Yan H, Yoon H, Mupparapu M, et al. Ar-
tificial intelligence for the computer-aided detection of periapical le-
sions in cone-beam computed tomographic images. J Endod. (2020)
46(7):987–93. doi: 10.1016/j.joen. 2020.03.025
49. Jaiswal P, Bhirud S. Study and analysis of an approach towards the
classification of tooth wear in dentistry using machine learning tech-
nique. IEEE International conference on technology, research, and in-
novation for betterment of society (TRIBES) (2021). IEEE.
50. Shetty H, Shetty S, Kakade A, Shetty A, Karobari MI, Pawar AM, et
al. Threedimensional semi-automated volumetric assessment of the
pulp space of teeth following regenerative dental procedures. Sci Rep.
(2021) 11(1):21914. doi: 10.1038/ s41598-021-01489-8
51. Lee J-H, Kim D-H, Jeong S-N, Choi S-H. Detection and diagnosis of
dental caries using a deep learning-based convolutional neural network
algorithm. J Dent. (2018) 77:106–11. doi: 10.1016/j.jdent.2018.07.015
52. Kühnisch J, Meyer O, Hesenius M, Hickel R, Gruhn V. Caries detec-
tion on intraoral images using artificial intelligence. J Dent Res. (2021)
101(2). doi: 10.1177/00220345211032524.
53. Schwendicke F, Rossi J, Göstemeyer G, Elhennawy K, Cantu A,
Gaudin R, et al. Cost-effectiveness of artificial intelligence for proximal
caries detection. J Dent Res. (2021) 100(4):369–76. doi:
10.1177/0022034520972335
54. Duane B, Taylor T, Stahl-Timmins W, Hyland J, Mackie P, Pollard A.
2014. Carbon mitigation, patient choice and cost reduction-triple bot-
tom line optimisation for health care planning. Public Health.
128(10):920-924.
55. Jose J, P. A, Subbaiyan H. Different Treatment Modalities followed by
Dental Practitioners for Ellis Class 2 Fracture – A Questionnaire-based
Survey [Internet]. Vol. 14, The Open Dentistry Journal. 2020. p. 59–
65. Available from: http://dx.doi.org/10.2174/1874210602014010059
56. Ramesh S, Teja K, Priya V. Regulation of matrix metalloproteinase-3
gene expression in inflammation: A molecular study [Internet]. Vol. 21,
Journal of Conservative Dentistry. 2018. p. 592. Available from:
http://dx.doi.org/10.4103/jcd.jcd_154_18
57. I. Tsesis, E. Rosen, A. Tamse, S. Taschieri, and A. Kfir, “Diagnosis of
vertical root fractures in endodontically treated teeth based on clinical
and radiographic indices: a systematic review,” Journal of Endodon-
tics, vol. 36, no. 9, pp. 1455–1458, 2010.
58. M. Fukuda, K. Inamoto, N. Shibata et al., “Evaluation of an artificial
intelligence system for detecting vertical root fracture on panoramic ra-
diography,” Oral Radiology, vol. 36, no. 4, pp. 337–343, 2020.
59. D. Prithviraj, H. Balla, R. Vashisht, K. Regish, and P. Suresh, “An
overview of management of root fractures,” Kathmandu University
Medical Journal, vol. 12, no. 3, pp. 222–230, 2015.
60. S. Kositbowornchai, S. Plermkamon, and T. Tangkosol, “Performance
of an artificial neural network for vertical root fracture detection: an ex
vivo study,” Dental traumatology, vol. 29, no. 2, pp. 151–155, 2013.
61. S. Talwar, S. Utneja, R. R. Nawal, A. Kaushik, D. Srivastava, and S. S.
Oberoy, “Role of cone-beam computed tomography in diagnosis of
vertical root fractures: a systematic review and meta-analysis,” Journal
of Endodontics, vol. 42, no. 1, pp. 12–24, 2016.
62. N. Boreak, “Effectiveness of artificial intelligence applications de-
signed for endodontic diagnosis, decision-making, and prediction of
prognosis: a systematic review,” The Journal of Contemporary Dental
Practice, vol. 21, no. 8, pp. 926–934, 2020.
63. V. Nagendrababu, A. Aminoshariae, and J. Kulild, “Artificial intelli-
gence in endodontics: current applications and future directions,” Jour-
nal of Endodontics, vol. 47, no. 9, pp. 1352–1357, 2021.
64. T. Hiraiwa, Y. Ariji, M. Fukuda et al., “A deep-learning artificial in-
telligence system for assessment of root morphology of the mandibular
first molar on panoramic radiography,” Dentomaxillofacial Radiology,
vol. 48, no. 3, p. 20180218, 2019.
65. Z. S. Madani, N. Mehraban, E. Moudi, and A. Bijani, “Root and canal
morphology of mandibular molars in a selected Iranian population us-
ing cone-beam computed tomography,” Iranian endodontic journal, vol.
12, no. 2, pp. 143–148, 2017.
66. S. Rahimi, H. Mokhtari, B. Ranjkesh et al., “Prevalence of extra roots
in permanent mandibular first molars in Iranian population: a CBCT
analysis,” Iranian endodontic journal, vol. 12, no. 1, pp. 70–73, 2017.
67. Y. Xue, R. Zhang, Y. Deng, K. Chen, and T. Jiang, “A preliminary ex-
amination of the diagnostic value of deep learning in hip osteoarthritis,”
PLoS One, vol. 12, no. 6, article e0178992, 2017.
68. X. Wang, W. Yang, J. Weinreb et al., “Searching for prostate cancer by
fully automated magnetic resonance imaging classification: deep learn-
ing versus non-deep learning,” Scientific Reports, vol. 7, no. 1, pp. 1–8,
2017.
69. T. Hiraiwa, Y. Ariji, M. Fukuda et al., “A deep-learning artificial intel-
ligence system for assessment of root morphology of the mandibular
first molar on panoramic radiography,” Dentomaxillofacial Radiology,
vol. 48, no. 3, p. 20180218, 2019.
70. Y. Xue, R. Zhang, Y. Deng, K. Chen, and T. Jiang, “A preliminary ex-
amination of the diagnostic value of deep learning in hip osteoarthritis,”
PLoS One, vol. 12, no. 6, article e0178992, 2017.
71. X. Wang, W. Yang, J. Weinreb et al., “Searching for prostate cancer by
fully automated magnetic resonance imaging classification: deep learn-
ing versus non-deep learning,” Scientific Reports, vol. 7, no. 1, pp. 1–8,
2017.
72. P. Chhetri, N. O. Devi, and K. Dem Lepcha, “Artificial intelligence in
dentistry,” Journal of Clinical Research and Community, vol. 1, no. 1,
2021.
73. J. L. Gutmann and J. E. Leonard, “Problem solving in endodontic
working-length determination,” Compendium of Continuing Education
in Dentistry, vol. 16, no. 3, 1995.
74. M. Saghiri, K. Asgar, K. Boukani et al., “A new approach for locating
the minor apical foramen using an artificial neural network,” Interna-
tional endodontic journal, vol. 45, no. 3, pp. 257–265, 2012.
75. N. Boreak, “Effectiveness of artificial intelligence applications de-
signed for endodontic diagnosis, decision-making, and prediction of
prognosis: a systematic review,” The Journal of Contemporary Dental
Practice, vol. 21, no. 8, pp. 926–934, 2020.
76. M. Gordon and N. Chandler, “Electronic apex locators,” International
endodontic journal, vol. 37, no. 7, pp. 425–437, 2004.
77. X. Qiao, Z. Zhang, and X. Chen, “Multifrequency impedance method
based on neural network for root canal length measurement,” Applied
Sciences, vol. 10, no. 21, p. 7430, 2020.
78. M. Joseph, “Clinical success of two working length determination tech-
niques: a randomized controlled trial,” in Madha Dental College and
Hospital, Chennai, 2019.
79. L. Campo, I. J. Aliaga, J. F. De Paz et al., “Retreatment predictions in
odontology by means of CBR systems,” Computational Intelligence
and Neuroscience, vol. 2016, 2016.
80. D. Gu, C. Liang, and H. Zhao, “A case-based reasoning system based
on weighted heterogeneous value distance metric for breast cancer di-
agnosis,” Artificial intelligence in medicine, vol. 77, pp. 31–47, 2017.
81. V. Nagendrababu, A. Aminoshariae, and J. Kulild, “Artificial intelli-
gence in endodontics: current applications and future directions,” Jour-
nal of Endodontics, vol. 47, no. 9, pp. 1352–1357, 2021.
82. Tumbelaka BY, Oscandar F, Baihaki FN, Sitam S, Rukmo MJSEJ.
Identification of pulpitis at dental X-ray periapical radiography based
on edge detection, texture description and artificial neural networks.
2014;4(3):115–21. [Google Scholar].
83. Zheng L, Wang H, Mei L, Chen Q, Zhang Y, Zhang H. Artificial intel-
ligence in digital cariology: a new tool for the diagnosis of deep caries
and pulpitis using convolutional neural networks. Ann Transl Med.
2021;9(9):763. [PMC free article] [PubMed] [Google Scholar].
84.
85. Okada K, Rysavy S, Flores A, Linguraru MG. Noninvasive differential
diagnosis of dental periapical lesions in cone-beam CT scans. Med
Phys. 2015;42(4):1653–65. [PubMed] [Google Scholar].
86. Orhan K, Bayrakdar IS, Ezhov M, Kravtsov A, Özyürek T. Evaluation
of artificial intelligence for detecting periapical pathosis on cone-beam
computed tomography scans. Int Endod J. 2020;53(5):680–9.
[PubMed] [Google Scholar].
87. Kong Y, Posada-Quintero HF, Tran H, Talati A, Acquista TJ, Chen IP,
Chon KH. Differentiating between stress- and EPT-induced electroder-
mal activity during dental examination. Comput Biol Med. 2023:155.
[PMC free article] [PubMed] [Google Scholar]
88. Ramezanzade S, Dascalu TL, Ibragimov B, Bakhshandeh A, Bjørndal
L. Prediction of pulp exposure before caries excavation using artificial
intelligence: Deep learning-based image data versus standard dental ra-
diographs. J Dent. 2023:138. [PubMed] [Google Scholar]
89. Choi S, Choi J, Peters OA, Peters CI. Design of an interactive system
for access cavity assessment: A novel feedback tool for preclinical en-
dodontics. Eur J Dent Educ. 2023 [PubMed] [Google Scholar]
90. Suárez A, Díaz-Flores García V, Algar J, Gómez Sánchez M, Llorente
de Pedro M, Freire Y. Unveiling the ChatGPT phenomenon: Evaluating
the consistency and accuracy of endodontic question answers. Int En-
dod J. 2023 [PubMed] [Google Scholar]
91. Farajollahi M, Modaberi A. Can ChatGPT pass the "Iranian Endodon-
tics Specialist Board" exam? Iran Endod J. 2023;18(3):192. [PMC free
article] [PubMed] [Google Scholar].
92. Yang H, Jo E, Kim HJ, Cha I-H, Jung Y-S, Nam W, et al. Deep learn-
ing for automated detection of cyst and tumors of the jaw in panoramic
radiographs. J Clin Med 2020;9:1–14.
https://doi.org/10.3390/jcm9061839.
93. Poedjiastoeti W, Suebnukarn S. Application of Convolutional Neural
Network in the Diagnosis of Jaw Tumors. Healthc Inform Res
2018;24:236–41. https://doi.org/10.4258/hir.2018.24.3.236.
94. Effiom OA, Ogundana OM, Akinshipo AO, Akintoye SO. Ameloblas-
toma: current etiopathological concepts and management. Oral Dis
2018;24:307–16. https://doi.org/10.1111/odi.12646.
95. Orhan K, Bayrakdar IS, Ezhov M, Kravtsov A, Özyürek T. Evaluation
of artificial intelligence for detecting periapical pathosis on cone-beam
computed tomography scans. Int Endod J 2020;53:680–9.
https://doi.org/10.1111/iej.13265.
96. Fukuda M, Ariji Y, Kise Y, Nozawa M, Kuwada C, Funakoshi T, et al.
Comparison of 3 deep learning neural networks for classifying the rela-
tionship between the mandibular third molar and the mandibular canal
on panoramic radiographs. Oral Surg Oral Med Oral Pathol Oral Radiol
2020;130:336–43. https://doi.org/10.1016/j.oooo.2020.04.005.
97. Zhang W, Li J, Li Z-B, Li Z. Predicting postoperative facial swelling
following impacted mandibular third molars extraction by using artificial
neural networks evaluation. Sci Rep 2018;8:12281.
https://doi.org/10.1038/s41598-018-29934-1.
98. Revilla-León M, Gómez-Polo M, Vyas S, Barmak BA, Galluci GO, Att
W, et al. Artificial intelligence applications in implant dentistry: A sys-
tematic review. J Prosthet Dent 2021:S0022-3913(21)00272-9.
https://doi.org/10.1016/j.prosdent.2021.05.008.
99.Gómez I, Seoane J, Varela-Centelles P, Diz P, Takkouche B. Is diag-
nostic delay related to advanced-stage oral cancer? A meta-analysis.
Eur J Oral Sci 2009;117:541–6. https://doi.org/10.1111/j.1600-
0722.2009.00672.x.
100. van Staveren HJ, van Veen RL, Speelman OC, Witjes MJ, Star
WM, Roodenburg JL. Classification of clinical autofluorescence spec-
tra of oral leukoplakia using an artificial neural network: a pilot study.
Oral Oncol 2000;36:286–93. https://doi.org/10.1016/s1368-
8375(00)00004-x.
101. Shamim MZ, Syed S, Shiblee M, Usman M, Zaidi M, Ahmad Z,
et al. Detecting benign and precancerous tongue lesions using deep
convolutional neural networks for early signs of oral cancer. Basic Clin
Pharmacol Toxicol 2019;125:184–5.
102. Fu Q, Chen Y, Li Z, Jing Q, Hu C, Liu H, et al. A deep learning
algorithm for detection of oral cavity squamous cell carcinoma from
photographic images: A retrospective study. Eclinicalmedicine
2020;27:100558. https://doi.org/10.1016/j.eclinm.2020.100558
103. Tomita H, Yamashiro T, Heianna J, Nakasone T, Kimura Y,
Mimura H, et al. Nodal-based radiomics analysis for identifying cervi-
cal lymph node metastasis at levels I and II in patients with oral squa-
mous cell carcinoma using contrast-enhanced computed tomography.
Eur Radiol 2021. https://doi.org/10.1007/s00330-021-07758-4.
104. Napel S, Mu W, Jardim-Perassi BV, Aerts HJWL, Gillies RJ.
Quantitative imaging of cancer in the postgenomic era:
Radio(geno)mics, deep learning, and habitats. Cancer 2018;124:4633–
49. https://doi.org/10.1002/cncr.31630.
105. Pan X, Zhang T, Yang Q, Yang D, Rwigema J-C, Qi XS. Sur-
vival prediction for oral tongue cancer patients via probabilistic genetic
algorithm optimized neural network models. Br J Radiol
2020;93:20190825. https://doi.org/10.1259/bjr.20190825
106. Alabi RO, Elmusrati M, Sawazaki-Calone I, Kowalski LP,
Haglund C, Coletta RD, et al. Comparison of supervised machine learn-
ing classification techniques in prediction of locoregional recurrences
in early oral tongue cancer. Int J Med Inf 2020;136.
https://doi.org/10.1016/j.ijmedinf.2019.104068
107. Tighe D, Thomas AJ, Hills A, Quadros R. Validating a risk strat-
ification tool for audit of early outcome after operations for squamous
cell carcinoma of the head and neck. Br J Oral Maxillofac Surg
2019;57:873–9. https://doi.org/10.1016/j.bjoms.2019.07.008
108. Poedjiastoeti W, Suebnukarn S. Application of Convolutional
Neural Network in the Diagnosis of Jaw Tumors. Healthc Inform Res
2018;24:236–41. https://doi.org/10.4258/hir.2018.24.3.236.
109. Yang H, Jo E, Kim HJ, Cha I-H, Jung Y-S, Nam W, et al. Deep
learning for automated detection of cyst and tumors of the jaw in
panoramic radiographs. J Clin Med 2020;9:1–14.
https://doi.org/10.3390/jcm9061839.
110. Orhan K, Bayrakdar IS, Ezhov M, Kravtsov A, Özyürek T. Eval-
uation of artificial intelligence for detecting periapical pathosis on
cone-beam computed tomography scans. Int Endod J 2020;53:680–9.
https://doi.org/10.1111/iej.13265.
111. Poedjiastoeti W, Suebnukarn S. Application of Convolutional
Neural Network in the Diagnosis of Jaw Tumors. Healthc Inform Res
2018;24:236–41. https://doi.org/10.4258/hir.2018.24.3.236.
112. Kikuchi H, Ikeda M, Araki K. Evaluation of a virtual reality sim-
ulation system for porcelain fused to metal crown preparation at Tokyo
Medical and Dental University. J Dent Educ [Internet]. 2013
Jun;77(6):782–92. Available from:
https://www.ncbi.nlm.nih.gov/pubmed/23740915.
113. Sims-Williams JH, Brown ID, Matthewman A, Stephens CD. A
computer-controlled expert system for orthodontic advice [Internet].
Vol. 163, British Dental Journal. 1987. p. 161–6. Available from:
http://dx.doi.org/10.1038/sj.bdj.4806228.
114. Baliga MS. Artificial intelligence - The next frontier in pedi-
atric dentistry. J Indian Soc Pedod Prev Dent [Internet]. 2019
Oct;37(4):315. Available from:
http://dx.doi.org/10.4103/JISPPD.JISPPD_319_19.
115. Rajput, G.; Ahmed, S.; Chaturvedi, S.; Addas, M.K.; Bhagat,
T.V.; Gurumurthy, V.; Alqahtani, S.M.; Alobaid, M.A.; Alsubaiy, E.F.;
Gupta, K.; et al. Comparison of Microleakage in Nanocomposite and
Amalgam as a Crown Foundation Material Luted with Different Luting
Cements under CAD-CAM Milled Metal Crowns: An In Vitro Micro-
scopic Study. Polymers 2022, 14, 2609.
116. Abouzeid, H.L.; Chaturvedi, S.; Abdelaziz, K.M.; Alzahrani,
F.A.; AlQarni, A.A.S.; Alqahtani, N.M. Role of Robotics and Artificial
Intelligence in Oral Health and Preventive Dentistry-Knowledge, Per-
ception and Attitude of Dentists. Oral Health Prev. Dent. 2021, 19,
353–363.
117. Amornvit, P.; Rokaya, D.; Sanohkan, S. Comparison of Accu-
racy of Current Ten Intraoral Scanners. Biomed Res. Int.
2021, 2021, 2673040. [CrossRef].
118. Cabanes-Gumbau, G.; Palma, J.C.; Kois, J.C.; Revilla-León, M.
Transferring the tooth preparation finish line on intraoral digital scans
to dental software programs: A dental technique. J. Prosthet. Dent.
2022, S0022-3913(22)00582-5. [CrossRef].
119. Rekow, E.D. Digital dentistry: The new state of the art—Is it dis-
ruptive or destructive? Dent. Mater. 2020, 36, 9–24. [CrossRef]
[PubMed].
120. Jreige, C.S.; Kimura, R.N.; Segundo, Â.R.T.C.; Coachman, C.;
Sesma, N. Esthetic treatment planning with digital animation of the
smile dynamics: A technique to create a 4-dimensional virtual patient.
J. Prosthet. Dent. 2022, 128, 130–138. [CrossRef].
121. Rekow, E.D. Digital dentistry: The new state of the art—Is it dis-
ruptive or destructive? Dent. Mater. 2020, 36, 9–24. [CrossRef]
[PubMed].
122. Delgado, A.H.S.; Sauro, S.; Lima, A.F.; Loguercio, A.D.; Della
Bona, A.; Mazzoni, A.; Collares, F.M.; Staxrud, F.; Ferracane, J.; Tsoi,
J.; et al. Risk of bias tool and guideline to support reporting of pre-clin-
ical Dental Materials Research and assessment of Systematic Reviews.
J. Dent. 2022, 127, 104350. [CrossRef].
123. Kirubarajan, A.; Young, D.; Khan, S.; Crasto, N.; Sobel, M.;
Sussman, D. Artificial Intelligence and Surgical Education: A system-
atic scoping review of interventions. J. Surg. Educ. 2022, 79, 500–515.
[CrossRef].
124. Afrashtehfar, K.I.; Alnakeb, N.A.; Assery, M.K. Accuracy of in-
traoral scanners versus traditional impressions: A Rapid Umbrella Re-
view. J. Evid. Based Dent. Pract. 2022, 22, 101719. [CrossRef].
125. The Glossary of Prosthodontic Terms: Ninth Edition. J. Prosthet.
Dent. 2017, 117, e1–e105. [CrossRef].
126. Persson, A.S.; Andersson, M.; Oden, A.; Sandborgh-Englund, G.
Computer aided analysis of digitized dental stone replicas by
dental CAD/CAM technology. Dent. Mater. 2008, 24, 1123–1130.
[CrossRef].
127. Berrendero, S.; Salido, M.P.; Ferreiroa, A.; Valverde, A.;
Pradies, G. Comparative study of all-ceramic crowns obtained from
conventional and digital impressions: Clinical findings. Clin. Oral In-
vestig. 2019, 23, 1745–1751. [CrossRef].
128. Su, T.S.; Sun, J. Comparison of repeatability between intraoral
digital scanner and extraoral digital scanner: An in-vitro study. J.
Prosthodont. Res. 2015, 59, 236–242. [CrossRef] [PubMed].
129. Ahrberg, D.; Lauer, H.C.; Ahrberg, M.; Weigl, P. Evaluation of
fit and efficiency of CAD/CAM fabricated all-ceramic restorations
based on direct and indirect digitalization: A double-blinded, random-
ized clinical trial. Clin. Oral Investig. 2016, 20, 291–300. [CrossRef]
[PubMed].
130. Sim, J.Y.; Jang, Y.; Kim, W.C.; Kim, H.Y.; Lee, D.H.; Kim, J.H.
Comparing the accuracy (trueness and precision) of models of fixed
dental prostheses fabricated by digital and conventional workflows. J.
Prosthodont. Res. 2019, 63, 25–30. [CrossRef] [PubMed].
131. Yilmaz, H.; Aydin, M.N. Digital versus conventional impression
method in children: Comfort, preference and time. Int. J. Paediatr.
Dent. 2019, 29, 728–735. [CrossRef].
132. Cattoni, F.; Tete, G.; Calloni, A.M.; Manazza, F.; Gastaldi, G.;
Cappare, P. Milled versus moulded mock-ups based on the superimpo-
sition of 3D meshes from digital oral impressions: A comparative in
vitro study in the aesthetic area. BMC Oral Health 230. [CrossRef]
[PubMed].
133. Kim, J.E.; Park, Y.B.; Shim, J.S.; Moon, H.S. The Impact of
Metal Artifacts Within Cone Beam Computed Tomography Data on the
Accuracy of Computer-Based Implant Surgery: An In Vitro Study. Int.
J. Oral Maxillofac. Implant. 2019, 34, 585–594. [CrossRef]
134. Ender, A.; Zimmermann, M.; Mehl, A. Accuracy of complete-
and partial-arch impressions of actual intraoral scanning systems in
vitro. Int. J. Comput. Dent. 2019, 22, 11–19.
135. Malik, J.; Rodriguez, J.; Weisbloom, M.; Petridis, H. Compari-
son of Accuracy Between a Conventional and Two Digital Intraoral
Impression Techniques. Int. J. Prosthodont. 2018, 31, 107–113. [Cross-
Ref] [PubMed].
136. Guth, J.F.; Runkel, C.; Beuer, F.; Stimmelmayr, M.; Edelhoff,
D.; Keul, C. Accuracy of five intraoral scanners compared to indirect
digitalization. Clin. Oral Investig. 2017, 21, 1445–1455. [CrossRef].
137. Tomita, Y.; Uechi, J.; Konno, M.; Sasamoto, S.; Iijima, M.; Mi-
zoguchi, I. Accuracy of digital models generated by conventional im-
pression/plaster-model methods and intraoral scanning. Dent. Mater. J.
2018, 37, 628–633. [CrossRef] [PubMed].
138. Gonzalez de Villaumbrosia, P.; Martinez-Rus, F.; Garcia-Orejas,
A.; Salido, M.P.; Pradies, G. In vitro comparison of the accuracy (true-
ness and precision) of six extraoral dental scanners with different scan-
ning technologies. J. Prosthet. Dent. 2016, 116, 550-543.e541. [Cross-
Ref].
139. Li, H.; Lyu, P.; Wang, Y.; Sun, Y. Influence of object translu-
cency on the scanning accuracy of a powder-free intraoral scanner: A
laboratory study. J. Prosthet. Dent. 2017, 117, 93–101. [CrossRef].
140. Bocklet, C.; Renne, W.; Mennito, A.; Bacro, T.; Latham, J.;
Evans, Z.; Ludlow, M.; Kelly, A.; Nash, J. Effect of scan substrates on
accuracy of 7 intraoral digital impression systems using human maxilla
model. Orthod. Craniofac. Res. 2019, 22 (Suppl. 1), 168–174. [Cross-
Ref].
141. Son, S.A.; Kim, J.H.; Seo, D.G.; Park, J.K. Influence of different
inlay configurations and distance from the adjacent tooth on the accu-
racy of an intraoral scan. J. Prosthet. Dent. 2021. [CrossRef].
142. Parize, H.; Dias Corpa Tardelli, J.; Bohner, L.; Sesma, N.;
Muglia, V.A.; Candido Dos Reis, A. Digital versus conventional work-
flow for the fabrication of physical casts for fixed prosthodontics: A
systematic review of accuracy. J. Prosthet. Dent. 2021. [CrossRef].
143. Hasanzade, M.; Aminikhah, M.; Afrashtehfar, K.I.; Alikhasi, M.
Marginal and internal adaptation of single crowns and fixed dental
prostheses by using digital and conventional workflows: A systematic
review and meta-analysis. J. Prosthet. Dent. 2020. [CrossRef].
Abstract

Artificial intelligence (AI) has emerged as a transformative force in dentistry, revolutionizing many aspects of patient care, diagnosis, and treatment planning. The integration of AI technologies into dentistry offers effective solutions, ranging from image analysis for radiographic interpretation to personalized treatment recommendations.

The use of machine learning algorithms, such as convolutional neural networks (CNNs) and deep learning models, enhances diagnostic accuracy and enables predictive analytics for oral health outcomes. Beyond improving clinical workflow, AI applications in dentistry contribute to better patient experiences and more personalized healthcare interventions.