
Automatic detection of adenoid hypertrophy on cone-beam computed tomography based on deep learning
Wenjie Dong, Yaosen Chen, Ankang Li, Xiaoguang Mei, and Yan Yang
Wuhan, Hubei, China

Dr Muhammad Asif
INTRODUCTION

 Obstructive sleep apnea (OSA) is a breathing disorder caused by prolonged partial and/or intermittent complete upper airway obstruction that disrupts normal ventilation during sleep.
INTRODUCTION

 Many parents now come to orthodontists with complaints of malocclusion in their children's teeth.
 The cause of these malocclusions is often adenoid hypertrophy and mouth breathing.
 Thus, early, accurate screening for adenoid hypertrophy is necessary for the dentist.
INTRODUCTION

 Recently, CBCT has been increasingly used in the formulation of orthodontic diagnosis and treatment plans11 and research on upper airway morphology.12,13
INTRODUCTION

 Deep learning algorithms have been applied to orthodontic cephalometric measurements.
 Some studies have provided skeletal classification systems and landmark-based convolutional neural networks (CNNs) with lateral cephalograms.18,19
 Neural networks have recently been used to segment the sinonasal cavity and upper airway on CBCT images,20,21 allowing a better understanding of the upper airway.
INTRODUCTION

 However, no study has attempted the automatic classification of normal or blocked upper airways with CBCT images based on deep learning.
INTRODUCTION

 Therefore, this study was designed to develop and evaluate a new deep learning-based method for diagnosing abnormal upper airway morphology on CBCT.
 The proposed system consists of a deep learning-based neural network, which segments the upper airway accurately in 3D space, and a classification model, which judges whether the upper airway is abnormal according to its morphology.
INTRODUCTION

 Finally, we compared the performance of the proposed system with that of a group of experienced orthodontists to test its accuracy.
MATERIAL AND METHODS

 The study followed the Helsinki Declaration on medical protocols and ethics.
 Preoperative CBCT scans of 129 patients were collected.
 Settings: 90 kV, 4.0 mA.
 Scan time: 11.3 seconds.
 Minimal resolution: 0.3-mm voxels.
 Field of view: at least 16 × 3 cm.
 All images were stored in Digital Imaging and Communications in Medicine (DICOM) format.
MATERIAL AND METHODS

 First, we evaluated adenoid enlargement on CBCT using a 4-grade scale previously established for electronic nasopharyngoscopy.22,23
MATERIAL AND METHODS

 All CBCT images were annotated by 2 experienced orthodontists.


MATERIAL AND METHODS

 If the 2 orthodontists disagreed, a third orthodontic specialist (Y.Y., with 30 years of clinical experience) provided an opinion.
 If the third orthodontist was unsure of the diagnosis, the ambiguous CBCT image was excluded.
 Among the 129 CBCT images, 87 were finally retained in our study, including 35
patients with adenoid enlargement and 52 with normal adenoids, after excluding
images of low quality and with controversial diagnoses.
MATERIAL AND METHODS

 Data augmentation techniques were applied to the original medical image dataset to alleviate the problem of the small dataset size.
 In our practice, the original dataset was augmented by performing 5 random affine transformations and 4 random elastic transformations using the MONAI medical image deep learning framework, producing 3D volumes from the original dataset (n = 870); a minimal sketch follows.
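As a rough illustration only, the sketch below shows how such an augmentation could be set up with MONAI's random affine and elastic transforms; the transform parameters, counts, and array shapes are assumptions, not the authors' actual configuration.

```python
# Minimal sketch of MONAI-based 3D augmentation (parameters are illustrative assumptions).
import numpy as np
from monai.transforms import RandAffine, Rand3DElastic

rand_affine = RandAffine(
    prob=1.0,
    rotate_range=(0.1, 0.1, 0.1),    # small random rotations in radians
    translate_range=(5, 5, 5),       # random shifts in voxels
    scale_range=(0.05, 0.05, 0.05),  # roughly +/- 5% scaling
    padding_mode="border",
)
rand_elastic = Rand3DElastic(
    sigma_range=(5, 8),              # smoothness of the random deformation field
    magnitude_range=(50, 150),       # strength of the deformation
    prob=1.0,
    padding_mode="border",
)

def augment_volume(volume: np.ndarray, n_affine: int = 5, n_elastic: int = 4):
    """Return augmented copies of one CBCT volume of shape (D, H, W)."""
    image = volume[np.newaxis]       # add a channel axis -> (1, D, H, W)
    copies = [rand_affine(image) for _ in range(n_affine)]
    copies += [rand_elastic(image) for _ in range(n_elastic)]
    return copies
```

With 87 retained scans, the 9 augmented copies per scan plus the original itself would account for the 870 volumes cited above.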
MATERIAL AND METHODS

 Existing deep learning-based adenoid hypertrophy diagnostic methods were generally designed only for segmentation, not for classification tasks.20,21
 Specifically, those studies only provided doctors with the segmentation result,
and the final decision still had to be made manually.
MATERIAL AND METHODS

 Unlike these methods, our proposed method involves 2 deep learning models that
not only segment the upper airway more accurately but also use the precise
segmentation results to determine whether the airway is blocked.
Hierarchical masks self-attention U-Net (HMSAU-Net)

 Despite the excellent performance of CNN-based methods in current medical image segmentation tasks, they are not well suited to upper airway segmentation.
 Compared with other organs, the upper airway contains more long-distance semantic dependencies that are not easily captured by traditional CNN-based structures.27
 A self-attention U-Net (SAU-Net) was therefore used to capture the features that are highly relevant to our task.
Hierarchical masks self-attention U-Net (HMSAU-Net)

 SAU-Net replaces the traditional U-Net encoder path with a stack of vision transformer (ViT)28 encoders, thus gaining the ability to capture long-distance semantic dependencies.
 Hierarchical masks (HMs) were introduced to ensure that HMSAU-Net could capture sufficient local semantic information, because the global self-attention mechanism of ViT weakens the local feature extraction ability (a comparable architecture is sketched below).
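The HMSAU-Net itself is a custom architecture, but the general idea of pairing a ViT encoder with a U-Net-style decoder can be illustrated with MONAI's off-the-shelf UNETR; this is only a stand-in for the described design, and the image size and hyperparameters below are assumptions.

```python
# Illustrative ViT-encoder segmentation network (a stand-in, not the authors' HMSAU-Net).
import torch
from monai.networks.nets import UNETR

# Assumed input: a single-channel CBCT volume resampled to 96 x 96 x 96 voxels.
model = UNETR(
    in_channels=1,
    out_channels=2,            # background vs. upper airway
    img_size=(96, 96, 96),
    feature_size=16,
    hidden_size=768,           # ViT embedding dimension
    num_heads=12,
)

volume = torch.rand(1, 1, 96, 96, 96)     # (batch, channel, D, H, W)
logits = model(volume)                    # (1, 2, 96, 96, 96) per-voxel class scores
airway_mask = logits.argmax(dim=1)        # hard segmentation of the upper airway
```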
ResNet classification model

 The ResNet classification model has demonstrated excellent performance on various image classification tasks.
 However, the original ResNet possesses too many layers and was therefore not suitable for our task, because our dataset was relatively small.
 To avoid overfitting, we implemented 3 modified ResNet models with fewer layers (ResNet10, ResNet18, and ResNet34), as sketched below.
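For illustration, shallow 3D ResNet variants of the kind described can be instantiated directly from MONAI; the input-channel and class counts below are assumptions (a single-channel airway mask as input and a binary normal-versus-hypertrophy output), not the authors' exact configuration.

```python
# Minimal sketch: shallow 3D ResNet classifiers (illustrative, not the authors' exact models).
import torch
from monai.networks.nets import resnet10, resnet18, resnet34

def build_classifier(depth: int = 10) -> torch.nn.Module:
    """Return a 3D ResNet with the requested depth (10, 18, or 34)."""
    factory = {10: resnet10, 18: resnet18, 34: resnet34}[depth]
    return factory(
        spatial_dims=3,         # volumetric input
        n_input_channels=1,     # e.g., the binary airway mask from the segmentation stage
        num_classes=2,          # normal adenoids vs. adenoid hypertrophy
    )

model = build_classifier(10)
mask = torch.rand(1, 1, 96, 96, 96)          # assumed input size
probabilities = model(mask).softmax(dim=1)   # class probabilities for one volume
```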
ResNet classification model

 The patients' CBCT images were sent to the well-trained HMSAU-Net to obtain a high-precision airway segmentation result.
 This result was automatically input to the well-trained 3D-ResNet to obtain an adenoid hypertrophy diagnosis.
 Finally, the orthodontists can use both results to make a final decision (a sketch of this two-stage pipeline follows).
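A hedged sketch of how the two trained stages could be chained at inference time; the function name, preprocessing assumptions, and decision threshold here are hypothetical and simply stand in for the trained HMSAU-Net and 3D-ResNet models.

```python
# Hypothetical two-stage inference pipeline (names and thresholds are assumptions).
import torch

@torch.no_grad()
def diagnose(cbct_volume: torch.Tensor, segmenter, classifier) -> dict:
    """Segment the upper airway, then classify its morphology.

    cbct_volume: tensor of shape (1, 1, D, H, W), already resampled and normalized.
    segmenter:   trained segmentation network (the HMSAU-Net stage).
    classifier:  trained 3D ResNet classifier.
    """
    seg_logits = segmenter(cbct_volume)
    airway_mask = seg_logits.argmax(dim=1, keepdim=True).float()   # (1, 1, D, H, W)

    cls_logits = classifier(airway_mask)
    prob_hypertrophy = cls_logits.softmax(dim=1)[0, 1].item()

    return {
        "airway_mask": airway_mask,                     # for the clinician to inspect
        "adenoid_hypertrophy": prob_hypertrophy > 0.5,  # assumed decision threshold
        "probability": prob_hypertrophy,
    }
```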
FLOW CHART OF THE PROPOSED SYSTEM
STATISTICAL ANALYSIS

 All statistical analyses were performed with the Python programming language.
 The Dice coefficient was used to evaluate the performance of the trained model in segmenting the upper airway.
 Diagnostic accuracy, sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), F1 score, the receiver operating characteristic curve, and the area under the curve (AUC) were used to assess the performance of the 3D-ResNet model (see the sketch after this list).
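As a rough illustration of how these metrics can be computed in Python, the sketch below uses scikit-learn plus a plain NumPy Dice implementation; the variable names and data are placeholders, not the study's evaluation script.

```python
# Illustrative metric computation on placeholder data (not the study's evaluation code).
import numpy as np
from sklearn.metrics import confusion_matrix, f1_score, roc_auc_score

def dice_coefficient(pred_mask: np.ndarray, true_mask: np.ndarray) -> float:
    """Dice = 2 * |A intersect B| / (|A| + |B|) for binary segmentation masks."""
    intersection = np.logical_and(pred_mask, true_mask).sum()
    return 2.0 * intersection / (pred_mask.sum() + true_mask.sum())

def classification_metrics(y_true, y_pred, y_score) -> dict:
    """Accuracy, sensitivity, specificity, PPV, NPV, F1, and AUC for binary labels."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "f1": f1_score(y_true, y_pred),
        "auc": roc_auc_score(y_true, y_score),   # y_score: predicted probabilities
    }
```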
STATISTICAL ANALYSIS

 The intraclass correlation coefficient (ICC)30 was used to test intraevaluator and interevaluator agreement regarding the segmentation by comparing the manually segmented volumes of the upper airway.
 Cohen's kappa31 and Fleiss's kappa32 were applied to assess the intra- and interobserver agreement (a sketch follows).
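The agreement statistics can be illustrated with common Python libraries; the pingouin and statsmodels calls below operate on made-up ratings and column names, purely as an assumed example of the workflow.

```python
# Illustrative agreement statistics on made-up ratings (not the study's data).
import pandas as pd
import pingouin as pg
from sklearn.metrics import cohen_kappa_score
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# ICC on manually segmented airway volumes (long format: one row per rater per scan).
volumes = pd.DataFrame({
    "scan":   [1, 1, 2, 2, 3, 3],
    "rater":  ["A", "B", "A", "B", "A", "B"],
    "volume": [12.1, 12.4, 9.8, 10.1, 15.3, 15.0],   # cm^3, made-up values
})
icc = pg.intraclass_corr(data=volumes, targets="scan", raters="rater", ratings="volume")

# Cohen's kappa for the agreement between two raters' binary diagnoses.
rater1 = [1, 0, 1, 1, 0]
rater2 = [1, 0, 1, 0, 0]
kappa_two = cohen_kappa_score(rater1, rater2)

# Fleiss' kappa for agreement among three or more raters.
ratings = [[1, 1, 1], [0, 0, 1], [1, 1, 0], [0, 0, 0]]   # rows: scans, columns: raters
table, _ = aggregate_raters(ratings)
kappa_many = fleiss_kappa(table)
```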
RESULTS

 The proposed system was also fast in delivering a diagnosis.
 The 3D-ResNet10 model took the least time (2.61 minutes) to diagnose all 170 CBCT images.
 The average time required to analyze 1 CBCT image was 0.92 seconds, whereas a human expert needed an average of 9.8 seconds per image.
DISCUSSION

 Our results showed that our model's performance was very close to that of human experts while requiring only one-tenth of their time.
DISCUSSION

 In our study, we not only proposed a segmentation model for the upper airway on
the basis of a deep neural network but also obtained a diagnosis of adenoid
hypertrophy by evaluating the morphology of the segmented airway through a
second neural network.
 The average Dice value of upper airway segmentation with our model was 0.960, which is higher than values reported for other organs.17
DISCUSSION

 Some commercial software programs have been published that can segment the upper airway, but they first require manually setting a point, framing the range of the upper airway, and then forming the 3D structure of the airway.
 Establishing landmarks introduces errors that affect the final diagnosis and consumes more time.
DISCUSSION

 These commercial software programs are partly based on semiautomatic segmentation, whose accuracy relies on the gray value and threshold value defined by the operator.
 However, there is no significant gray gradient between the upper airway and surrounding tissues on CBCT images.
DISCUSSION

 After commercial software segments the upper airway, the edge morphology
must be manually adjusted, which is time-consuming and complex.
 Therefore, our study applies the end-to-end characteristics of deep learning and
directly obtains diagnostic results by inputting a CBCT image, eliminating the
process of locating landmarks and reducing the doctor’s workload.
DISCUSSION

 As a screening tool, our deep learning-based diagnosis system demonstrates good sensitivity (0.976), reflecting its ability to detect disease, and good specificity (0.867), reflecting its ability to rule out normal adenoids.
 This diagnostic system can help dentists detect adenoid hypertrophy in children at an early stage to avoid irreversible damage to their physical and mental health.
 We expect this auxiliary diagnostic technique to be accepted and promoted in clinical work because of its rapid monitoring ability and minimal invasiveness.
LIMITATIONS

 Our dataset consists only of data from our hospital.
 Although our model was tested on a public dataset, the result was not as good as that obtained with our dataset, probably because the public dataset was designed for a multiclassification problem.
 In future work, it will be necessary to train our model on images collected from more patients at additional clinical centers to increase its generalizability.
LIMITATIONS

 Second, a variety of factors often must be taken into consideration in the diagnosis and treatment of OSA.
 Compared with human experts, our deep learning model only considers 1 factor
because of the limitations of technology and the datasets, which may lead to
misjudgment.
 We expect to add text information and image information extractors in future
versions of the model to fuse different extracted features and finally form a
multimodal diagnostic system to provide clinicians with efficient and convenient
services.
LIMITATIONS

 Third, the HMSAU-Net model does not perform as well as 3D U-Net in capturing local information.
 Even though we added extra 4-layer supervision information to our task, the essence of those network structures is not changed, so their generalizability is still questionable.
CONCLUSIONS

 We achieved the automatic diagnosis of adenoid hypertrophy on CBCT based on deep learning for screening OSA in children.
 The HMSAU-Net model showed good performance in segmenting the upper airway on CBCT, and 3D-ResNet10 demonstrated sensitivity and specificity similar to those of human experts in diagnosing adenoid hypertrophy in one-tenth of their time.
 The diagnostic system can assist dentists in the early screening of children with OSA and in making treatment plans.
REFERENCES

1. Marcus CL, Brooks LJ, Draper KA, Gozal D, Halbower AC, Jones J, et al. Diagnosis and management of childhood obstructive sleep apnea syndrome. Pediatrics 2012;130:576-84.
2. Smith DF, Amin RS. OSA and cardiovascular risk in pediatrics. Chest 2019;156:402-13.
3. Joosten KF, Larramona H, Miano S, Van Waardenburg D, Kaditis AG, Vandenbussche N, et al. How do we recognize the child with OSAS? Pediatr Pulmonol 2017;52:260-71.
4. Galeotti A, Festa P, Viarani V, D'Anto V, Sitzia E, Piga S, et al. Prevalence of malocclusion in children with obstructive sleep apnoea. Orthod Craniofac Res 2018;21:242-7.
5. Marcus CL, Moore RH, Rosen CL, Giordani B, Garetz SL, Taylor HG, et al. A randomized trial of adenotonsillectomy for childhood sleep apnea. N Engl J Med 2013;368:2366-76.
6. Aboudara C, Nielsen I, Huang JC, Maki K, Miller AJ, Hatcher D. Comparison of airway space with conventional lateral headfilms and 3-dimensional reconstruction from cone-beam computed tomography. Am J Orthod Dentofacial Orthop 2009;135:468-79.
7. Saedi B, Sadeghi M, Mojtahed M, Mahboubi H. Diagnostic efficacy of different methods in the assessment of adenoid hypertrophy. Am J Otolaryngol 2011;32:147-51.
8. Pakbaznejad Esmaeili E, Ilo AM, Waltimo-Siren J, Ekholm M. Minimum size and positioning of imaging field for CBCT scans of impacted maxillary canines. Clin Oral Investig 2020;24:897-905.
9. Aydemir CA, Arısan V. Accuracy of dental implant placement via dynamic navigation or the freehand method: a split-mouth randomized controlled clinical trial. Clin Oral Implants Res 2020;31:255-63.
10. Parker J, Mol A, Rivera EM, Tawil P. CBCT uses in clinical endodontics: the effect of CBCT on the ability to locate MB2 canals in maxillary molars. Int Endod J 2017;50:1109-15.
11. De Grauwe A, Ayaz I, Shujaat S, Dimitrov S, Gbadegbegnon L, Vande Vannet B, et al. CBCT in orthodontics: a systematic review on justification of CBCT in a paediatric population prior to orthodontic treatment. Eur J Orthod 2019;41:381-9.
12. Hsu WC, Kang KT, Yao CJ, Chou CH, Weng WC, Lee PL, et al. Evaluation of upper airway in children with obstructive sleep apnea using cone-beam computed tomography. Laryngoscope 2021;131:680-5.
REFERENCES

13. Zimmerman JN, Vora SR, Pliska BT. Reliability of upper airway assessment using CBCT. Eur J Orthod 2019;41:101-8.
14. Chen H, van Eijnatten M, Aarab G, Forouzanfar T, de Lange J, van der Stelt P, et al. Accuracy of MDCT and CBCT in three-dimensional evaluation of the oropharynx morphology. Eur J Orthod 2018;40:58-64.
15. Peng CL, Ma JY. Semantic segmentation using stride spatial pyramid pooling and dual attention decoder. Pattern Recognit 2020;107.
16. Cheng JZ, Ni D, Chou YH, Qin J, Tiu CM, Chang YC, et al. Computer-aided diagnosis with deep learning architecture: applications to breast lesions in US images and pulmonary nodules in CT scans. Sci Rep 2016;6:24454.
17. Wang Y, Wei X, Liu FZ, Chen JN, Zhou YY, Shen W, et al. Deep distance transform for tubular structure segmentation in CT scans. In: Proc IEEE/CVF Conf Comput Vis Pattern Recognit. Silver Spring: IEEE; 2020. p. 3832-41.
18. Yu HJ, Cho SR, Kim MJ, Kim WH, Kim JW, Choi J. Automated skeletal classification with lateral cephalometry based on artificial intelligence. J Dent Res 2020;99:249-56.
19. Kim H, Shim E, Park J, Kim YJ, Lee U, Kim Y. Web-based fully automated cephalometric analysis by deep learning. Comput Methods Programs Biomed 2020;194:105513.
20. Leonardi R, Lo Giudice A, Farronato M, Ronsivalle V, Allegrini S, Musumeci G, et al. Fully automatic segmentation of sinonasal cavity and pharyngeal airway based on convolutional neural networks. Am J Orthod Dentofacial Orthop 2021;159:824-35.e1.
21. Shujaat S, Jazil O, Willems H, Van Gerven A, Shaheen E, Politis C, et al. Automatic segmentation of the pharyngeal airway space with convolutional neural network. J Dent 2021;111:103705.
22. Parikh SR, Coronel M, Lee JJ, Brown SM. Validation of a new grading system for endoscopic examination of adenoid hypertrophy. Otolaryngol Head Neck Surg 2006;135:684-7.
23. Major MP, Witmans M, El-Hakim H, Major PW, Flores-Mir C. Agreement between cone-beam computed tomography and nasoendoscopy evaluations of adenoid hypertrophy. Am J Orthod Dentofacial Orthop 2014;146:451-9.
24. Yushkevich PA, Piven J, Hazlett HC, Smith RG, Ho S, Gee JC, et al. User-guided 3D active contour segmentation of anatomical structures: significantly improved efficiency and reliability. Neuroimage 2006;31:1116-28.
REFERENCES

25. Garcia-Uso M, Lima TF, Trindade IEK, Pimenta LAF, Trindade-Suedam IK. Three-dimensional tomographic assessment of the upper airway using 2 different imaging software programs: a comparison study. Am J Orthod Dentofacial Orthop 2021;159:217-23.
26. Çiçek Ö, Abdulkadir A, Lienkamp SS, Brox T, Ronneberger O. 3D U-Net: learning dense volumetric segmentation from sparse annotation. In: Ourselin S, Joskowicz L, Sabuncu M, Unal G, Wells W, editors. International Conference on Medical Image Computing and Computer-Assisted Intervention. Berlin: Springer; 2016. p. 424-32.
27. Chen S, Huang S, Pandey S, Li B, Gao GR, Zheng L, et al. Re-thinking self-attention for transformer models on GPUs. In: Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis. St Louis: Association for Computing Machinery; 2021. p. 1-18.
28. Xie E, Wang W, Yu Z, Anandkumar A, Alvarez JM, Luo P. SegFormer: simple and efficient design for semantic segmentation with transformers. Adv Neural Inf Process Syst 2021;34:12077-90.
29. He KM, Zhang XY, Ren SQ, Sun J. Deep residual learning for image recognition. In: Proc IEEE/CVF Conf Comput Vis Pattern Recognit. Silver Spring: IEEE; 2016. p. 770-8.
30. Shrout PE, Fleiss JL. Intraclass correlations: uses in assessing rater reliability. Psychol Bull 1979;86:420-8.
31. Cohen J. A coefficient of agreement for nominal scales. Educ Psychol Meas 1960;20:37-46.
32. Fleiss JL. Measuring nominal scale agreement among many raters. Psychol Bull 1971;76:378-82.
33. Casalegno F, Newton T, Daher R, Abdelaziz M, Lodi-Rizzini A, Schürmann F, et al. Caries detection with near-infrared transillumination using deep learning. J Dent Res 2019;98:1227-33.
34. Setzer FC, Shi KJ, Zhang Z, Yan H, Yoon H, Mupparapu M, et al. Artificial intelligence for the computer-aided detection of periapical lesions in cone-beam computed tomographic images. J Endod 2020;46:987-93.
35. Lee JH, Kim DH, Jeong SN. Diagnosis of cystic lesions using panoramic and cone beam computed tomographic images based on deep learning neural network. Oral Dis 2020;26:152-8.
36. Shen Y, Li X, Liang X, Xu H, Li C, Yu Y, et al. A deep-learning-based approach for adenoid hypertrophy diagnosis. Med Phys 2020;47:2171-81.
37. Liu JL, Li SH, Cai YM, Lan DP, Lu YF, Liao W, et al. Automated radiographic evaluation of adenoid hypertrophy based on VGGlite. J Dent Res 2021;100:1337-43.
THANKS
