
BJR https://doi.org/10.1259/bjr.20210527
© 2022 The Authors. Published by the British Institute of Radiology under the terms of the Creative Commons Attribution-NonCommercial 4.0 Unported License http://creativecommons.org/licenses/by-nc/4.0/, which permits unrestricted non-commercial reuse, provided the original author and source are credited.
Received: 28 April 2021. Accepted: 28 July 2021. Published online: 10 December 2021.

Cite this article as:
San José Estépar R. Artificial intelligence in functional imaging of the lung. Br J Radiol (2022) 10.1259/bjr.20210527.

FUNCTIONAL IMAGING OF THE LUNG SPECIAL FEATURE: REVIEW ARTICLE
Artificial intelligence in functional imaging of the lung
RAÚL SAN JOSÉ ESTÉPAR
Applied Chest Imaging Laboratory, Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, United
States

Address correspondence to: Dr Raúl San José Estépar


E-mail: rsanjose@bwh.harvard.edu

ABSTRACT
Artificial intelligence (AI) is transforming the way we perform advanced imaging. From high-resolution image reconstruction to predicting functional response from clinically acquired data, AI promises to revolutionize the clinical evaluation of lung performance, pushing the boundary in pulmonary functional imaging for patients suffering from respiratory conditions. In this review, we overview the current developments and expound on some of the encouraging new frontiers. We focus on the recent advances in machine learning and deep learning that enable reconstructing images and quantitating and predicting functional responses of the lung. Finally, we shed light on the potential opportunities and challenges ahead in adopting AI for functional lung imaging in clinical settings.

INTRODUCTION
The advent of artificial intelligence (AI) heralds a new era in digital data analysis and empowers us to interpret complex systems through unprecedented modeling capabilities. This power of AI has led to an explosion of applications across multiple disciplines including computer vision and, more recently, health care. Clinical care stands to benefit tremendously from AI to expose meaningful relationships in complex data sets obtained from clinical imaging to molecular medicine. Although AI is still a nascent field in many health-care domains, initial applications and proof-of-concept studies have shown promising and impactful results in diagnosing different disease conditions using only raw data sources like diagnostic imaging.1,2 Thus, the immense analytic capacity of AI technology based on machine learning and deep learning will power human decision-making and complement human cognitive capabilities. Beyond equipping physicians with new abilities, data-driven modeling, as opposed to purely model-based methods, is serving as a robust paradigm that can further improve current cutting-edge algorithmic approaches in image formation, reconstruction, and post-processing.

The functional lung imaging community is recognizing the transformative power of AI. Data-driven approaches are well-positioned to invigorate established techniques in this field, improving robustness and often surpassing existing capabilities. Current functional lung imaging modalities utilize the underlying physics of the image properties related to different disease conditions of the lung.3 The amount of data generated in functional imaging acquisitions, such as multiple MRI snapshots during free-breathing acquisitions or different CT energies, is amenable to data-driven approaches to discover novel relationships across different imaging phases that would otherwise be difficult to identify. Various functional imaging modalities rely on advanced acquisitions and post-processing approaches, and hence AI is attractive as a primary modeling strategy.

Although AI applications in diagnostic imaging have increased rapidly in the last few years,4,5 their clinical application to functional lung imaging is currently more of an evolving opportunity than a tested reality. Farhat et al6 recently reviewed the application of deep learning in pulmonary medicine imaging and noticed that the use of AI in lung imaging is mostly circumscribed to chest CT and X-rays (CXR). In this review, we take a comprehensive look at the growing interest in applying AI technology specifically to pulmonary functional imaging and assess the underlying concepts of the proposed methodologies that utilize machine and deep learning for state-of-the-art image reconstruction, functional assessment, and functional imaging synthesis. We evaluate the opportunities AI presents and weigh in on the challenges ahead for successfully implementing AI in pulmonary functional imaging.

Figure 1. Schematic of a CNN architecture. (A) A traditional CNN architecture used for image classification or regression. An input image is decomposed into multiple globally aggregated features by a final-stage fully connected neural network. Convolutional layers are the main component of CNNs. Additional layers include data pooling to downsample the image domain, drop-out for model simplification, and batch normalization. (B) The U-Net architecture is a type of fully convolutional network that is widely employed in medical imaging applications. A U-Net contains two convolutional steps: an encoder and a decoder. The encoder reduces the input data to a latent space, and the decoder uses this information to recreate a new image. Adapted from Chartrand et al and Zha et al10,11 with permission. CNN, convolutional neural network.
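The sliding-kernel operation summarized in Figure 1A can be made concrete with a short sketch. This toy NumPy function is illustrative only; deep learning libraries implement the same operation (strictly speaking, cross-correlation) with many kernels and channels in parallel:

```python
import numpy as np

def conv2d_valid(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Slide a small kernel over the image and return the output feature map.

    Each output value is the weighted sum of the image intensities under
    the kernel window, which is how a convolutional layer's 'neurons'
    respond at every spatial location.
    """
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Example: a finite-difference kernel responds along an intensity step.
img = np.zeros((5, 5))
img[:, 3:] = 1.0                       # right half bright
edge_kernel = np.array([[-1.0, 1.0]])  # 1x2 edge-detecting kernel
response = conv2d_valid(img, edge_kernel)
print(response.shape)  # (5, 4): one response per valid kernel position
```

Stacking several such maps (one per learned kernel) and feeding them through further layers is what builds up the hierarchical features described in this section.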

DEEP LEARNING IN MEDICAL IMAGING
The emergence of AI as a key component in medical imaging techniques is largely propelled by vast improvements in machine learning, specifically deep learning. Deep learning performs a wide variety of challenging tasks, including classification, regression, clustering, image reconstruction artifact reduction, lesion detection, segmentation, and registration.7 Deep learning extends artificial neural networks,8 which serve as its core building block. Deep learning gained importance in computer vision when neural networks outperformed other methods on several visual recognition tasks. Deep learning in medical imaging is primarily based on the convolutional neural network (CNN) paradigm. LeCun9 introduced CNNs to extend the use of neural networks from 1D signals to multidimensional signals like 2D or 3D volumes and to provide a powerful way to learn representations of images and solve recognition tasks. CNNs are constructed with units of a compact kernel of neurons that slides across an image to produce an output image map. Neurons act like logistic regressors that generate a response at each image location as a weighted sum of the image intensities. The kernels define the weight of each location, and these neural kernels are assembled in multiple channels to create a CNN convolutional layer. Several such layers that function differently but complementarily make up the CNN (Figure 1A). Information flows in a forward fashion, and deeper and deeper layers aggregate it in a non-linear manner. The success of CNNs in medical imaging inspired the development of other deep learning paradigms to exploit the various aspects of the information flowing through the network. A few examples of such advanced network methods are recurrent neural networks (RNNs), autoencoders (AEs) and their variations like U-Nets, generative adversarial networks (GANs), and, more recently, transformers, among others.12–14 Figure 1B illustrates the architecture of a U-Net used in medical applications to generate an output image from an input image after aggregating information at different scales. For more information, we refer the readers to recent reviews of deep learning in radiology.5,10,12,15

Machine learning approaches can be classified into four major categories depending on the nature of the problem being solved and the data elements used as part of the training, viz. supervised learning, unsupervised learning, semi-supervised learning, and reinforcement learning. A model maps a set of inputs to given outputs in supervised learning and requires annotated data

2 of 14 birpublications.org/bjr Br J Radiol;95:20210527

sets. Unsupervised learning aims at finding structure in a data set, as is common in clustering problems. Semi-supervised learning has emerged as an exciting approach that combines supervised and unsupervised techniques to take advantage of non-annotated datasets, which can improve supervised learning by matching specific characteristics of the non-annotated data set. An example of semi-supervised learning is image-to-image translation using GANs.16 Finally, reinforcement learning is based on agents that learn from their environments through trial and error while optimizing some objective function. An example of reinforcement learning in medical imaging is landmark detection.17

Finally, machine learning approaches define the model parameters using training data to solve an optimization problem. The proper definition of the training data set in terms of characteristics, sample size, and image conditions is key to converging to a solution that can be generalized to data sets beyond the training examples. This implies that machine learning needs thorough validation and testing of the models using data points that have not been employed in training. Different techniques, known as cross-validation, are used to check the stability of the model when the training data change. It is essential to understand the conditions under which the model was derived, and modelers need to follow good practices and careful documentation of the training process.18

AI IN FUNCTIONAL IMAGE RECONSTRUCTION
Magnetic resonance imaging (MRI)
MRI has been a primary modality in functional lung imaging because of its safety characteristics and its exceptional ability to reveal functional properties.19 The early challenges due to a lack of protons and signal inhomogeneities in the lungs have been overcome, and MRI can now be used for static and dynamic lung imaging.20 The arrival of ultrashort TE (UTE) MRI with sophisticated clinical hardware has advanced lung imaging at both the structural and functional levels.21 From oxygen-enhanced and hyperpolarized-gas MRI for ventilation imaging21 to Fourier decomposition proton MRI for ventilation/perfusion (V/Q) imaging and dynamic contrast-enhanced (DCE) MRI for microvascular perfusion,19 MRI has become the modality of choice to examine the complex ventilation and perfusion functions in different pathological conditions.22 Essential to MRI pulse sequence design is the need for short echo times and the balance between acquisition time and the signal-to-noise ratio (SNR) that can be achieved with parallel imaging.23 Many of the computational approaches in MRI applications have focused on improving optimal phase encoding from an undersampled version of k-space, which could reduce the acquisition time while keeping SNR levels compatible with image quality.24 Compressed sensing techniques were developed two decades ago for fast MRI reconstruction, including diffusion MRI with hyperpolarized 129Xe.25,26 In the past few years, CNNs and recurrent NNs have taken a prominent role in improving static and dynamic MRI reconstruction by learning the spatiotemporal dependencies in heavily undersampled k-space data.27–31 Duan et al32 showed improved ventilation imaging using a coarse-to-fine neural network from undersampled k-space.32 Reconstruction can be achieved with higher SNR values than compressed sensing reconstruction, paving the way for real-time reconstruction of contrast-enhanced MRI of the lung. Unlike compressed sensing, CNN reconstruction models rely on incorporating prior information, learned as part of the training process, to solve the inverse reconstruction problem.33 However, the reliance on data to define a model implies that rigorous validation is needed.34

Another area where deep learning can have an impact is the inherent need for motion correction in dynamic MRI acquisitions. For example, Fourier decomposition MRI for V/Q imaging relies on registration techniques as a critical step in its reconstruction paradigm. Different approaches based on traditional functional optimization have been proposed that show stable quality results.35 Deep learning registration offers an alternative with low computational cost during the inference stage once the registration model is trained.36,37 Deep learning has also been attempted in MRI to estimate quantitative tissue parameters using quantitative susceptibility mapping (QSM) and MRI fingerprinting to achieve more standardized biomarkers.38 Although these techniques are yet to be applied in preclinical and clinical lung MRI, deep learning could catalyze the translation of these advanced quantitative tools.

Computed tomography (CT)
Volumetric CT (VCT) has high density contrast between air and tissue and is a mainstay of clinical chest radiology. The introduction of helical multislice CT scanning facilitated spatiotemporal 4DCT as a tool in radiation oncology for measuring and managing overall respiratory motion.39 Patient safety is increased because only a low radiation dose is required when combined with advanced iterative reconstruction techniques, and hence functional CT imaging (both 4D and dual-energy) is preferred for broader clinical use. As with MRI reconstruction, new AI methods are pushing ultra-low-dose CT image reconstruction to another level. Major manufacturers are introducing new deep learning schemes that show higher SNR and contrast and improved object detectability compared with standard statistical or model-based iterative techniques.40–42 New techniques under development that combine current iterative reconstruction approaches with denoising CNNs promise to improve the image SNR further.43 In addition to supporting low-dose image reconstruction, deep neural networks have also been employed to reduce breathing artifacts and enhance image quality.44 All these advances will make temporal ultra-low-dose CT a safer and more versatile functional modality in clinical applications of CT.

The cone-beam CT (CBCT) system is becoming a key device in the interventional suite due to its portability and high reconstruction quality for volumetric images. In addition, deep learning is catalyzing dynamic applications with real-time reconstruction from sparse projection data, permitting real-time ventilation imaging in image-guided radiotherapy.45,46 The combination of these improvements can open the door for broader adoption of these preclinical CBCT applications as a lung functional imaging modality.47
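The undersampled k-space problem that compressed sensing and CNN reconstruction both address can be illustrated with a toy zero-filling experiment. This NumPy sketch is purely illustrative and stands in for the naive baseline that a learned model would improve upon:

```python
import numpy as np

def undersample_kspace(image: np.ndarray, keep_every: int = 2) -> np.ndarray:
    """Toy illustration of the inverse problem CNN reconstruction solves:
    drop phase-encoding lines in k-space and reconstruct by zero-filling.
    A learned model would replace the naive inverse FFT step with a
    de-aliasing network trained on prior examples."""
    kspace = np.fft.fft2(image)
    mask = np.zeros_like(kspace)
    mask[::keep_every, :] = 1.0        # keep every n-th phase-encode line
    zero_filled = np.fft.ifft2(kspace * mask)
    return np.abs(zero_filled)

rng = np.random.default_rng(0)
img = rng.random((64, 64))
recon = undersample_kspace(img, keep_every=2)
# Aliasing from 2x undersampling: energy folds into ghost copies,
# so the zero-filled image no longer matches the original.
err = np.mean((recon - img) ** 2)
print(recon.shape, err > 0)
```

With full sampling (`keep_every=1`) the reconstruction is exact; the faster the acquisition, the more the reconstruction must rely on prior information, which is exactly where the learned models cited above come in.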


Dual-energy CT scanning (DECT) with contrast agents (iodine or xenon) has also enabled the assessment of regional ventilation and perfusion by taking advantage of the difference in linear attenuation coefficient at different X-ray energies.48–52 CNNs are being applied to improve DECT imaging fundamentals related to material decomposition,53,54 to simplify dual-energy acquisitions based on single-energy material decomposition,55 and to combine virtual single-energy structural imaging from dual-energy acquisitions. The translation of these techniques can expand the role of DECT in ventilation and perfusion imaging as dual-energy becomes more readily available.

Positron emission tomography (PET-CT) and single-photon emission computed tomography (SPECT) have also been employed to perform V/Q imaging to improve planar lung scintigraphy56 and assess pulmonary inflammation.57 Deep learning solutions are being developed to enhance PET reconstruction and attenuation correction58,59; however, to date, no validation studies have been performed to show the impact of AI-enhanced molecular imaging in the lung. Thus, this area remains an exciting opportunity for AI in the years to come.

AI IN FUNCTIONAL QUANTIFICATION
Automated lung segmentation in functional modalities
For a functional imaging modality, it is important to define the structural components of the lung, such as the lung field, lobar compartments, fissures, and the bronchovascular tree, to locate and quantitate image-based data. Deep learning is significantly evolving and transforming the post-acquisition upstream operations necessary to resolve the lung's structural components to interpret and quantify regional functional markers. Deep learning is indeed replacing the rule-based approaches to segmenting the lung60 and the lobes with more precise and reliable mapping methods based on CNNs that have shown more consistent results across modalities.6 In particular, the use of U-Nets, a specialized neural network architecture for semantic segmentation, has provided compelling results in multiple medical and biomedical imaging segmentation tasks.61,62 These new approaches to image segmentation are superior in part due to their enhanced ability to encode shape priors of the segmented organ based on the provided training data without explicitly modeling the shape. One example of the application of U-Nets to functional modalities is the use of a 2D U-Net to perform volumetric lung segmentation from UTE proton MRI in a multiplane fashion.11 Despite reduced contrast around the lung boundaries, the lung volume estimates in a set of asthmatic and cystic fibrosis patients closely matched the reference values (Figure 2). One caveat for the application of deep learning is the limited availability of training data. Recently, Guo and colleagues showed increased robustness in UTE MRI lung segmentation by including an adaptive k-means step after the initial U-Net segmentation.63 Robust lung segmentation in MRI is essential for quantitative analysis of functional parameters and its use in clinical studies. Similarly, a multiresolution U-Net architecture has been proposed for robust lobar segmentation in CT images to enable regional quantification of dynamic CT series.64,65

Deformable image registration (DIR)
Ventilation imaging. DIR is one of the most employed methods to assess ventilation defects from temporal imaging modalities like 4DCT, CBCT, and MRI. DIR-enabled CT-based ventilation assessment has been successfully used in radiation oncology to avoid damage from radiation therapy as well as to perform dose-response assessment.39 Recently, MRI-based mechanical assessment of the lung via elastic registration has also been used to assess SSc-related fibrosis.66 Ventilation assessment using tissue expansion metrics, based on either the deformation fields generated by DIR or the differences in tissue density between the coregistered image pairs, has shown reasonable correlation with the regional assessment of ventilation using xenon CT48,67 and xenon MRI.68 However, variability between registration approaches has led to a poor correlation between DIR-based ventilation metrics and reference modalities at the voxel level.69

Traditional DIR approaches describe the mapping of two images via a deformable field by finding the elastic transformation parameters that minimize the difference between images acquired at different moments during the respiratory cycle. Deformable registration in the lung has been challenging due to the complexities of describing the transformation in a parametric way when dealing with the large displacements commonly found in registration between images acquired at TLC and FRC while preserving known invariants like lung mass.70 Nevertheless, traditional methods have partially addressed lung registration with reasonable accuracy, albeit with complex methodologies that lack robustness and require long computation times due to the numerical minimization needed for each registration instance.71

Deep learning-based deformable image registration (DLDIR) has emerged in the last 5 years as a new paradigm for registration. One of the main advantages of DLDIR approaches is the explicit or implicit definition of the deformation field via a CNN that can better capture the complexities of the deformation in a particular problem with relatively low computational needs during the inference step. DLDIR can be classified into supervised and unsupervised registration methods. Supervised approaches that regress the displacement vector field between two images using a CNN model were initially employed in DLDIR.72,73 These methods were trained with previously aligned images using either a reference method72 or synthetic deformations.73,74 Although these approaches improve the registration computing times from minutes to just a few seconds, their accuracy is defined by the characteristics of the reference approach used for learning. The reported registration errors on reference data sets are on par with the traditional techniques that have been extensively used in ventilation studies. Unsupervised registration approaches have been explored to overcome the limitation of using an explicit reference deformation. Among them, unsupervised DLDIR has captured attention in the last few years because it needs only limited training data.75 Unsupervised techniques use a mismatch metric between the moving image and the reference image within the training data, as occurs in a traditional registration framework. A CNN model encodes the deformation parameters, and the optimization is done over


Figure 2. Segmentation of the lung field on oxygen-enhanced UTE MRI images using a multiplane (axial, coronal, and final consensus) U-Net approach in a 37-year-old female with cystic fibrosis. The delineation of the lung boundaries can be achieved despite the reduced contrast. Adapted from Zha et al11 with permission. UTE, ultrashort TE.
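Segmentation results such as those in Figure 2 are commonly scored with the Dice overlap coefficient. A minimal sketch, using illustrative toy masks rather than data from the cited study:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice overlap between a predicted and a reference binary mask,
    the standard figure of merit for lung segmentation models."""
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: vacuous perfect agreement
    return 2.0 * np.logical_and(pred, ref).sum() / denom

# Example: two overlapping square "lungs" on a toy 2D slice.
ref = np.zeros((10, 10), dtype=bool)
ref[2:8, 2:8] = True          # 36 voxels
pred = np.zeros((10, 10), dtype=bool)
pred[3:9, 3:9] = True         # 36 voxels, 25 shared with the reference
print(round(dice_coefficient(pred, ref), 3))  # 0.694
```

In 3D the same formula applies voxel-wise, and volume agreement (as reported for the UTE MRI study above) is a complementary but weaker check, since two masks can match in volume while overlapping poorly.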

the parameters of the CNNs rather than the deformation parameters. Once the training is completed, the CNN is employed to generate new deformation parameters from unseen data sets. De Vos and colleagues37 pioneered this framework in lung registration using the multiscale ConvNet architecture (Figure 3). A similar approach has been shown to be feasible for registering CT to CBCT and CBCT to CBCT,76 and one-shot methods have been tailored to track periodic breathing motion patterns.77 Finally, Fu et al77 proposed LungRegNet for 4DCT registration, which employs vascular landmarks to achieve superior performance compared to current methods based on unsupervised registration in the DIR-Lab data set.78

The new breed of lung DLDIR approaches can lead to higher accuracy and more robust registration results that could improve the assessment of regional ventilation at the voxel level; however, extensive validation studies in larger prospective samples should be conducted to confirm this possibility.69,79 Inaccurate registrations can result in lung tissue being mapped to blood vessel voxels, which will cause artifacts in the CT ventilation image in both the Jacobian and HU formulations. Without any doubt, the most exciting characteristic of DLDIR is the low computational cost of resolving a deformation field once the method has been trained. This opens the opportunity for bringing DIR-based ventilation metrics closer to the patient point-of-care when applied to lower-cost setups like 4D CBCT. These exciting techniques are potential modalities for ventilation assessment during treatment in the near future.39

Multiparametric assessment. Registration is also a fundamental processing component of multiparametric structural and functional imaging analyses to correlate structural changes with functional defects in lung pathophysiology.80–82 MacNeil et al83 used volume-matched CT and hyperpolarized helium-3 (3He) MRI with static and diffusion-weighted imaging to define a multiparametric response map (mPRM). Structural changes measured on CT were coupled with regional MRI-based ventilation and microstructure based on the apparent diffusion coefficient (ADC), as shown in Figure 4. mPRM metrics were able to reveal emphysema and small airways disease not otherwise identified with CT or MRI, reflecting the power of multimodal approaches in disease characterization. Registration approaches


Figure 3. An example of unsupervised deep learning deformable image registration from an expiratory (moving) to an inspiratory (fixed) CT scan. The CNN models the deformation field, depicted as a warped grid. The Jacobian map estimates the volume change and can be used to compute ventilation maps. Registration inference can be performed in a few seconds compared to classical techniques, enabling real-time deployment. Adapted from de Vos BD et al37 with permission. CNN, convolutional neural network.
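The Jacobian map mentioned in the Figure 3 caption can be sketched directly from a displacement field. The 2D example below is a simplified illustration; real pipelines work in 3D on registered CT pairs:

```python
import numpy as np

def jacobian_ventilation(ux: np.ndarray, uy: np.ndarray,
                         spacing: float = 1.0) -> np.ndarray:
    """Voxel-wise volume change from a 2D displacement field (ux, uy).

    The registration transform is phi(x) = x + u(x); the determinant J
    of its Jacobian estimates local expansion, and J - 1 is a common
    CT ventilation surrogate (J > 1 means local inflation).
    """
    dux_dy, dux_dx = np.gradient(ux, spacing)
    duy_dy, duy_dx = np.gradient(uy, spacing)
    jac = (1.0 + dux_dx) * (1.0 + duy_dy) - dux_dy * duy_dx
    return jac - 1.0

# Uniform 10% expansion along x only: u_x = 0.1 * x, u_y = 0.
y, x = np.mgrid[0:8, 0:8].astype(float)
vent = jacobian_ventilation(0.1 * x, np.zeros_like(y))
print(np.allclose(vent, 0.1))  # True: 10% volume gain everywhere
```

Whether the displacement field comes from a classical optimizer or a trained CNN, this post-processing step is identical, which is why DLDIR can slot into existing ventilation pipelines.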

that include deep learning schemes will likely translate these upcoming multiparametric approaches to clinical applications beyond correlative studies.

Functional prediction
Deep learning approaches have also been postulated to predict functional parameters from structural modalities. Westcott and colleagues showed how textural features extracted from a volume of interest on CT scans can predict regional ventilation defects in subjects with COPD.84 The method was trained with ventilation defect labels obtained from 1H and 3He MRI using a k-means approach. Different classifiers were compared, and the most relevant features were selected in a cross-validation experimental setup. The AUC of the best model was 82%, with high specificity (91%) and moderate sensitivity (49%). The ventilation defect percentage (VDP) predicted by the model and the one computed using the reference MRI modality showed a strong correlation (90%), an encouraging sign of the ability of these approaches to offer patient-specific information on functional impairment. However, it is hard to ascertain


Figure 4. Multiparametric imaging mapping from 3He MRI and CT in COPD. Functional and structural images (top) are aligned to produce a multiparametric response map (bottom). Deep learning registration techniques can enable accurate and real-time response mapping assessment. Adapted from MacNeil et al83 with permission. COPD, chronic obstructive pulmonary disease.
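A joint CT/MRI voxel classification in the spirit of the mPRM in Figure 4 can be sketched as follows. The -950 HU emphysema cutoff is a standard CT threshold, but the ADC cutoff and the four-class scheme here are hypothetical illustrations, not the values used by MacNeil et al:

```python
import numpy as np

# Illustrative mPRM-style voxel classification on co-registered data.
HU_EMPH = -950.0    # standard CT emphysema threshold
ADC_ABNORMAL = 0.3  # cm^2/s; hypothetical cutoff for elevated 3He ADC

def mprm_classes(ct_hu: np.ndarray, adc: np.ndarray) -> np.ndarray:
    """Label each co-registered voxel by joint CT/MRI status:
    0 = normal, 1 = abnormal ADC only (microstructure),
    2 = emphysema-like CT only, 3 = abnormal on both."""
    ct_abn = ct_hu < HU_EMPH
    adc_abn = adc > ADC_ABNORMAL
    return adc_abn.astype(int) + 2 * ct_abn.astype(int)

ct = np.array([-850.0, -960.0, -900.0, -970.0])
adc = np.array([0.2, 0.2, 0.4, 0.5])
print(mprm_classes(ct, adc))  # [0 2 1 3]
```

The value of the multiparametric map is visible even in this toy: the third voxel (class 1) would look normal on CT alone, matching the text's point that joint maps reveal disease a single modality misses.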

how stable the features proposed by this study would be when generalized to a larger COPD population with milder disease, given the limited sample size used for training. Larger sample sizes and reproducibility studies are needed to define the generalization power of the proposed features.

CNNs have also been used to extract features from CT images that can define spirometric status in smokers with and without COPD. Gonzalez et al85 used a three-layer feed-forward CNN to predict COPD functional status based on spirometry. The correlation between FEV1 measurements and deep learning CT-based measurements was 73%. Tang and colleagues used a more complex network, a residual network with 152 layers, to diagnose COPD from CT volumetric imaging.86 The AUC in the testing cohort for the best model was 86%. This result was consistent with the performance reported by Gonzalez and colleagues.
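The AUC figures quoted in these studies can be computed from raw model scores with the rank-statistic (Mann-Whitney) form of the AUC. A minimal sketch with made-up scores, not data from the cited cohorts:

```python
import numpy as np

def auc_from_scores(scores: np.ndarray, labels: np.ndarray) -> float:
    """Area under the ROC curve via its rank (Mann-Whitney) form:
    the probability that a randomly chosen positive case scores
    higher than a randomly chosen negative case (ties count 1/2)."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Toy check: scores that mostly rank COPD cases above non-COPD cases.
scores = np.array([0.9, 0.8, 0.4, 0.7, 0.2, 0.1])
labels = np.array([1, 1, 1, 0, 0, 0])
print(auc_from_scores(scores, labels))
```

Because AUC is threshold-free, it summarizes ranking quality across all operating points; the specificity/sensitivity pairs quoted above correspond to one chosen threshold on the same scores.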


These results suggest that different architectures can extract complementary feature information from CT imaging to predict an outcome.

In sum, the best network architecture design in terms of combinations of neural layers must strike a trade-off between model complexity and the ability to generalize to different populations and imaging characteristics. Meta-learning techniques that integrate different approaches are being actively researched and developed to improve upon the predictions of single learning techniques across multiple learning episodes.87

AI IN FUNCTIONAL ASSESSMENT
Functional assessment is one of the most exciting emerging applications of AI, where a direct functional response is synthesized to mimic a target functional modality, e.g. dual-energy CT pulmonary perfusion, from source modalities that require a simpler or more direct imaging reconstruction setup. These techniques aim to resolve intrinsic relations across functional modalities or even the resolution of functional information from structural modalities like CT. These approaches are referred to as image-to-image translation within the AI community. They are based on an array of supervised and semi-supervised techniques that range from fully convolutional networks, like convolutional generators based on autoencoders and U-Nets,61 to generative adversarial networks (GANs)13 that combine a generator and a discriminator network. Image-to-image translation techniques were born in the context of computer graphics applications,88 and one prominent application is artificial style representation from natural images using paired (conditional) or unpaired (cycle) GANs.16,89 In paired approaches, the training is performed on a data set containing paired instances of the target and source modality, while unpaired approaches can use instances from the source and target modalities that are not matched or do not even belong to the same population of subjects.

Supervised functional synthesis
One of the first demonstrations of image translation approaches in functional lung imaging has been synthesizing ventilation imaging from 4DCT without explicit use of DIR. Unfortunately, 4DCT-derived ventilation images are sensitive to the choice of DIR algorithm and its accuracy.90 Direct approaches can overcome this limitation by directly learning tissue expansion characteristics from multiple snapshots across a breathing cycle. Zhong et al91 proposed a fully convolutional model composed of seven layers without any downsampling step to preserve the image resolution. Despite reasonable results, fully convolutional networks are limited to local relations between the inspiratory and expiratory images around a voxel, which could lead to inconsistent results if the mismatch between inspiratory and expiratory images is significant.

To overcome some of the limitations of fully convolutional approaches, encoder-decoder convolutional networks like the U-Net architecture have been extensively applied in image-to-image reconstruction tasks. The U-Net architecture includes multiple convolutional steps followed by a data down-sampling step in the encoder phase and up-sampling layers in the decoder phase. Also, information from the encoding phase at a given level is transferred to the decoder phase, similar to the fully convolutional approach. These architectures have shown promising results in synthesizing different functional ventilation images.92,93 Capaldi et al93 demonstrated the use of U-Nets to estimate hyperpolarized noble gas MRI ventilation maps from free-breathing proton (1H) MRI after breathing phase sorting and interpolation (Figure 5). Training, validation, and testing were done in a set of 114 subjects with different pulmonary conditions, i.e. asthma, COPD, bronchiectasis, and NSCLC, as well as healthy volunteers. The deep learning-based VDP estimation showed good agreement with reference values based on hyperpolarized 3He MRI. In a similar fashion to Zhong et al,91 Gerard et al92 used a multiresolution U-Net to provide a direct estimation of the

Figure 5. Deep learning ventilation MRI for the synthesis of 3He MRI ventilation imaging from free-breathing proton (1H) MRI. (A) Illustration of the MRI pipeline to register and sort free-breathing MRI images before consumption by the image-to-image U-Net network. The training was performed to predict ventilation maps from 3He MRI. (B) Comparison between the reference ventilation maps and DL ventilation MRI synthetic imaging for subjects with different types of obstructive airway diseases. Agreement in ventilation defect percentage between modalities was high with good correspondence. Adapted from Capaldi et al93 with permission.
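The ventilation defect percentage (VDP) agreement reported in this figure comes down to a simple computation: the fraction of lung voxels whose ventilation signal falls below a defect threshold. A minimal numpy sketch (the toy array and the fixed threshold are illustrative assumptions, not the thresholding rule of the cited pipeline):

```python
import numpy as np

def ventilation_defect_percent(ventilation, lung_mask, threshold):
    """VDP: percentage of lung-mask voxels whose ventilation
    signal falls below `threshold`."""
    lung_values = ventilation[lung_mask]
    return 100.0 * np.mean(lung_values < threshold)

# Toy 3D ventilation map: 1000 lung voxels, 100 of them at defect level.
rng = np.random.default_rng(0)
vent = np.full((10, 10, 10), 0.8)
mask = np.ones_like(vent, dtype=bool)
defect_idx = rng.choice(vent.size, size=100, replace=False)
vent.ravel()[defect_idx] = 0.1

vdp = ventilation_defect_percent(vent, mask, threshold=0.3)
print(vdp)  # → 10.0
```

Published pipelines typically derive the defect cut-off from the in-mask signal distribution (for example, a fraction of the mean signal or a clustering step) rather than a hard-coded constant, but the agreement statistic compared across modalities is this same percentage.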

8 of 14 birpublications.org/bjr Br J Radiol;95:20210527
BJR San José Estépar

Figure 6. Illustration of image-to-image translation techniques for synthetic ventilation and perfusion assessment based on single-energy CT. (A) Direct Jacobian ventilation map estimation using a multi-resolution deep learning approach, without deformable image registration, from inspiratory and expiratory CT scans. (B) Estimation of dual-energy perfusion maps from single-energy CT angiograms to assess perfusion defects using a functional consistency CycleGAN.
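The Jacobian-based ventilation surrogate of panel A has a compact numerical core: the determinant of I + grad(u) for the displacement field u estimated between expiratory and inspiratory scans, i.e. the local volume change. A 2D numpy sketch under an assumed uniform 10% expansion (illustrative only; the panel's method replaces the registration step with a multi-resolution network):

```python
import numpy as np

def jacobian_determinant_2d(disp_y, disp_x, spacing=1.0):
    """det(I + grad(u)) for a 2D displacement field u = (u_y, u_x):
    the local area change used as a ventilation surrogate."""
    duy_dy, duy_dx = np.gradient(disp_y, spacing)
    dux_dy, dux_dx = np.gradient(disp_x, spacing)
    return (1.0 + duy_dy) * (1.0 + dux_dx) - duy_dx * dux_dy

# Synthetic deformation: u(x) = 0.1 * x, a uniform 10% expansion.
yy, xx = np.meshgrid(np.arange(32.0), np.arange(32.0), indexing="ij")
jac = jacobian_determinant_2d(0.1 * yy, 0.1 * xx)

print(float(jac.mean()))  # → ~1.21, i.e. 1.1**2: a 21% local area gain
```

In 3D CT the analogous 3 x 3 determinant, evaluated on a DIR displacement field, yields the classical volume-change ventilation map; the deep learning approach described in the text learns to predict this quantity directly from the image pair.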

ventilation response based on the deformation Jacobian without a DIR (Figure 6A). Unlike prior approaches, this network was trained on an extensive database of inspiratory and expiratory CT scans from the COPDGene cohort and showed high voxel-wise correlations with ventilation images based on a classical mass-preserving DIR approach.

Like ventilation imaging, recent studies have also shown the use of CNN approaches to estimate lung perfusion from single-energy CT scans. Ren et al94 employed an attention U-net architecture to synthesize albumin SPECT/CT perfusion mapping from non-contrast CT scans to enable functional lung avoidance in radiotherapy planning.94 Their proposed neural network is superior to the traditional U-Net architecture and is able to identify features from the CT domain that are compatible with perfusion defects with moderate correlation. Despite the limited size of the training (31 subjects) and testing data (11 subjects), these results illustrate the ability of deep learning approaches to estimate both ventilation and perfusion functional imaging from routine non-contrast CT scans under a common imaging platform.

Adversarial functional imaging
Semi-supervised approaches based on GANs are also under development as an improved alternative in image translation that aims at increasing the stability of the results of multilayered neural networks.15 For example, Nardelli et al95 illustrated the use of a modified conditional CycleGAN to synthesize dual-energy-derived iodine perfusion maps from single-energy contrast CT scans (Figure 6B). The CycleGAN leverages both CT imaging and structural vascular information in a setup with three encoding CNNs and three discriminators to generate the functional output with moderate local correlations (0.52 and 0.66 in the core and peel lung regions, respectively). Although unpaired GAN approaches are more complex and more challenging to train due to the need to find an equilibrium point in a min-max optimization problem, they seem relevant to approximate the statistical characteristics of the image being estimated. Unpaired GAN approaches can be employed with larger databases of unpaired datasets to predict the target functional modality without the need to scan the same subject with both modalities, as required by plain convolutional approaches.93,94 Thus, the application of GANs presents a greater opportunity in the context of functional imaging. GAN-based learning can also be applied in various domains related to image reconstruction and preprocessing stages like super-resolution, multimodal registration, and modality synthesis for multiparametric analysis.

Opportunities and challenges
AI applications in medical imaging have exploded over the past 5 years, driven by multiple factors. First, the maturity of deep learning approaches exploiting non-linear relations in the data has been instrumental. Second, advances in optimization and regularization techniques have made it tractable to fit models with a large number of parameters to a limited set of training data points. Third, the availability of methods in well-maintained open-source libraries has empowered a broad community with AI techniques, including non-experts in the field with limited skills. Finally, specialized computing architectures based on Graphics Processing Units (GPUs) have delivered the necessary

Artificial Intelligence in Functional Imaging of the Lung BJR

computing power to train advanced models within a reasonable amount of time.

While AI is still an emerging discipline in functional lung imaging, there are clear and tangible opportunities worth mentioning:

(1) Multifunctional assessment: AI has the potential to unleash the power of multiple functional assessments under a single imaging platform. Currently, ventilation and perfusion imaging require the use of different imaging contrast agents in CT. One potential integrated solution could be the emerging combination of 4D CBCT and simulated dual-energy imaging for functional imaging. The benefits of synthetic multifunctional assessment include reduced radiological tests that require hard-to-obtain radioactive contrast agents, reduced radiation exposure, and improved care delivery, as imaging synthesis is performed without the patient as part of the radiological and clinical evaluation. However, realizing these opportunities will require an extensive validation process to define the confidence interval in which the synthetic images are consistent with the underlying functional ground truth. The outcome of the validation studies will also determine the potential of AI-enabled synthetic imaging for clinical adoption and whether it could eventually be circumscribed to narrower clinical scenarios where an initial triage based on a suboptimal approach might be useful.
(2) Clinical translation to low-footprint radiological setups: current functional imaging relies on advanced modalities that require specialized equipment like hyperpolarizers. The potential use of AI-driven image-to-image translation could bring the benefit of functional information to standard radiological imaging modalities that are available in primary and secondary care facilities.
(3) Novel biomarkers: functional modalities provide voluminous multiparametric data that need to be laboriously synthesized into specific markers of disease. AI provides an alternative computational approach to define novel biomarkers of the disease. Supervised CNNs can be used to extract relevant image features that are associated with a specific outcome. Unsupervised autoencoder techniques can also be applied for dimensionality reduction to define novel biomarkers from multiparametric imaging sources.
(4) Unraveling lung structure and function: the relationship between structure and function of the lung has been well described, but we are still limited in linking the structural changes to the functional impairment and achieving a better characterization of the disease. Studies that combine structural and functional modalities83,96 can take advantage of AI as an exploratory tool to gain further insight into the structure–function relationship.

Despite the exciting and compelling preliminary evidence promising a more significant and elaborate role for AI in pulmonary functional imaging, several challenges remain that need to be carefully evaluated and resolved before realizing AI as a reliable component of clinical functional lung imaging:

(1) Validation: data-driven approaches require rigorous validation studies to gauge the generality and robustness of the methods. Until now, most of the studies that apply AI to functional lung imaging were performed with small datasets. Although they provide early evidence of what AI can do, they lack the rigor needed to qualify as bona fide approaches. Large databases on diverse populations will be required to train and validate the techniques before translating them into clinical use.
(2) Model transparency: one of the major criticisms of deep learning is a lack of transparency and interpretability. In other words, users (clinicians and researchers) should be able to understand the "reasoning" of the AI model; why it renders one verdict and not the other. Model developers and data scientists must make didactic efforts to teach the users how the models operate and decide outcomes. Transparency is crucial to defining a modality's operational realm and proactively restricting deviations from the model that can affect image quality and diagnostic interpretability.
(3) Model robustness: one collateral effect of the lack of model transparency is model instability to adversarial attacks (negligible input variations resulting in significant changes of the model output) and intrinsic model biases. Adversarial attack prevention is an oft-discussed topic in AI, and such attacks pose a substantial barrier to the use of AI for image synthesis in critical applications like diagnostic imaging.97 Careful model design and training considerations must be taken to avoid adversarial attacks, above all if models are trained with off-the-shelf components.98 In a similar fashion, biases and disparities in functional expression may be translated into AI systems trained with imaging data in which those underlying biases exist.99 Understanding the specific performance characteristics of each model is crucial to move beyond the preclinical scenario and successfully introduce it into clinical practice.
(4) Unlocking data silos: the unresolved complexities of functional imaging imply that the number of training cases is limited compared to training scenarios available for modalities like CT and CXR. Training sample size is a key factor in deep learning that depends on the specific characteristics of the problem being addressed and the model that is used. Unlocking the available data silos is paramount for implementing new data-driven advances in functional lung imaging. Open data repositories and challenges like VAMPIRE79 are crucial for developing machine learning-centric approaches that improve functional lung imaging quality and performance reasonably and reproducibly. Issues about data integrity and privacy could be overcome with federated solutions that enable decentralized AI modeling to exploit pan-institutional datasets.100,101

CONCLUSION
AI continues to evolve rapidly and push the limits in many spheres, and interest in it within medicine has grown exponentially in recent years, especially in the functional imaging domain. Public and private entities recognize this as a thrust area, and their initiatives have begun to catalyze this field.100 The pulmonary functional imaging community can benefit from this


frenetic activity in data science as novel approaches using rich data sets are proposed to redefine disease conditions. Machine learning models that link imaging, functional, biomarker, and multi-omics data can advance our understanding of the complex and intimate connection between structure and function.102 AI can also play a transformative role in adopting functional imaging approaches to clinical settings that are now restricted to preclinical scenarios due to their complexity. The use of functional modalities in diseases like chronic obstructive pulmonary disease (COPD), asthma, interstitial lung disease (ILD), or cystic fibrosis (CF) can bring a new dimensionality to define relevant markers of disease heterogeneity and progression.82,103,104 At the same time, the application of AI is not free of limitations and perils stemming from the experimental nature of current techniques. The reliance on vast amounts of data exemplars rather than well-understood "fixed" models could act as a double-edged sword if AI is applied without careful methodological consideration. This issue is even more relevant in functional imaging scenarios where functional metrics describe subtle pathophysiological processes that need to be well understood by the AI developers. Therefore, a multidisciplinary approach is essential to introduce AI in functional pulmonary imaging. Successful incorporation of AI in functional imaging holds promise to transform the field, delivering significant benefits in the coming years.

COMPETING INTERESTS
Dr. San José Estépar has no conflicts of interest to disclose related to the context of this manuscript. He is the founder and co-owner of Quantitative Imaging Solutions, a company that provides image-based consulting and develops software for data sharing and artificial intelligence applications.

FUNDING
This study is supported by funding from the National Heart, Lung, and Blood Institute of the NIH to Dr. San José Estépar under the award numbers R01HL149877, R01 HL116473, and R21HL140422. He is also supported by the National Library of Medicine award R21LM013670.

REFERENCES
1. Gulshan V, Peng L, Coram M, Stumpe MC, Wu D, Narayanaswamy A, et al. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA 2016; 316: 2402–10. doi: https://doi.org/10.1001/jama.2016.17216
2. Esteva A, Kuprel B, Novoa RA, Ko J, Swetter SM, Blau HM, et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature 2017; 542: 115–8. doi: https://doi.org/10.1038/nature21056
3. Ohno Y, Seo JB, Parraga G, Lee KS, Gefter WB, Fain SB, et al. Pulmonary functional imaging: part 1-State-of-the-Art technical and physiologic underpinnings. Radiology 2021; 299: 508–23. doi: https://doi.org/10.1148/radiol.2021203711
4. Hosny A, Parmar C, Quackenbush J, Schwartz LH, Aerts HJWL. Artificial intelligence in radiology. Nat Rev Cancer 2018; 18: 500–10. doi: https://doi.org/10.1038/s41568-018-0016-5
5. Savadjiev P, Chong J, Dohan A, Vakalopoulou M, Reinhold C, Paragios N, et al. Demystification of AI-driven medical image interpretation: past, present and future. Eur Radiol 2019; 29: 1616–24. doi: https://doi.org/10.1007/s00330-018-5674-x
6. Farhat H, Sakr GE, Kilany R. Deep learning applications in pulmonary medical imaging: recent updates and insights on COVID-19. Mach Vis Appl 2020; 31: 53. doi: https://doi.org/10.1007/s00138-020-01101-5
7. Hatcher WG, Yu W. A survey of deep learning: platforms, applications and emerging research trends. IEEE Access 2018; 6: 24411–32. doi: https://doi.org/10.1109/ACCESS.2018.2830661
8. McCulloch WS, Pitts W. A logical calculus of the ideas immanent in nervous activity. 1943. Bull Math Biol 1990; 52(1-2): 115–33.
9. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature 2015; 521: 436–44. doi: https://doi.org/10.1038/nature14539
10. Chartrand G, Cheng PM, Vorontsov E, Drozdzal M, Turcotte S, Pal CJ, et al. Deep learning: a primer for radiologists. Radiographics 2017; 37: 2113–31. doi: https://doi.org/10.1148/rg.2017170077
11. Zha W, Fain SB, Schiebler ML, Evans MD, Nagle SK, Liu F. Deep convolutional neural networks with multiplane consensus labeling for lung function quantification using UTE proton MRI. J Magn Reson Imaging 2019; 50: 1169–81. doi: https://doi.org/10.1002/jmri.26734
12. Schmidhuber J. Deep learning in neural networks: an overview. Neural Netw 2015; 61: 85–117. doi: https://doi.org/10.1016/j.neunet.2014.09.003
13. Goodfellow IJ, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S. Generative adversarial networks. arXiv 2014.
14. Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez AN. Attention is all you need. arXiv 2017.
15. Wolterink JM, Mukhopadhyay A, Leiner T, Vogl TJ, Bucher AM, Išgum I. Generative adversarial networks: a primer for radiologists. Radiographics 2021; 41: 200151. doi: https://doi.org/10.1148/rg.2021200151
16. Isola P, Zhu J-Y, Zhou T, Efros AA. Image-to-image translation with conditional adversarial networks. arXiv 2016.
17. Ghesu F-C, Georgescu B, Zheng Y, Grbic S, Maier A, Hornegger J, et al. Multi-scale deep reinforcement learning for real-time 3D-landmark detection in CT scans. IEEE Trans Pattern Anal Mach Intell 2019; 41: 176–89. doi: https://doi.org/10.1109/TPAMI.2017.2782687
18. Haibe-Kains B, Adam GA, Hosny A, Khodakarami F, Waldron L, et al.; Massive Analysis Quality Control (MAQC) Society Board of Directors. Transparency and reproducibility in artificial intelligence. Nature 2020; 586: E14–16. doi: https://doi.org/10.1038/s41586-020-2766-y
19. Biederer J, Mirsadraee S, Beer M, Molinari F, Hintze C, Bauman G, et al. MRI of the lung (3/3)-current applications and future perspectives. Insights Imaging 2012; 3: 373–86. doi: https://doi.org/10.1007/s13244-011-0142-z
20. Mills GH, Wild JM, Eberle B, Van Beek EJR. Functional magnetic resonance imaging of the lung. Br J Anaesth 2003; 91: 16–30. doi: https://doi.org/10.1093/bja/aeg149
21. Kruger SJ, Nagle SK, Couch MJ, Ohno Y, Albert M, Fain SB. Functional imaging of the lungs with gas agents. J Magn Reson Imaging 2016; 43: 295–315. doi: https://doi.org/10.1002/jmri.25002


22. Biederer J, Heussel C, Puderbach M, Wielpuetz M. Functional magnetic resonance imaging of the lung. Semin Resp Crit Care 2014; 35: 74–82. doi: https://doi.org/10.1055/s-0033-1363453
23. Wild JM, Marshall H, Bock M, Schad LR, Jakob PM, Puderbach M, et al. MRI of the lung (1/3): methods. Insights Imaging 2012; 3: 345–53. doi: https://doi.org/10.1007/s13244-012-0176-x
24. Ouriadov A, Guo F, McCormack DG, Parraga G. Accelerated 129Xe MRI morphometry of terminal airspace enlargement: feasibility in volunteers and those with alpha-1 antitrypsin deficiency. Magnet Reson Med 2020; 84: 416–26.
25. Zhang H, Xie J, Xiao S, Zhao X, Zhang M, Shi L, et al. Lung morphometry using hyperpolarized 129Xe multi-b diffusion MRI with compressed sensing in healthy subjects and patients with COPD. Med Phys 2018; 45: 3097–108. doi: https://doi.org/10.1002/mp.12944
26. Collier GJ, Hughes PJC, Horn FC, Chan H, Tahir B, Norquay G, et al. Single breath-held acquisition of coregistered 3D 129Xe lung ventilation and anatomical proton images of the human lung with compressed sensing. Magnet Reson Med 2019; 82: 342–7.
27. Wang S, Su Z, Ying L, Peng X, Zhu S, Liang F, et al. Accelerating magnetic resonance imaging via deep learning. 2016 IEEE 13th Int Symposium Biomed Imaging (ISBI). 2016: 514–7.
28. Schlemper J, Caballero J, Hajnal JV, Price AN, Rueckert D. A deep cascade of convolutional neural networks for dynamic MR image reconstruction. IEEE Trans Med Imaging 2018; 37: 491–503. doi: https://doi.org/10.1109/TMI.2017.2760978
29. Zhu B, Liu JZ, Cauley SF, Rosen BR, Rosen MS. Image reconstruction by domain-transform manifold learning. Nature 2018; 555: 487–92. doi: https://doi.org/10.1038/nature25988
30. Strack R. AI transforms image reconstruction. Nat Methods 2018; 15: 309. doi: https://doi.org/10.1038/nmeth.4678
31. Lin DJ, Johnson PM, Knoll F, Lui YW. Artificial intelligence for MR image reconstruction: an overview for clinicians. J Magn Reson Imaging 2021; 53: 1015–28. doi: https://doi.org/10.1002/jmri.27078
32. Duan C, Deng H, Xiao S, Xie J, Li H, Sun X, et al. Fast and accurate reconstruction of human lung gas MRI with deep learning. Magn Reson Med 2019; 82: 2273–85. doi: https://doi.org/10.1002/mrm.27889
33. Wu Y, Ma Y, Capaldi DP, Liu J, Zhao W, Du J, et al. Incorporating prior knowledge via volumetric deep residual network to optimize the reconstruction of sparsely sampled MRI. Magn Reson Imaging 2020; 66: 93–103. doi: https://doi.org/10.1016/j.mri.2019.03.012
34. Antun V, Renna F, Poon C, Adcock B, Hansen AC. On instabilities of deep learning in image reconstruction and the potential costs of AI. Proc Natl Acad Sci U S A 2020; 117: 30088–95. doi: https://doi.org/10.1073/pnas.1907377117
35. Ljimani A, Hojdis M, Stabinska J, Valentin B, Frenken M, Appel E, et al. Analysis of different image-registration algorithms for Fourier decomposition MRI in functional lung imaging. Acta Radiol 2021; 62: 028418512094490. doi: https://doi.org/10.1177/0284185120944902
36. Yang X, Kwitt R, Styner M, Niethammer M. Quicksilver: fast predictive image registration - a deep learning approach. Neuroimage 2017; 158: 378–96. doi: https://doi.org/10.1016/j.neuroimage.2017.07.008
37. de Vos BD, Berendsen FF, Viergever MA, Sokooti H, Staring M, Išgum I. A deep learning framework for unsupervised affine and deformable image registration. Med Image Anal 2019; 52: 128–43. doi: https://doi.org/10.1016/j.media.2018.11.010
38. Lundervold AS, Lundervold A. An overview of deep learning in medical imaging focusing on MRI. Z Med Phys 2019; 29: 102–27. doi: https://doi.org/10.1016/j.zemedi.2018.11.002
39. Vinogradskiy Y. CT-based ventilation imaging in radiation oncology. BJR Open 2019; 1: 20180035. doi: https://doi.org/10.1259/bjro.20180035
40. Park C, Choo KS, Jung Y, Jeong HS, Hwang J-Y, Yun MS. CT iterative vs deep learning reconstruction: comparison of noise and sharpness. Eur Radiol 2021; 31: 3156–64. doi: https://doi.org/10.1007/s00330-020-07358-8
41. Lenfant M, Chevallier O, Comby P-O, Secco G, Haioun K, Ricolfi F, et al. Deep learning versus iterative reconstruction for CT pulmonary angiography in the emergency setting: improved image quality and reduced radiation dose. Diagnostics 2020; 10: 558. doi: https://doi.org/10.3390/diagnostics10080558
42. Brady SL, Trout AT, Somasundaram E, Anton CG, Li Y, Dillman JR. Improving image quality and reducing radiation dose for pediatric CT by using deep learning reconstruction. Radiology 2021; 298: 180–8. doi: https://doi.org/10.1148/radiol.2020202317
43. Hata A, Yanagawa M, Yoshida Y, Miyata T, Tsubamoto M, Honda O, et al. Combination of deep learning-based denoising and iterative reconstruction for ultra-low-dose CT of the chest: image quality and Lung-RADS evaluation. American Journal of Roentgenology 2020; 215: 1321–8. doi: https://doi.org/10.2214/AJR.19.22680
44. Mori S, Hirai R, Sakata Y. Using a deep neural network for four-dimensional CT artifact reduction in image-guided radiotherapy. Phys Med 2019; 65: 67–75. doi: https://doi.org/10.1016/j.ejmp.2019.08.008
45. Chen G, Zhao Y, Huang Q, Gao H. 4D-AirNet: a temporally-resolved CBCT slice reconstruction method synergizing analytical and iterative method with deep learning. Phys Med Biol 2020; 65: 175020. doi: https://doi.org/10.1088/1361-6560/ab9f60
46. Jiang Z, Chen Y, Zhang Y, Ge Y, Yin F-F, Ren L. Augmentation of CBCT reconstructed from under-sampled projections using deep learning. IEEE Trans Med Imaging 2019; 38: 2705–15. doi: https://doi.org/10.1109/TMI.2019.2912791
47. Murrie RP, Werdiger F, Donnelley M, Lin Y-wei, Carnibella RP, Samarage CR, et al. Real-time in vivo imaging of regional lung function in a mouse model of cystic fibrosis on a laboratory X-ray source. Sci Rep 2020; 10: 447. doi: https://doi.org/10.1038/s41598-019-57376-w
48. Kong X, Sheng HX, Lu GM, Meinel FG, Dyer KT, Schoepf UJ, et al. Xenon-enhanced dual-energy CT lung ventilation imaging: techniques and clinical applications. AJR Am J Roentgenol 2014; 202: 309–17. doi: https://doi.org/10.2214/AJR.13.11191
49. Hoey ETD, Mirsadraee S, Pepke-Zaba J, Jenkins DP, Gopalan D, Screaton NJ. Dual-energy CT angiography for assessment of regional pulmonary perfusion in patients with chronic thromboembolic pulmonary hypertension: initial experience. AJR Am J Roentgenol 2011; 196: 524–32. doi: https://doi.org/10.2214/AJR.10.4842
50. Goo HW, Goo JM. Dual-energy CT: new horizon in medical imaging. Korean J Radiol 2017; 18: 555–69. doi: https://doi.org/10.3348/kjr.2017.18.4.555
51. Fuld MK, Halaweish AF, Haynes SE, Divekar AA, Guo J, Hoffman EA. Pulmonary perfused blood volume with dual-energy CT as surrogate for pulmonary perfusion assessed with dynamic multidetector CT. Radiology 2013; 267: 747–56. doi: https://doi.org/10.1148/radiol.12112789
52. Iyer KS, Newell JD, Jin D, Fuld MK, Saha PK, Hansdottir S, et al. Quantitative dual-energy computed tomography supports a vascular etiology of smoking-induced inflammatory lung disease. Am J Respir Crit Care Med 2016; 193: 652–61. doi:


https://doi.org/10.1164/rccm.201506-1196OC
53. Zhang W, Zhang H, Wang L, Wang X, Hu X, Cai A, et al. Image domain dual material decomposition for dual-energy CT using butterfly network. Med Phys 2019; 46: 2037–51. doi: https://doi.org/10.1002/mp.13489
54. Abascal JFPJ, Ducros N, Pronina V, Rit S, Rodesch P-A, Broussaud T, et al. Material decomposition in spectral CT using deep learning: a Sim2Real transfer approach. IEEE Access 2021; 9: 25632–47. doi: https://doi.org/10.1109/ACCESS.2021.3056150
55. Lyu T, Zhao W, Zhu Y, Wu Z, Zhang Y, Chen Y, et al. Estimating dual-energy CT imaging from single-energy CT data with material decomposition convolutional neural network. Med Image Anal 2021; 70: 102001. doi: https://doi.org/10.1016/j.media.2021.102001
56. Dimastromatteo J, Charles EJ, Laubach VE. Molecular imaging of pulmonary diseases. Respir Res 2018; 19: 17. doi: https://doi.org/10.1186/s12931-018-0716-0
57. Vass L, Fisk M, Lee S, Wilson FJ, Cheriyan J, Wilkinson I. Advances in PET to assess pulmonary inflammation: a systematic review. Eur J Radiol 2020; 130: 109182. doi: https://doi.org/10.1016/j.ejrad.2020.109182
58. Dong X, Lei Y, Wang T, Higgins K, Liu T, Curran WJ, et al. Deep learning-based attenuation correction in the absence of structural information for whole-body positron emission tomography imaging. Phys Med Biol 2020; 65: 055011. doi: https://doi.org/10.1088/1361-6560/ab652c
59. Gong K, Guan J, Kim K, Zhang X, Yang J, Seo Y. Iterative PET image reconstruction using convolutional neural network representation. IEEE Trans Med Imaging 2019; 38: 675–85. doi: https://doi.org/10.1109/TMI.2018.2869871
60. Sluimer I, Schilham A, Prokop M, van Ginneken B. Computer analysis of computed tomography scans of the lung: a survey. IEEE Trans Med Imaging 2006; 25: 385–405. doi: https://doi.org/10.1109/TMI.2005.862753
61. Ronneberger O, Fischer P, Brox T. U-Net: convolutional networks for biomedical image segmentation. In: Navab N, Hornegger J, Wells WM, Frangi AF, eds.; 2015. pp. 234–41.
62. Isensee F, Jaeger PF, Kohl SAA, Petersen J, Maier-Hein KH. nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nat Methods 2021; 18: 203–11. doi: https://doi.org/10.1038/s41592-020-01008-z
63. Guo F, Capaldi DP, McCormack DG, Fenster A, Parraga G. Ultra-short echo-time magnetic resonance imaging lung segmentation with under-annotations and domain shift. Med Image Anal 2021; 72: 102107. doi: https://doi.org/10.1016/j.media.2021.102107
64. Gerard SE, Patton TJ, Christensen GE, Bayouth JE, Reinhardt JM. FissureNet: a deep learning approach for pulmonary fissure detection in CT images. IEEE Trans Med Imaging 2019; 38: 156–66. doi: https://doi.org/10.1109/TMI.2018.2858202
65. Gerard SE, Herrmann J, Kaczka DW, Musch G, Fernandez-Bustamante A, Reinhardt JM. Multi-resolution convolutional neural networks for fully automated segmentation of acutely injured lungs in multiple species. Med Image Anal 2020; 60: 101592. doi: https://doi.org/10.1016/j.media.2019.101592
66. Chassagnon G, Martin C, Marini R, Vakalopolou M, Régent A, Mouthon L, et al. Use of elastic registration in pulmonary MRI for the assessment of pulmonary fibrosis in patients with systemic sclerosis. Radiology 2019; 291: 487–92. doi: https://doi.org/10.1148/radiol.2019182099
67. Reinhardt JM, Ding K, Cao K, Christensen GE, Hoffman EA, Bodas SV. Registration-based estimates of local lung tissue expansion compared to xenon CT measures of specific ventilation. Med Image Anal 2008; 12: 752–63. doi: https://doi.org/10.1016/j.media.2008.03.007
68. Tahir BA, Marshall H, Hughes PJC, Brightling CE, Collier G, Ireland RH, et al. Comparison of CT ventilation imaging and hyperpolarised gas MRI: effects of breathing manoeuvre. Phys Med Biol 2019; 64: 055013. doi: https://doi.org/10.1088/1361-6560/ab0145
69. Hegi-Johnson F, de Ruysscher D, Keall P, Hendriks L, Vinogradskiy Y, Yamamoto T, et al. Imaging of regional ventilation: is CT ventilation imaging the answer? A systematic review of the validation data. Radiother Oncol 2019; 137: 175–85. doi: https://doi.org/10.1016/j.radonc.2019.03.010
70. Yin Y, Hoffman EA, Lin C-L. Mass preserving nonrigid registration of CT lung images using cubic B-spline. Med Phys 2009; 36: 4213–22. doi: https://doi.org/10.1118/1.3193526
71. Murphy K, van Ginneken B, Reinhardt JM, Kabus S, Ding K, Deng X, et al. Evaluation of registration methods on thoracic CT: the EMPIRE10 challenge. IEEE Trans Med Imaging 2011; 30: 1901–20. doi: https://doi.org/10.1109/TMI.2011.2158349
72. Onieva JO, Marti-Fuster B. Image analysis for moving organ, breast, and thoracic images, third International workshop, RAMBO 2018, fourth International workshop, BIA 2018, and first International workshop, TIA 2018, held in conjunction with MICCAI 2018, Granada, Spain, September 16 and 20, 2018, proceedings. Lect Notes Comput Sc 2018; 11040: 284–94. doi: https://doi.org/10.1007/978-3-030-00946-5
73. Sokooti H, de VB, Berendsen F, Lelieveldt BPF, Išgum I, Staring M. Medical image computing and computer assisted intervention − MICCAI 2017: 20th International Conference, Quebec City, QC, Canada, September 11-13, 2017, proceedings, part I. Lect Notes Comput Sc 2017; 3: 232–9. doi: https://doi.org/10.1007/978-3-319-66182-7_27
74. Eppenhof KAJ, Pluim JPW. Pulmonary CT registration through supervised learning with convolutional neural networks. IEEE Trans Med Imaging 2019; 38: 1097–105. doi: https://doi.org/10.1109/TMI.2018.2878316
75. Haskins G, Kruger U, Yan P. Deep learning in medical image registration: a survey. Mach Vis Appl 2020; 31(1-2): 8. doi: https://doi.org/10.1007/s00138-020-01060-x
76. Jiang Z, Yin F-F, Ge Y, Ren L. A multi-scale framework with unsupervised joint training of convolutional neural networks for pulmonary deformable image registration. Phys Med Biol 2020; 65: 015011. doi: https://doi.org/10.1088/1361-6560/ab5da0
77. Fechter T, Baltas D. One-shot learning for deformable medical image registration and periodic motion tracking. IEEE Trans Med Imaging 2020; 39: 2506–17. doi: https://doi.org/10.1109/TMI.2020.2972616
78. Castillo R, Castillo E, Fuentes D, Ahmad M, Wood AM, Ludwig MS, et al. A reference dataset for deformable image registration spatial accuracy evaluation using the COPDgene study archive. Phys Med Biol 2013; 58: 2861–77. doi: https://doi.org/10.1088/0031-9155/58/9/2861
79. Kipritidis J, Tahir BA, Cazoulat G, Hofman MS, Siva S, Callahan J, et al. The VAMPIRE challenge: a multi-institutional validation study of CT ventilation imaging. Med Phys 2019; 46: 1198–217. doi: https://doi.org/10.1002/mp.13346
80. Guo F, Svenningsen S, Kirby M, Capaldi DP, Sheikh K, Fenster A, et al. Thoracic CT-MRI coregistration for regional pulmonary structure-function measurements of obstructive lung disease. Med Phys 2017; 44: 1718–33. doi: https://doi.org/10.1002/mp.12160
81. Hoffman EA, Lynch DA, Barr RG, van Beek EJR, Parraga G; IWPFI Investigators. Pulmonary CT and MRI phenotypes that

