
H06W8a: Medical image analysis
Class 1: Introduction
Prof. Frederik Maes
frederik.maes@esat.kuleuven.be

In this class

• Medical image analysis?
• Fundamental problems: segmentation, registration, visualization
• Challenges: complex data, scenes, applications, validation
• Model-based image analysis
  – MAP formulation
  – Flexible model fitting
  – Classification
• Fundamental issues in model-based approaches
• Medical image analysis vs computer vision

Medical image analysis?

• Extraction of quantitative information from medical images to support clinical decisions in diagnosis and treatment (and biomedical research)
• Spans diagnosis, therapy and research

Imaging modalities

• RX: conventional radiography, attenuation of X-rays by tissue, 2D projection images
• CT: 3D reconstruction from projections, high spatial resolution (sub-mm), dose issues, visualization of dense tissues: bone, masses, calcified plaques
• MRI: tissue-dependent magnetic properties of 1H (magnetic dipole), large soft tissue contrast, resolution (~1 mm) restricted by organ motion & clinical constraints on scan time
• PET: radio-actively labeled tracer molecule (e.g. 18FDG), local concentration of metabolised tracer (e.g. glucose uptake), high sensitivity, limited resolution (~3 mm), "molecular imaging" using specific tracers
• US: acoustic wave propagation and reflection, high temporal resolution, real-time imaging of moving organs (e.g. cardiology), also in 3D
• Medical imaging technology evolves continuously, e.g. multi-slice CT, cone beam CT, dual energy CT, MRI-DTI, high field strength MRI, PET-CT, PET-MRI, 3D US, interventional imaging… leading to new clinical applications
• Medical image analysis research is application-driven!

Example RX: cardiothoracic index

• A = maximal width of the cardiac shadow
• B = maximal internal thoracic width
• CTR = A/B
• CTR > 0.5 → cardiomegaly…
• [Figure: two chest radiographs, CTR = 0.38 (normal) and CTR = 0.56 (cardiomegaly)]

Example CT: coronary calcium scoring

• CT calcium score ("Agatston score") measures the extent of coronary artery calcification on cardiac CT
• Based on size and maximal density (HU) of calcified lesions, per coronary artery
• High negative predictive value for risk of major adverse cardiac events
• [Figure: 3D artery segmentation and Ca scoring per artery]
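Per-lesion scoring as described above can be sketched in a few lines. This is an illustrative simplification for a single 2D slice (the clinical score sums over slices and arteries, and assumes a calibrated scanner); the 130 HU threshold and the 1–4 density weights follow the published Agatston protocol, while the minimal-area cutoff and the home-grown connected-component labelling are simplifying assumptions here.

```python
import numpy as np

def agatston_score(ct_slice_hu, pixel_area_mm2, min_area_mm2=1.0):
    """Toy Agatston score for one axial CT slice.

    Lesions = 4-connected components of voxels >= 130 HU; each lesion
    contributes its area (mm^2) times a density weight based on its peak HU.
    """
    mask = ct_slice_hu >= 130  # calcification threshold in Hounsfield units

    # Minimal flood-fill connected-component labelling (avoids a scipy dependency).
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for seed in zip(*np.nonzero(mask)):
        if labels[seed]:
            continue
        current += 1
        stack = [seed]
        while stack:
            r, c = stack.pop()
            if not (0 <= r < mask.shape[0] and 0 <= c < mask.shape[1]):
                continue
            if not mask[r, c] or labels[r, c]:
                continue
            labels[r, c] = current
            stack += [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]

    score = 0.0
    for lesion in range(1, current + 1):
        voxels = ct_slice_hu[labels == lesion]
        area = voxels.size * pixel_area_mm2
        if area < min_area_mm2:
            continue  # ignore tiny specks
        peak = voxels.max()
        # Density weight per the Agatston convention: 130-199 -> 1, 200-299 -> 2,
        # 300-399 -> 3, >= 400 -> 4.
        weight = 1 if peak < 200 else 2 if peak < 300 else 3 if peak < 400 else 4
        score += area * weight
    return score
```

For example, a 2x2 lesion of 250 HU with 1 mm² pixels yields area 4 mm² and weight 2, i.e. a score of 8.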


Example MRI: muscle fat fraction quantification

• Fat fraction of the muscles in the upper leg is indicative of disease progression in muscular dystrophy patients, as muscle tissue is gradually replaced by fat in successive stages of the disease
• MRI chemical shift imaging ('DIXON') depicts water and fat either in-phase (IP) or out-of-phase (OP)
• Water-only (W) and fat-only (F) images are then obtained as W ~ (IP+OP)/2 and F ~ (IP−OP)/2
• The fat fraction FF% = F/(F+W) can be quantified per muscle, provided a precise 3D segmentation of each muscle is available (18 in total, hip to knee)
• [Figure: IP, OP, W and F images and FF% maps at early, intermediate and late stage]

Example MRI: brain lesions

• Multi-parametric MRI: T1/T2/PD-weighted, FLAIR, diffusion, perfusion
• [Figure: tumor, MS and stroke examples]
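The two-point Dixon formulas above translate directly into code. A minimal sketch, assuming magnitude IP/OP images that are already co-registered; a real pipeline must additionally resolve water/fat swaps and field inhomogeneity, which is ignored here.

```python
import numpy as np

def fat_fraction(ip, op, eps=1e-9):
    """Fat-fraction map (%) from in-phase / opposed-phase Dixon images.

    W ~ (IP + OP) / 2, F ~ (IP - OP) / 2, FF% = 100 * F / (F + W).
    """
    water = (ip + op) / 2.0
    fat = (ip - op) / 2.0
    return 100.0 * fat / (fat + water + eps)  # eps guards against /0 in air

def muscle_fat_fraction(ip, op, muscle_mask):
    """Mean FF% inside one muscle's 3D segmentation mask."""
    ff = fat_fraction(ip, op)
    return float(ff[muscle_mask].mean())
```

Applied per muscle mask (18 masks, hip to knee), this yields one FF% value per muscle; e.g. IP = 2 and OP = 1 everywhere gives F = 0.5, W = 1.5 and thus FF% = 25.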

Example US: strain imaging

• Quantification of tissue deformation over the heart cycle in 2D+time cardiac US images by speckle tracking
• Strain = spatial derivative of deformation
• Strain analyzed over the cardiac cycle along radial (A), circumferential (B) and longitudinal (C) directions within different cardiac segments
• Ischemic segments (right) characterized by abnormal strain curves compared to normal (middle)
• [Figure from Victoria Delgado et al., JACC 2008;51:1944-1952]

Example MRI: MS follow-up

• Example of an MS report as generated by the icobrain software (icometrix, Leuven, Belgium)
• Evolution over time based on 2 consecutive MRI scans of the patient
• Left: lesion load (based on FLAIR and T1); right: gray matter volume (atrophy)
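Given speckle-tracked point positions (the tracking itself is beyond this sketch), a strain curve over the cardiac cycle reduces to simple geometry. A minimal illustration, assuming the tracked wall segment is given as a poly-line per frame and the first frame is end-diastole:

```python
import numpy as np

def segment_lengths(points):
    """Total length of a poly-line of tracked speckle positions per frame.

    points: array of shape (n_frames, n_points, 2) with (x, y) coordinates.
    """
    diffs = np.diff(points, axis=1)                 # inter-point vectors
    return np.linalg.norm(diffs, axis=2).sum(axis=1)

def lagrangian_strain(lengths, l0=None):
    """Lagrangian strain curve: strain(t) = (L(t) - L0) / L0.

    L0 defaults to the length in the first frame (assumed end-diastolic).
    """
    lengths = np.asarray(lengths, dtype=float)
    if l0 is None:
        l0 = lengths[0]
    return (lengths - l0) / l0
```

A segment that shortens from length 2.0 to 1.8 over the cycle gives a strain of −0.1 (i.e. −10%, typical sign for systolic longitudinal shortening).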

Example: PET

• Standardized Uptake Value (SUV) = regional activity concentration / (total injected activity / body weight)

Fundamental problems

• Image segmentation = detection + delineation of the object of interest in the image
  – Defining the boundaries between different objects or background
  – Prerequisite for volumetry, shape analysis, …
• Image registration = spatial alignment of different images of the 'same' scene
  – Compensating for different view, patient positioning, motion, deformation, …
  – Prerequisite for fusion of complementary data, detecting temporal changes in the images, group analysis of different subjects, …
• Image visualization = presenting the information that was extracted from the images
  – Image annotation, contours / surfaces, colour overlays, 3D rendering, …
  – Prerequisite for clinical interpretation; application-specific
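To make the segmentation → volumetry chain concrete: the crudest possible delineation is an intensity window, after which volumetry is just voxel counting. This is a deliberately naive sketch (real medical images need the model-based methods introduced below; a fixed window only works when the object's intensities are well separated from everything else).

```python
import numpy as np

def threshold_segment(volume, low, high):
    """Naive intensity-window segmentation: detection + delineation in one step."""
    return (volume >= low) & (volume <= high)

def volume_ml(mask, voxel_size_mm):
    """Volumetry: voxel count times physical voxel volume, in millilitres."""
    voxel_mm3 = float(np.prod(voxel_size_mm))
    return mask.sum() * voxel_mm3 / 1000.0  # 1 ml = 1000 mm^3
```

With isotropic 1 mm voxels, a 10×10×10 object segments to 1000 voxels, i.e. exactly 1 ml.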
Challenges?

• Complex data: special nature of medical images…
• Complex scenes: anatomical objects, natural variability, pathology…
• Complex applications: clinical requirements defined by medical experts…
• Complex validation: lack of in-vivo ground truth…

Complex data

• Medical images provide information about the interior of the body
• Projection images (2D, e.g. RX): overlying tissues…
• Tomographic images (3D, e.g. CT, MRI, US):
  – 3D: more information, but more complex topology
  – intrinsic limitations on resolution & contrast
  – imaging artifacts (e.g. due to reconstruction algorithm, motion, implants…)
• Dynamic image sequences (3D+time = 4D, e.g. US, cardiac MRI)
• Anatomical imaging: bone, soft tissues, vasculature, pathology, …
• Functional imaging: perfusion, diffusion, oxygenation, metabolism, …
• Serial imaging: pre/post therapy, pre/post contrast, follow-up scans
• Imaging modalities based on different physical properties provide different kinds of information, with different resolution & contrast, hence (usually) requiring a different kind of analysis

Complex scenes

• Anatomical objects >< man-made objects
• Complex 3D shape (e.g. brain surface, vascular tree)
• Variable over time
  – Motion: periodic (e.g. heart beat, breathing) or spurious (e.g. bowel)
  – Deformations of soft tissues
  – Pre- and post-intervention
• Significant biological variability between subjects
• Pathology = deviation from normality

Complex applications

• Diagnosis (e.g. oncology): detection & localisation, volumetry, changes over time
• Therapy (e.g. radiotherapy, surgery): target definition, therapy planning, follow-up
• Research (e.g. neurosciences): morphometry, normal biological variation, abnormality detection
• [Figure: CT, MR and PET examples for each application domain]

Complex validation

• No direct access to the scene (= the patient's interior) → ground truth?
• Alternatives:
  – Validation by simulations: hardware or software phantoms
    • self-designed ground truth
    • realism? clinical relevance?
  – Validation on clinical data with manually defined ground truth (by the clinical expert)
    • consistency of the algorithm with the human observer
    • intra- and inter-observer agreement?
  – Validation on standardized data sets
    • "challenges" on publicly available data sets, e.g. BRATS for brain tumor segmentation
    • instead of each researcher validating her method on her own data…
• 'Precision', 'Robustness' and 'Consistency' are often more important than 'Accuracy'
  – Accuracy = agreement w.r.t. some predefined reference
    • assumes the chosen reference to be absolutely correct
  – Precision = agreement when repeating the same measurement multiple times
    • fully automated versus fully manual analysis…
  – Robustness = similar performance on images of similar quality (predictable behavior)
    • reduces the need for parameter tuning
  – Consistency = differences between measurements indicate real effects
    • possibly with bias on the individual measurements

Model-based image analysis

• Medical images are ambiguous (limited resolution & contrast, noise, artifacts)
• Prior knowledge about the objects of interest is needed for proper analysis:
  – Geometry: position, shape, motion, deformation
  – Photometry: intensity, contrast, texture, image appearance
  – Context: relations to other objects in the scene
• Modeling strategies:
  – Heuristic: "ad hoc": "the shape of the left ventricle of the heart is an ellipsoid"
  – Biomechanical: based on physics: "the left ventricle deforms elastically"
  – Statistical: learned from the data itself (based on pre-analyzed images)
• Models in medical image analysis should be able to cope with natural variability of anatomical objects, as well as pathology → flexible models!
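When manual delineations serve as ground truth, agreement between two segmentations (algorithm vs expert, or expert vs expert for inter-observer studies) is commonly quantified with the Dice overlap coefficient. A minimal sketch; Dice is a standard metric, though which metric a given challenge (e.g. BRATS) reports is specified by that challenge.

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary segmentations: 2|A∩B| / (|A| + |B|).

    Ranges from 0 (no overlap) to 1 (identical masks).
    """
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom
```

Note that Dice measures consistency with the chosen reference, not accuracy in an absolute sense: if the expert's delineation is biased, a high Dice score inherits that bias.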
Model-based image analysis

• Model M(θ) = parameterized representation of the appearance of objects of interest in the images, based on prior knowledge
• Model representations ("what is θ?"): see later
  – Landmark-based: contour, surface, mesh… (models with explicit geometry)
    • θ = landmark coordinates
  – Image-based: template, atlas (models with implicit geometry)
    • θ = deformation field
• Model-based image analysis = fitting the model to the image data
  – Finding the model instance with parameters θ* that best matches the data
  – Requires a measure of the "goodness of fit" between the data I and the model M(θ) for any parameters θ
• Can be cast as a "maximum a posteriori" probability (MAP) problem

MAP formulation

• Given data I and a model M with parameters θ, find the model instance θ* that is "most likely" given the data I:

  θ* = arg max_θ Prob(M(θ) | I)

• Using Bayes' rule:

  Prob(M(θ) | I) = Prob(I | M(θ)) · Prob(M(θ)) / Prob(I)

• Prob(I) = the likelihood of observing the given image I → independent of θ…
• Prob(I | M(θ)) = data likelihood = likelihood that image I is obtained if the model instance M(θ) is imaged → based on a model of the image acquisition (contrast, noise…)
• Prob(M(θ)) = model likelihood = prior distribution on the properties of the model itself ("the prior") → e.g. some shapes are more likely than others…
• If all model instances M(θ) are equally likely: "maximum likelihood" (ML) problem (instead of MAP)
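The interplay of data likelihood and prior is easiest to see in a toy scalar case (my own illustration, not from the slides): estimate an unknown intensity θ from noisy pixel samples with a Gaussian likelihood and a Gaussian prior, for which the MAP estimate has a closed form.

```python
import numpy as np

def map_estimate(samples, sigma2, mu0, tau2):
    """MAP estimate of a scalar θ.

    Likelihood: each sample ~ N(θ, sigma2); prior: θ ~ N(mu0, tau2).
    Maximizing Prob(I | M(θ)) * Prob(M(θ)) yields a precision-weighted
    average of the data mean and the prior mean.
    """
    samples = np.asarray(samples, dtype=float)
    n = samples.size
    num = samples.sum() / sigma2 + mu0 / tau2
    den = n / sigma2 + 1.0 / tau2
    return num / den
```

As tau2 → ∞ the prior becomes flat and the estimate tends to the sample mean, i.e. the ML solution; a tight prior (small tau2) pulls the estimate toward mu0 regardless of the data, which is exactly the trade-off the MAP formulation encodes.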

Approach 1: Flexible model fitting

• Data likelihood and model prior are formulated using a Gibbs distribution:

  Prob = exp(−E) / Z

  – E = energy function (to be defined…)
  – Z = normalization constant
• Maximizing the posterior then becomes an energy minimization problem:

  E(θ) = E_ext(θ, I) + γ E_int(θ)

  – E_int = internal energy → measures fidelity to the prior
  – E_ext = external energy → measures conformity to the data
  – γ = weight (hyper-parameter) → tunable behavior…
• Large flexibility to define E_int, E_ext
• Examples: snakes, level sets, active shape models, non-rigid registration, atlas-based segmentation…

Approach 2: Classification

• Set of previously analyzed images used as training data (I = input image, θ = output of the analysis)
• Estimate Prob(θ | I) directly from the training data → supervised classification
• F = classifier / regressor: maps a given input image I onto the most likely model instance θ*
• Instead of I: feature vector f (= dimensionality reduction)
• Current state of the art: "deep learning", with the classifier F taking the mathematical form of a deep neural network
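A minimal instance of the energy-minimization idea: fit a smooth 1D curve θ to noisy samples I by gradient descent on E = E_ext + γ E_int, with a quadratic data term and a quadratic smoothness prior. The particular energies, the placement of γ on the internal term, and the fixed step size are illustrative choices, not the slides' definition.

```python
import numpy as np

def fit_energy(signal, gamma=2.0, step=0.05, n_iter=1000):
    """Gradient descent on E(θ) = E_ext(θ, I) + γ E_int(θ).

    E_ext = Σ (θ_i − I_i)²        (conformity to the data)
    E_int = Σ (θ_{i+1} − θ_i)²    (smoothness prior on the model)
    """
    theta = np.asarray(signal, dtype=float).copy()
    for _ in range(n_iter):
        # Discrete Laplacian of θ; -2γ·lap is the gradient of γ·E_int.
        lap = np.zeros_like(theta)
        lap[1:-1] = theta[:-2] - 2 * theta[1:-1] + theta[2:]
        lap[0] = theta[1] - theta[0]      # one-sided at the boundaries
        lap[-1] = theta[-2] - theta[-1]
        grad = 2 * (theta - signal) - 2 * gamma * lap
        theta -= step * grad
    return theta
```

Increasing γ trades data conformity for prior fidelity (a stiffer, smoother fit), which is the "tunable behavior" of the hyper-parameter: the same mechanism that makes a snake resist noisy edges.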

Fundamental issues in model-based image analysis

• Explicit versus implicit representation of geometry
  – Explicit = a collection of points (coordinates)
  – Implicit = an image
  – Hybrid representations?
• Global versus local representations of appearance
  – Global = more context
  – Local = more flexibility
  – Multi-scale representations?
• Deterministic versus statistical models
  – Deterministic = analytically constructed, often heuristic…
  – Statistical = derived from training data, also heuristic…
• Data congruency versus model fidelity
  – Tunable parameters…
  – Proper MAP modeling is complicated by lack of training data

Medical image analysis vs computer vision?

• Different data:
  – Medical image analysis: anatomical objects; tomographic images (3D); intrinsic limitations on image quality
  – Computer vision: man-made objects, natural scenes; photographic / video images (2D+t); image quality often controllable
• Different validation:
  – Medical image analysis: ground truth usually not available; expertise is scarce; performance is usually critical
  – Computer vision: ground truth often available; everyone is an expert; performance is often not critical
• Different applications:
  – Medical image analysis: volumetry & regional quantification; morphometry (shape analysis); quantification of temporal changes; detection of pathology
  – Computer vision: object detection / recognition / tracking; 3D reconstruction; machine inspection; image or video synthesis ('deep fake')
• Similar fundamental problems (both fields): representations (= models) of object appearance and its variability; measures of similarity / correspondence between models and images; fitting models to images based on suitable features; detection of outliers
• Similar computational strategies (both fields): learning suitable models from data, e.g. "deep learning"
