
Introduction

Image analysis: Measure something from images, simple or complex. The input is an image; the
output is a measurement or description (e.g. a tumour, a volume…).

Image processing: Input is an image, output also an image.

1. Image processing
2. Image analysis
3. Reach a conclusion

Our case, Biomedical images: Acquire -> Process and analyse -> Apply

Applications:

- Recognize and evaluate structures (organs, cells, etc.)
- Find abnormal areas
- Identify illness
- Assess (the evolution of) an illness/treatment

Image processing and analysis process:

- Image acquisition: Acquire the image.
- Image enhancement: Improve the visual appearance of an image by acting directly on
the pixels or in the frequency domain.
- Image restoration: Remove degradation or artefacts (e.g. noise).
- Morphological processing: Tools to deal with objects.
- Segmentation: Extract the constitutive parts of an image.
- Representation and description: Not covered in this course.
- Object recognition: Identify specific objects and classify them (e.g. whether a tumour is
malignant, benign, or normal).
- Registration and fusion: Combine several images to obtain a more complete one.
Registration means transforming all the images to be fused so that each pixel represents
the same physical space (scaling, resizing, rotation…).
- Image compression: Can be lossless or lossy; if we compress the image too much, we
lose data. In medical imaging, we use lossless compression to avoid losing important
information.

In the case of colour images, we should take into account that they are multi-channel.
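
A minimal sketch of a process-and-analyse chain (restore -> segment -> measure), assuming
scikit-image and NumPy are available; the synthetic test image and all parameter values are
illustrative assumptions, not part of the notes:

    import numpy as np
    from skimage import filters, measure

    # Image acquisition (simulated): a bright disc on a noisy background
    rng = np.random.default_rng(0)
    yy, xx = np.mgrid[0:128, 0:128]
    image = ((yy - 64) ** 2 + (xx - 64) ** 2 < 20 ** 2).astype(float)
    image += rng.normal(scale=0.3, size=image.shape)      # add noise

    # Image restoration / enhancement: smooth the noise
    smoothed = filters.gaussian(image, sigma=2)

    # Segmentation: threshold to separate object from background
    binary = smoothed > filters.threshold_otsu(smoothed)

    # Object recognition / analysis: label regions and measure them
    labels = measure.label(binary)
    for region in measure.regionprops(labels):
        print(f"object {region.label}: area = {region.area} pixels")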


Block 1: Digital image fundamentals
Digital image: A function of at least 2 independent variables (spatial coordinates). It is
represented by a matrix of size MxN (M rows and N columns). In every position of the matrix
there is a value (PIXEL = picture element) representing the intensity.

Example: If I have 8 bits per pixel, I have 2^8 = 256 (= L) intensities of grey.

- Level 0 = 00000000 -> black
- Level 1 = 00000001
- Level 255 = 11111111 -> white
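
As a small illustration of the bit-depth arithmetic above, a plain-Python sketch with values
matching the 8-bit example (the variable names are mine):

    bits_per_pixel = 8
    L = 2 ** bits_per_pixel           # number of intensity levels
    print(L)                          # 256
    print(format(0, "08b"))           # 00000000 -> black (level 0)
    print(format(L - 1, "08b"))       # 11111111 -> white (level 255)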

Notation:

2D function f(x,y) where x,y are the spatial coordinates and the amplitude of f at x,y is the
intensity level at that point.

We can also have 3D images, where each position contains a VOXEL (volume pixel) with
coordinates (x, y, z).

We can have:

- Scalar image: each pixel is a single number
- Vector image: each pixel is a vector representing e.g. velocity or direction -> blood flow
- Matrix image: each pixel contains more information (a matrix) -> diffusion MRI (see the
array-shape sketch after this list)
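
One possible way to store these three image types as NumPy arrays; the shapes below are
illustrative assumptions (a 3-component vector, a 3x3 tensor per pixel), not a fixed standard:

    import numpy as np

    M, N = 256, 256
    scalar_img = np.zeros((M, N))          # one number per pixel (e.g. grey level)
    vector_img = np.zeros((M, N, 3))       # one vector per pixel (e.g. velocity)
    matrix_img = np.zeros((M, N, 3, 3))    # one 3x3 tensor per pixel (e.g. diffusion MRI)

    print(scalar_img.shape, vector_img.shape, matrix_img.shape)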

Analog-to-digital conversion:

We sample the continuous-time analog signal every time interval, obtaining a discrete-time
analog signal.

Then we quantise and encode the previous signal to obtain a discrete-time digital signal: we
digitise the amplitude values.
We have to adjust the signal to the spatial resolution (pixels per unit distance), and to the
intensity resolution (number of intensity levels).
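
A minimal sketch of sampling and quantisation on a 1-D signal, assuming NumPy; the sine wave,
sampling period and bit depth are illustrative choices:

    import numpy as np

    def analog(t):
        # "continuous" analog signal
        return np.sin(2 * np.pi * 5 * t)

    # Sampling: one value every time interval Ts -> discrete-time analog signal
    Ts = 0.01
    t_samples = np.arange(0.0, 1.0, Ts)
    samples = analog(t_samples)

    # Quantisation + encoding: map amplitudes to 2**bits discrete levels
    bits = 3
    levels = 2 ** bits
    quantised = np.round((samples + 1) / 2 * (levels - 1)).astype(int)  # 0 .. levels-1

    print(quantised[:10])   # discrete-time digital signal (integer codes)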

Example of Spatial resolution effect:

Subsampling by 4 = keeping 1 pixel out of every 4 -> more artifacts (pixelization).

Example of Intensity level resolution effect:

We will obtain false contouring.
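
A small sketch of both effects on a synthetic gradient image, assuming NumPy; the subsampling
factor and number of grey levels are illustrative:

    import numpy as np

    image = np.tile(np.linspace(0, 255, 256), (256, 1))   # smooth horizontal gradient

    # Spatial resolution: keep 1 pixel out of every 4 in each direction
    subsampled = image[::4, ::4]          # 64x64; looks pixelized when enlarged back

    # Intensity resolution: requantise 256 levels down to 8 -> false contouring
    levels = 8
    step = 256 // levels
    requantised = (image // step) * step

    print(subsampled.shape, np.unique(requantised).size)  # (64, 64) and 8 levels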

Neighbours of a pixel:

Interpolation is the process of using known data to estimate unknown values.

- Nearest Neighbour interpolation: Assign to each new location the intensity of its
known nearest neighbour.

- Linear interpolation: Straight line between two adjacent known locations.
- Bilinear interpolation: Weighted average -> the closer the known point is to the
unknown point, the more influence it has on it.

Interpolation can be used to enhance resolution, to inpaint (fill in areas that do not contain
information), and for geometric corrections (rotating the image without losing information).
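
A minimal sketch comparing nearest-neighbour and (bi)linear interpolation for upscaling,
assuming SciPy's ndimage.zoom is available; the tiny test image and zoom factor are
illustrative:

    import numpy as np
    from scipy import ndimage

    small = np.array([[0.0, 10.0],
                      [20.0, 30.0]])

    nearest = ndimage.zoom(small, 4, order=0)   # order=0: nearest neighbour (blocky)
    bilinear = ndimage.zoom(small, 4, order=1)  # order=1: bilinear (weighted average)

    print(nearest.shape, bilinear.shape)        # both (8, 8)
    print(nearest[0, :4])    # repeated values from the nearest known pixel
    print(bilinear[0, :4])   # smoothly varying values between known pixels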
