Notebook #6

Steps involved in processing the typical digital image:


1) Image processing operations
a. Purpose: to change the input values of the pixels to improve the
diagnostic quality of the output image
b. Includes:
i. Point processing (most important to understand for digital
radiography systems)
ii. Local processing
iii. Geometric processing
2) Point Processing
a. Definition: performed between the receipt of the input image from the
image receptor and the output image that is viewed on the monitor
b. Involves: adjusting the value of an input pixel (point) to produce the
corresponding output pixel
c. Are the most common processing operations in digital radiography
d. Grayscale processing
i. Definition: allows for adjustments to image brightness and
contrast
ii. Involves (3):
1. (1) Creation of histogram (code sketch after this item)
a. The histogram is generated during initial processing
from the image data; it allows the digital system
to find the useful signal by locating the minimum
and maximum signal within the anatomical regions
of interest in the image
b. To generate a histogram: the scanned area is
divided into pixels and the signal intensity for each
pixel is determined
c. Shape of histogram: corresponds to the specific
anatomy and technique used for the exam
d. Histogram analysis is used to determine the values of
interest (VOI) and the exposure index (EI)
i. VOI is used to locate the minimum and
maximum exposure values for the body part
e. If the exposure falls outside the expected range
because of underexposure or overexposure, the
computer corrects the image by shifting or rescaling
the histogram to the correct area (histogram
modification or stretching)
i. Wide histogram = higher contrast
ii. Narrow histogram = lower contrast
iii. Low comparative values = dark/dim
iv. High comparative values = bright/light
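
The following is a minimal sketch of the histogram step, assuming a NumPy array of raw detector values; the bin count, tail threshold, and simulated exposure are illustrative assumptions, not a vendor's actual algorithm.

    import numpy as np

    def histogram_voi(raw_image, bins=256, tail_fraction=0.01):
        # Build a histogram of pixel values and estimate the values of
        # interest (VOI) by trimming the sparse tails of the distribution.
        # The tail_fraction cutoff is an assumed, illustrative threshold.
        counts, edges = np.histogram(raw_image, bins=bins)
        cdf = np.cumsum(counts) / counts.sum()
        lo_bin = np.searchsorted(cdf, tail_fraction)        # minimum useful signal
        hi_bin = np.searchsorted(cdf, 1.0 - tail_fraction)  # maximum useful signal
        return counts, edges[lo_bin], edges[hi_bin + 1]

    def rescale_to_voi(raw_image, voi_min, voi_max):
        # Shift/rescale ("stretch") the data so the useful signal fills the
        # output range, correcting for under- or overexposure.
        clipped = np.clip(raw_image, voi_min, voi_max)
        return (clipped - voi_min) / (voi_max - voi_min)

    # Simulated 512 x 512 exposure with 12-bit values (illustrative only)
    rng = np.random.default_rng(0)
    raw = rng.normal(loc=1200, scale=300, size=(512, 512)).clip(0, 4095)
    counts, lo, hi = histogram_voi(raw)
    corrected = rescale_to_voi(raw, lo, hi)   # useful signal now spans 0.0-1.0
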
2. Next step: adjusting contrast
a. The digital detector has a linear response that would
result in a low-contrast image if it were displayed
without this step
b. (2) Application of the look-up table (LUT) (code sketch after this item)
i. The LUT that has the standard contrast for that exam
is applied to the data to give the desired image
contrast for display
ii. To ensure that the proper LUT is applied when the
image is processed, you must select the proper
projection so the processing system knows which
LUT to use
iii. The proper LUT will provide the proper
grayscale regardless of variations in kVp and
mAs, resulting in consistent images
1. If exposure values are far outside
the normal range, the system
cannot compensate and produce a
diagnostic image
2. Therefore, radiographers must set
exposure factors that will produce
images within normal exposure ranges
for specific projections if a digital
system is to function properly
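
Below is a hedged sketch of applying a look-up table to remap the detector's linear response; the S-shaped curve, its parameters, and the "chest" label are assumptions made only to illustrate the idea, not a specific system's LUT.

    import numpy as np

    def build_lut(size=4096, center=0.5, slope=10.0):
        # An assumed S-shaped (sigmoid) curve: it compresses the extremes
        # and steepens the mid-range, boosting contrast relative to the
        # detector's linear response.
        x = np.linspace(0.0, 1.0, size)
        lut = 1.0 / (1.0 + np.exp(-slope * (x - center)))
        return (lut - lut.min()) / (lut.max() - lut.min())

    def apply_lut(image, lut):
        # Map each pixel through the table; the image is assumed to be
        # rescaled to 0.0-1.0 (e.g., by the histogram step above).
        indices = np.round(image * (len(lut) - 1)).astype(int)
        return lut[indices]

    chest_lut = build_lut()                 # LUT chosen by selecting the projection (assumed)
    linear_data = np.random.rand(512, 512)  # stand-in for rescaled detector data
    display_ready = apply_lut(linear_data, chest_lut)
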
3. (3) Windowing (code sketch after this item)
a. Definition: a point processing operation that
changes the contrast and brightness of the image
on the monitor
b. The brightness and contrast of the digital image
depends on the shades of gray, which are
controlled by varying the numerical values of each
pixel
i. Because the range of stored densities is so
much wider than the visual range, any digital
image is only a small part of the total data
obtained by the computer
1. Each image is only a window on the
total range of data
ii. The window width (WW) is the range of densities
that will be displayed
1. Narrow width = few densities = high
contrast
2. Large width = many densities = low
contrast
c. Window level is the center of the window width and
controls the brightness of the image
i. Low end of scale = light image
ii. High end of scale = dark image
d. Radiologists can manipulate the data that is
sent to them; however, the more the
technologist changes the original data before
it is sent to PACS, the less information the
radiologist has to work with
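
Here is a small sketch of windowing applied to stored pixel values; the 12-bit data range and the particular window width/level numbers are assumptions chosen for illustration.

    import numpy as np

    def window(image, width, level, out_max=255):
        # Display only the range selected by the window: values below
        # (level - width/2) render black, values above (level + width/2)
        # render white, and the values in between are spread across the
        # display grayscale.
        lo = level - width / 2.0
        hi = level + width / 2.0
        clipped = np.clip(image, lo, hi)
        return ((clipped - lo) / (hi - lo) * out_max).astype(np.uint8)

    stored = np.random.default_rng(1).integers(0, 4096, size=(512, 512))  # 12-bit stand-in
    narrow = window(stored, width=500, level=2048)    # few densities -> high contrast
    wide = window(stored, width=3500, level=2048)     # many densities -> low contrast
    brighter = window(stored, width=2000, level=1000) # lower level -> lighter image
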
3) Local processing operations


a. Mathematical calculations are applied to only a small group of pixels
b. Kernel = processing code that is mandatory and common to the
computer system
i. Normally applied over and over to the entire set of data being
processed
c. Spatial frequency filtering is considered a type of local processing
operation
i. Used to:
1. Sharpen
2. Smooth
3. Blur
4. Reduce noise
5. Or pull elements of interest from an image
ii. Occurs in:
1. Spatial location domain
2. Spatial frequency domain
iii. When spatial frequency filtering is done in the spatial frequency
domain
1. The Fourier transform is used
iv. When it is done in the spatial location domain
1. The pixels themselves are used
d. High-pass filtering (edge enhancement) (sketch after this item)
i. Definition = uses a Fourier transform algorithm to convert the
image into the spatial frequency domain
ii. Then a high-pass filter is applied to remove low spatial
frequency and produce a sharper output image
iii. The result is an image with edge enhancement and greatly
increased contrast
iv. Extremely small structures can sometimes be buried in an edge,
and undesirable edges are usually enhanced as well as those
that are of diagnostic value
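
A rough sketch of frequency-domain high-pass filtering follows; the circular cutoff and its radius are arbitrary assumptions, and real edge-enhancement algorithms are more involved.

    import numpy as np

    def high_pass_filter(image, cutoff=0.05):
        # Convert to the spatial frequency domain, zero out the low
        # frequencies inside the cutoff radius, and convert back. Edges
        # (high frequencies) survive, so the output looks sharper.
        spectrum = np.fft.fftshift(np.fft.fft2(image))
        rows, cols = image.shape
        y, x = np.ogrid[:rows, :cols]
        dist = np.sqrt((y - rows / 2) ** 2 + (x - cols / 2) ** 2)
        mask = dist > cutoff * min(rows, cols)   # keep only high frequencies
        return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * mask)))

    edge_enhanced = high_pass_filter(np.random.rand(256, 256))
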
e. Low-pass filtering (smoothing) (sketch after this item)
i. Definition = uses a similar process to intentionally blur the
image
1. Thus, reducing noise and the displayed brightness levels
of the pixels
ii. This process also decreases image detail
iii. Unsharp masking (blurring) subtracts a low-pass filtered image
from the original image
1. Thus, producing a new subtracted and sharper image
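
The sketch below shows low-pass smoothing and unsharp masking, assuming SciPy is available; the 3 x 3 averaging kernel and the weighting amount are illustrative choices.

    import numpy as np
    from scipy.ndimage import convolve

    def smooth(image):
        # Low-pass filter: average each pixel with its 3 x 3 neighborhood,
        # which reduces noise but also blurs detail.
        return convolve(image, np.ones((3, 3)) / 9.0, mode="nearest")

    def unsharp_mask(image, amount=1.0):
        # Subtract the blurred (low-pass) copy from the original to isolate
        # detail, then add that detail back to produce a sharper image.
        detail = image - smooth(image)
        return image + amount * detail

    sharpened = unsharp_mask(np.random.rand(256, 256))
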
f. Spatial location filtering (convolution) (sketch after this item)
i. A kernel is applied repeatedly to each pixel in a matrix in order
to weigh the values or to apply a coefficient across the matrix
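
A bare-bones sketch of spatial-location convolution follows, showing a kernel applied pixel by pixel across the matrix; the sharpening kernel values are a common textbook choice, not a required one.

    import numpy as np

    def apply_kernel(image, kernel):
        # Slide the kernel over every pixel, multiply the neighborhood by
        # the kernel weights, and sum the products to form the output pixel.
        kh, kw = kernel.shape
        pad_y, pad_x = kh // 2, kw // 2
        padded = np.pad(image, ((pad_y, pad_y), (pad_x, pad_x)), mode="edge")
        out = np.empty_like(image, dtype=float)
        for r in range(image.shape[0]):
            for c in range(image.shape[1]):
                out[r, c] = np.sum(padded[r:r + kh, c:c + kw] * kernel)
        return out

    sharpen_kernel = np.array([[ 0, -1,  0],
                               [-1,  5, -1],
                               [ 0, -1,  0]], dtype=float)
    result = apply_kernel(np.random.rand(64, 64), sharpen_kernel)
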
4) Geometric processing operations
a. Are used to change the position or orientation of the pixels in the
image (sketch after this list)
i. This allows:
1. Rotation
2. Magnification
3. Etc.
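
A short sketch of two geometric operations on a pixel matrix, assuming NumPy; rotation is limited to 90-degree steps and magnification uses simple nearest-neighbor pixel repetition.

    import numpy as np

    def rotate_90(image, times=1):
        # Reposition pixels by rotating the matrix in 90-degree steps.
        return np.rot90(image, k=times)

    def magnify(image, factor=2):
        # Nearest-neighbor magnification: repeat each pixel `factor` times
        # along both axes; the image gets larger but gains no new detail.
        return np.repeat(np.repeat(image, factor, axis=0), factor, axis=1)

    img = np.arange(16).reshape(4, 4)
    rotated = rotate_90(img)   # pixels re-oriented, values unchanged
    zoomed = magnify(img, 2)   # 4 x 4 matrix becomes 8 x 8
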
5) Digital image quality
a. Resolution
i. Spatial resolution is controlled by:
1. The matrix size
2. How many pixels can be displayed by the monitor
ii. Relationship between matrix size, pixel size, and spatial
resolution (worked example after this section):
1. Increased matrix size = decrease in pixel size = increased
spatial resolution
iii. Dynamic range
1. The ability to respond to varying levels of exposure
2. The more tissue densities that can be seen on the
digital image, the more detail appears
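
A quick worked example of the matrix/pixel-size relationship, assuming a fixed field of view (the 43 cm value is only illustrative):

    # Pixel size (one dimension) = field of view / matrix size
    fov_mm = 430.0   # assumed 43 cm field of view
    for matrix in (512, 1024, 2048):
        pixel_mm = fov_mm / matrix
        print(f"{matrix} x {matrix} matrix -> pixel size {pixel_mm:.3f} mm")
    # Larger matrix -> smaller pixels -> better spatial resolution
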
b. Noise (code sketch after this section)
i. Classified as:
1. Electronic system noise
a. A result of undesirable signals from the digital
system itself
b. Random background information that is detected
but doesn't contribute to image quality
c. Measured as:
i. Signal-to-noise ratio (SNR)
1. A high SNR = little noise on image
2. Image noise has an inverse
relationship to contrast
3. Increased image contrast tends to
obscure or decrease noise
2. Quantum mottle noise
a. Results from an insufficient quantity of photons
from improperly set exposure factors
i. Produces a grainy image
ii. Modulation transfer function (MTF)
1. The ability of a system to record
available spatial frequencies
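
Below is a minimal sketch of a signal-to-noise ratio calculation; defining SNR as mean signal over its standard deviation is one common convention, used here as an assumption, and the simulated exposures are made up.

    import numpy as np

    def snr(image):
        # One common convention: mean signal divided by its standard
        # deviation; a higher value means less visible noise.
        return image.mean() / image.std()

    rng = np.random.default_rng(2)
    signal = np.full((256, 256), 1000.0)
    adequate = signal + rng.normal(0, 10, signal.shape)   # enough photons
    mottled = signal + rng.normal(0, 100, signal.shape)   # too few photons -> quantum mottle
    print(snr(adequate), ">", snr(mottled))               # high SNR = little noise on image
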
c. Detective quantum efficiency (DQE)
i. Definition = a measure of the sensitivity and accuracy with
which the image receptor converts the incoming data to the
output viewing device (sketch after this item)
ii. A perfect device would perform this task with 100% efficiency or
with a DQE of 1
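
DQE is commonly expressed as the ratio of the squared output SNR to the squared input SNR; the numbers below are made up purely to show the calculation.

    def dqe(snr_in, snr_out):
        # How much of the information (SNR) carried by the incoming beam
        # survives conversion by the receptor; a perfect device returns 1.0.
        return (snr_out ** 2) / (snr_in ** 2)

    print(dqe(snr_in=100.0, snr_out=80.0))   # 0.64 -> 64% efficiency (illustrative values)
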
