(Tutorial Texts in Optical Engineering) Glenn D. Boreman - Modulation Transfer Function in Optical and Electro-Optical Systems - SPIE (2021)
low-frequency normalization. Some generic measurement-instrument designs are
compared, and the book closes with a brief consideration of the MTF impacts of
motion, vibration, turbulence, and aerosol scattering.
Glenn D. Boreman
ISBN: 9781510639379
SPIE Vol. No.: TT121
Tutorial Texts Series Related Titles
• Aberration Theory Made Simple, Second Edition, Virendra N. Mahajan, TT93
• Analysis and Evaluation of Sampled Imaging Systems, Richard H. Vollmerhausen,
Donald A. Reago, and Ronald Driggers, TT87
• Introduction to Optical Testing, Joseph M. Geary, TT15
• Laser Beam Quality Metrics, T. Sean Ross, TT96
• Modeling the Imaging Chain of Digital Cameras, Robert D. Fiete, TT92
• Optical Design: Applying the Fundamentals, Max J. Riedl, TT84
• Optical Design for Visual Systems, Bruce H. Walker, TT45
• Optical Design of Microscopes, George H. Seward, TT88
Published by
SPIE
P.O. Box 10
Bellingham, Washington 98227-0010 USA
Phone: +1 360.676.3290
Fax: +1 360.647.1445
Email: books@spie.org
Web: http://spie.org
All rights reserved. No part of this publication may be reproduced or distributed in any
form or by any means without written permission of the publisher.
The content of this book reflects the work and thought of the author. Every effort
has been made to publish reliable and accurate information herein, but the publisher
is not responsible for the validity of the information or for any outcomes resulting
from reliance thereon.
Contents
Preface to the Second Edition xi
Preface to the First Edition xiii
Index 139
Preface to the Second Edition
It had been 19 years since the first edition of this book, when the extended
quarantine period of 2020 afforded me the rare opportunity of quiet time
away from my usual administrative and research activities. I have significantly
expanded the treatment of several topics, including bar-target measurements,
noise-target measurements, effects of aberrations, and slant-edge measure-
ments. I have been gratified by the recent industrial and government-lab
interest in the speckle techniques, which, after all, comprised a good portion
of my dissertation at the University of Arizona some 36 years ago. All other
topics in the book were reviewed and updated, with recent references added.
I have kept my original emphasis on practical issues and measurement
techniques.
I acknowledge with pleasure discussions about MTF with colleagues and
their students here at UNC Charlotte, among whom are Profs. Angela Allen,
Chris Evans, and Thomas Suleski. During the writing process, I appreciated
receiving daily encouragement by telephone from Dot Graudons, daily
encouragement via WhatsApp from Prof. Mike Sundheimer of the
Universidade Federal Rural de Pernambuco in Recife, Brazil, and weekly
encouragement via Zoom from Skye Engel. I am grateful for the permissions
granted for reproductions of some of the figures from their original sources, to
the two anonymous reviewers for their insightful and helpful comments, and
to Dara Burrows of SPIE Press for her expert copyediting.
Last but surely not least, I want to thank Maggie Boreman – my wife of
30 years, my main encourager, and technical editor. You have graciously
taken time from your equestrian pleasures to struggle, once again, with
turning my writing into something approaching standard English. Thanks.
Glenn D. Boreman
Emerald Rose Farm
23 November 2020
Preface to the First Edition
I first became aware that there was such a thing as MTF as an undergraduate
at Rochester, scurrying around the Bausch and Lomb building. There was, in
one of the stairwells, a large poster of the Air Force bar target set. I saw that
poster every day, and I remember thinking ... gee, that's pretty neat. Well,
more than 25 years later, I still think so. I have had great fun making MTF
measurements on focal-plane arrays, SPRITE detectors, scanning cameras,
IR scene projectors, telescopes, collimators, and infrared antennas. This book
is an outgrowth of a short course that I have presented for SPIE since 1987. In
it, I emphasize some practical things I have learned about making MTF
measurements.
I am grateful for initial discussions on this subject at Arizona with Jack
Gaskill and Stace Dereniak. Since then, I have had the good fortune here at
Central Florida to work with a number of colleagues and graduate students
on MTF issues. I fondly recall discussions of MTF with Arnold Daniels, Jim
Harvey, Didi Dogariu, Karen MacDougall, Marty Sensiper, Ken Barnard, Al
Ducharme, Ofer Hadar, Ric Schildwachter, Barry Anderson, Al Plogstedt,
Christophe Fumeaux, Per Fredin, and Frank Effenberger. I want to thank
Dan Jones of the UCF English Department for his support, as well as Rick
Hermann, Eric Pepper, and Marshall Weathersby of SPIE for their assistance
and enthusiasm for this project. I also appreciate the permissions granted for
reproductions of some of the figures from their original sources.
Last but surely not least, I want to thank Maggie Boreman – my wife,
main encourager, and technical editor. Once again, Meg, you have wrestled
with my occasionally tedious expository and transformed it, if not into poetry,
then at least into prose. Thanks.
GDB
Cocoa Beach
15 March 2001
Chapter 1
MTF in Optical Systems
Linear-systems theory provides a powerful set of tools with which we can
analyze optical and electro-optical systems. The spatial impulse response of
the system is Fourier transformed to yield the spatial-frequency optical
transfer function. These two viewpoints are equivalent ways to describe an
object—as a collection of points or as a summation of spatial frequencies.
Simply expressing the notion of image quality in the frequency domain does
not by itself generate any new information. However, the conceptual change
in viewpoint—instead of a spot size, we now consider a frequency response—
provides additional insight into the behavior of an imaging system,
particularly in the common situation where several subsystems are combined.
We can multiply the individual transfer function of each subsystem to give the
overall transfer function. This procedure is easier than the repeated
convolutions that would be required for a spatial-domain analysis, and
allows immediate visualization of the performance limitations of the
aggregate system in terms of the performance of each of the subsystems.
We can see where the limitations of performance arise and which crucial
components must be improved to yield better overall image quality. We
directly see the effects of diffraction and aberrations at various spatial
frequencies.
In Chapter 1 we develop the transfer-function concept and apply it to
classical optical systems—imaging systems alone without detectors or
electronics. We will first define terms and then discuss image-quality issues.
The ideal image f(x,y) is the irradiance distribution that would exist in the
image plane (taking into account the system magnification) if the system had
perfect image quality, in other words, a delta-function impulse response. The
ideal image is thus a magnified version of the input-object irradiance, with all
detail preserved. For conceptual discussions, we typically assume that the
imaging system has unit magnification, so for the ideal image we can directly
take f(x,y) as the object irradiance distribution, albeit as a function of image-
plane coordinates x and y. We can see from Eq. (1.1) that if h(x,y) = δ(x,y),
the image would be a perfect replica of the object. A perfect optical system is
capable of forming a point image of a point object. However, because of the
blurring effects of diffraction and aberrations, a real imaging system has an
impulse response that is not a point. For any real system, h(x,y) has a finite
spatial extent. It is within this context that h(x,y) is referred to as the point-
spread function (PSF)—the image-plane irradiance corresponding to a point
source input. The narrower the PSF, the less blurring occurs in the image-
forming process. A more compact impulse response indicates better image
quality.
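This convolutional picture is easy to sketch numerically. The following Python fragment (an illustration, not from the text; the Gaussian PSF, its width, and the grid size are arbitrary assumptions) forms an image by FFT-based convolution of an ideal image with a normalized PSF, and confirms that a delta-function object maps to the PSF itself:

```python
import numpy as np

def blur_image(ideal_image, psf):
    """Model imaging as a convolution of the ideal image f(x,y) with the
    impulse response (PSF) h(x,y), computed via FFTs. The PSF is
    normalized to unit volume, so a flat field passes unchanged."""
    psf = psf / psf.sum()
    # Circular convolution via the FFT; adequate when the PSF is
    # compact relative to the image.
    return np.real(np.fft.ifft2(np.fft.fft2(ideal_image) * np.fft.fft2(psf)))

# A point object (delta function at the origin) maps to the PSF itself:
n = 64
obj = np.zeros((n, n))
obj[0, 0] = 1.0
x = np.fft.fftfreq(n) * n                      # coordinates wrapped about 0
xx, yy = np.meshgrid(x, x, indexing="ij")
psf = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2))  # assumed Gaussian blur, sigma = 2 px
img = blur_image(obj, psf)
```

Normalizing the PSF to unit volume corresponds to the unit-area normalization discussed later in this chapter.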
As Fig. 1.1 illustrates, we represent mathematically a point object as a
delta function at location (x0, y0) in object-plane coordinates:

f(x_obj, y_obj) = δ(x0 − x_obj, y0 − y_obj).  (1.2)

Assuming that the system has unit magnification, the ideal image is a delta
function located at (x0, y0) in image-plane coordinates:

g(x, y) = δ(x0 − x, y0 − y).  (1.3)
Figure 1.1 A delta function in the object plane is mapped to a blur function, the impulse
response, in the image plane.
The image of each discrete point source will be the impulse response of
Eq. (1.1) at the conjugate image-plane location, weighted by the correspond-
ing object brightness. The image irradiance function g(x,y) becomes the
summation of weighted impulse responses. This summation can be written as
a convolution of the ideal image function f(x,y) with the impulse response
g(x, y) = ∬ h(x − x′, y − y′) f(x′, y′) dx′ dy′,  (1.5)
which is equivalent to Eq. (1.1). Figure 1.2 illustrates the imaging process
using two methods: the clockwise loop demonstrates the weighted superposi-
tion of the impulse responses, and the counterclockwise loop demonstrates a
convolution with the impulse response. Both methods are equivalent.
Figure 1.2 Image formation can be modeled as a convolutional process. The clockwise
loop is a weighted superposition of impulse responses, and the counterclockwise loop is a
convolution with the impulse response.
Figure 1.5 (a) Two-dimensional spatial period. (b) Two-dimensional spatial frequency.
g(x, y) = f(x, y) ∗∗ h(x, y)  (1.6)

and

G(ξ, η) = F(ξ, η) · H(ξ, η),  (1.7)
As Eqs. (1.8) and (1.9) demonstrate, each subsystem has its own transfer
function—the Fourier transform of its impulse response.
The final result of all subsystems operating on the input object distribution
is a multiplication of their respective transfer functions. Figure 1.7 illustrates
that we can analyze a combination of several subsystems by the multiplication
of transfer functions of Eq. (1.9) rather than the convolution of impulse
responses of Eq. (1.8):
g(x, y) = f(x, y) ∗∗ h1(x, y) ∗∗ h2(x, y) ∗∗ ⋯ ∗∗ hn(x, y)  (1.8)

and

G(ξ, η) = F(ξ, η) · H1(ξ, η) · H2(ξ, η) ⋯ Hn(ξ, η).  (1.9)
For the classical optical systems under discussion in this first chapter, we
ignore the effects of noise and typically assume that H(ξ, η) has been
normalized to have a unit value at zero spatial frequency. By the central
ordinate theorem of Fourier transforms, this is equivalent to a unit area under
the impulse response. This normalization at low spatial frequencies yields a
relative transmittance as a function of frequency (ignoring constant
attenuation factors such as Fresnel-reflection loss, neutral-density filters,
and atmospheric absorption). The lowest spatial frequency (a flat field, that is,
a uniform irradiance distribution across the entire field of view) is assumed to
come through with unit transmittance. Although this normalization is
common, when we use it, we lose information about the absolute signal
levels. MTF is thus typically not radiometric in the information it conveys. In
some situations, we might want to keep the signal-level information,
particularly when electronics noise is a significant factor. When we want to
We can see from Fig. 1.10 that modulation depth is a measure of contrast,
with
M → 0 as (A_max − A_min) → 0, and M → 1 as A_min → 0.  (1.13)
Figure 1.10 Definition of modulation depth per Eq. (1.10): (a) high contrast and (b) low
contrast.
The finite spatial extent of the impulse response of the optical system
causes a filling in of the valleys and a lowering of the peak levels of the
sinusoid. This decreases the modulation depth in the image waveform relative
to that in the corresponding object waveform. We define the modulation
transfer (MT) as the ratio of modulation in the image to that in the object:
Equation (1.14) shows that the modulation of the object waveform does not
need to be unity: if a lower input modulation is used, the image modulation
will be proportionally lower. We can use non-unit-modulation targets
(Fig. 1.12) for MTF measurements, but a high-contrast target generally
produces the best results.
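These relations can be checked with a few lines of Python. The numbers are illustrative assumptions only: a 5-cycle sinusoidal target with 80% modulation, and a hypothetical system that halves the AC amplitude.

```python
import numpy as np

def modulation_depth(waveform):
    """M = (A_max - A_min) / (A_max + A_min), per Eq. (1.10)."""
    a_max, a_min = np.max(waveform), np.min(waveform)
    return (a_max - a_min) / (a_max + a_min)

x = np.linspace(0.0, 1.0, 1000, endpoint=False)
obj = 1.0 + 0.8 * np.sin(2 * np.pi * 5 * x)   # object: 80% modulation
img = 1.0 + 0.4 * np.sin(2 * np.pi * 5 * x)   # image: AC amplitude halved

# Modulation transfer, Eq. (1.14): image modulation / object modulation.
MT = modulation_depth(img) / modulation_depth(obj)
```

As the text states, the proportionality means the same MT would be obtained with any (nonzero) input modulation.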
Figure 1.11 Modulation transfer function is the decrease of modulation depth with
increasing frequency.
Figure 1.13 OTF, MTF, and PTF for a defocused impulse response.
Figure 1.14 Radial bar target: comparison of focused and defocused images.
response (Gaussian with a slight linear ramp added), the PTF shows
significant variation as a function of spatial frequency. In practice, we most
often encounter a plot of the PTF in the output from an optical-design
computer program. This is because phase distortion is a sensitive indicator for
the presence of aberrations such as coma with an asymmetric PSF for off-axis
image points. The PTF is not typically measured directly, but the information
is available by Fourier transforming a measured impulse response.
Figure 1.15 Radial bar target that has been blurred in the 45-deg diagonal direction.
Phase reversals exist over certain two-dimensional spatial-frequency ranges.
frequencies are present in their original amounts and only the relative phases
have changed. In the first case, the PTF transition occurs at 4ξ_f. There is no
shift for the fundamental or the third harmonic, and a π shift for higher
frequencies. We see a phase-reversal artifact as a local minimum at the center
of each bar, primarily because the fifth harmonic is out of phase with the third
and the fundamental. In the second case, the transition occurs at 2ξ_f, so the
only in-phase spatial frequency is the fundamental. The bars are sinusoidal at
the center, with secondary-maxima artifacts in the shoulder of each bar and in
the space between them, arising primarily from the third and fifth harmonics.
The step transition for the third case occurs at 0.9ξ_f, shifting the fundamental
and all harmonics with respect to frequencies lower than ξ_f. The most
dramatic artifact is that the image now has five peaks instead of the four seen
in the previous cases.
discuss NEM more fully in Chapter 2. The NEM is also referred to in the
literature as the demand modulation function or threshold detectability curve.
A convenient graphical representation is to plot MTF and the noise-
equivalent modulation depth on the same curve. The limiting resolution is the
spatial frequency where the curves cross.
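The crossing point is straightforward to locate once the two curves are known. In the sketch below, both curves are assumed analytic forms chosen only for illustration; they do not describe any particular system.

```python
import numpy as np

xi = np.linspace(0.0, 1.0, 2001)     # normalized spatial frequency
mtf = np.exp(-3.0 * xi)              # hypothetical falling system MTF
nem = 0.02 + 0.5 * xi**2             # hypothetical rising NEM curve

# Limiting resolution: the frequency where the curves cross.
cross = int(np.argmax(mtf <= nem))   # first index where MTF <= NEM
xi_limit = xi[cross]

# MTFA: area between the curves where the MTF exceeds the NEM.
gap = np.clip(mtf - nem, 0.0, None)
mtfa = float(np.sum(gap) * (xi[1] - xi[0]))
```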
In general, the best overall image-quality performance is achieved by the
imaging system that has the maximum area between the MTF and NEM
curves over the spatial frequency range of interest. This quantity, seen in
Fig. 1.21, is called MTF area (MTFA), which has been correlated to image
quality in human perception tests.8,9
Using either the spatial-domain definition or the spatial-frequency-
domain definition, resolution is a single-number performance specification
and, as such, it is often seen as being more convenient to use than MTF
(which is a function instead of a single number). However, MTF provides
more complete performance information than is available from simply
specifying resolution, including information about system performance over a
range of spatial frequencies.
As we can see on the left-hand side of Fig. 1.22, two systems may have an
identical limiting resolution but different performances at lower frequencies.
The system corresponding to the higher of the two curves would have the
better image quality. The right-hand side of Fig. 1.22 shows that resolution
Figure 1.21 MTF area (MTFA) is the area between the MTF and NEM curves. Larger
MTFA indicates better image quality.
Figure 1.22 Limiting resolution does not tell the whole story.
Figure 1.23 Comparison of images for detection, recognition, and identification according
to the Johnson criteria (top right figure adapted from Ref. 11 with permission; bottom right
figure adapted from Ref. 12 with permission).
alone can be a misleading performance criterion. The system that has the best
resolution (limiting frequency) has poorer performance at the midrange
frequencies. A decision about which system has better performance requires us
to specify the spatial-frequency range of interest.
One way to determine the range of spatial frequencies of interest is to use
the Johnson criteria,10 where certain spatial frequencies are necessary for
various imaging tasks. Across the smallest dimension of an object, the
Johnson criteria state that: detection (an object is present) requires 0.5 to
1 line pair, recognition (the class of object is discerned) requires 4 to 5 line
pairs, and identification (the specific type of object) requires 6 to 8 line pairs.
These criteria can be visualized in Fig. 1.23. Other imaging tasks, such as
lithography, also have critical spatial frequencies. These are needed, for
example, to ensure proper edge definition and reproduction of sharp corners.
where r is the normalized radial distance from the center of the pattern:
Figure 1.24 shows a radial plot of Eq. (1.18). In Fig. 1.25 we see a
two-dimensional plot that is somewhat overexposed (saturating the center
lobe) to better emphasize the ring structure of the diffraction pattern. A
two-dimensional integration of Eq. (1.18) shows that the blur diameter,
defined as the diameter to the first dark ring of the pattern, 2.44λ(F/#),
contains 84% of the power in the irradiance distribution.
In Eqs. (1.17) and (1.19), the parameter F/# is used as a scale factor that
determines the physical size of the diffraction spot. Actually, three different
expressions for F/# can be used for diffraction-limited spot size and
diffraction-limited MTF calculations. As seen in Fig. 1.26, we can define a
working F/# in either object space or in image space in terms of the lens
aperture diameter D and either the object distance p or the image distance q:
(F/#)_object-space = p/D  (1.20)

or

(F/#)_image-space = q/D.  (1.21)
For the special case of an object at infinity (Fig. 1.27), the image distance
q becomes the lens focal length f, and the image-space F/# becomes

F/# = f/D.  (1.22)

Equations (1.20) to (1.22) assume that the lens aperture D is uniformly filled with
light. For instances where this is not the case, such as illumination with a
Gaussian laser beam, D would be reduced to the effective aperture diameter
that the beam illuminates, with a corresponding increase in F/#.
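As a numerical sketch of these definitions (the aperture diameter, image distance, and wavelength are assumed example values, not taken from the text):

```python
def f_number_image_space(q_mm, d_mm):
    """(F/#)_image-space = q/D, per Eq. (1.21)."""
    return q_mm / d_mm

def airy_blur_diameter_um(wavelength_um, f_number):
    """Diameter to the first dark ring of the Airy pattern,
    2.44 * lambda * (F/#), containing ~84% of the power."""
    return 2.44 * wavelength_um * f_number

# Assumed example: 50-mm aperture, 100-mm image distance, 0.5-um light.
fnum = f_number_image_space(100.0, 50.0)   # F/2
blur = airy_blur_diameter_um(0.5, fnum)    # blur diameter in micrometers
```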
Using Eqs. (1.20), (1.21), or (1.22), we can calculate, either in object space
or in image space, the diffraction spot size from Eq. (1.17). We can also
Figure 1.28 Variables for calculating the diffraction MTF of a square-aperture system.
For the case of a circular aperture of diameter D, the system has the same
cutoff frequency, ξ_cutoff = 1/[λ(F/#)], but the MTF has a different functional
form:

MTF(ξ/ξ_cutoff) = (2/π){cos⁻¹(ξ/ξ_cutoff) − (ξ/ξ_cutoff)[1 − (ξ/ξ_cutoff)²]^(1/2)}  (1.28)

for ξ ≤ ξ_cutoff, and

MTF(ξ/ξ_cutoff) = 0  (1.29)

for ξ > ξ_cutoff. These diffraction-limited MTF curves are plotted in Fig. 1.29.
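The two universal curves are simple to evaluate. The sketch below implements the circular-aperture MTF of Eqs. (1.28) and (1.29), together with the standard linear falloff for a square aperture (its equation is not reproduced in this excerpt), both with ξ_cutoff = 1/[λ(F/#)]; the wavelength and F/# are assumed example values.

```python
import numpy as np

def mtf_circular(xi, xi_cutoff):
    """Diffraction-limited MTF, circular aperture, Eqs. (1.28)-(1.29):
    (2/pi)[arccos(u) - u*sqrt(1 - u^2)] with u = xi/xi_cutoff.
    Clipping u to [0, 1] makes the result 0 beyond cutoff."""
    u = np.clip(np.asarray(xi, dtype=float) / xi_cutoff, 0.0, 1.0)
    return (2.0 / np.pi) * (np.arccos(u) - u * np.sqrt(1.0 - u * u))

def mtf_rectangular(xi, xi_cutoff):
    """Square aperture: MTF falls linearly to zero at the same cutoff."""
    u = np.asarray(xi, dtype=float) / xi_cutoff
    return np.clip(1.0 - u, 0.0, None)

# Assumed example: lambda = 0.5e-3 mm, F/2 -> cutoff of 1000 cy/mm.
xi_c = 1.0 / (0.5e-3 * 2.0)
```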
The diffraction-limited MTF is an easy-to-calculate upper limit to
performance; we need only λ and F/# to compute it. An optical system cannot
perform better than its diffraction-limited MTF; any aberrations will pull the
MTF curve down. We can compare the performance specifications of a given
system to the diffraction-limited MTF curve to determine the feasibility of the
proposed specifications, to decide how much headroom has been left for
Figure 1.29 Universal curves representing diffraction-limited MTFs for incoherent systems
with either a circular or a rectangular aperture.
or
ξ_cutoff,img = 1/[λ(F/#)_img-space] = 1/[λ(q/D)] = 400 cy/mm.  (1.31)
Let us verify that this angular spatial frequency corresponds to the same
feature as that in Eq. (1.32). Referring to Fig. 1.31, we use the relationship
between object-space angle θ and image-plane distance X,

X = θf.  (1.34)

Inverting Eq. (1.34) to obtain the angular spatial frequency 1/θ, we have
Given that θ is in radians, if X and f have the same units, we can verify the
correspondence between the frequencies in Eqs. (1.32) and (1.33):
It is also of interest to verify that the diffraction MTF curves in Fig. 1.29
are consistent with the results of the simple 84% encircled-power diffraction
spot-size formula of 2.44λ(F/#). As a heuristic verification, in Fig. 1.32 we
create a pattern with adjacent bright and dark regions, whose widths are
2.44λ(F/#). We set the bright regions at a magnitude of 84% and the dark
regions at a magnitude of 16%, consistent with the amount of power inside
and outside the central lobe of the diffraction spot, respectively. Overlapping
adjacent diffraction spots with this spacing would create an irradiance
distribution approximating the situation shown. Considering a horizontal
one-dimensional spatial frequency across the pattern, we can calculate both
the spatial frequency and the modulation depth.
The fundamental spatial frequency of the pattern in Fig. 1.32 is
Figure 1.34 Diffraction-limited MTF curves for obscured-aperture systems (adapted from
Ref. 13).
We can calculate the OTF of an imaging system in the following way. The
wavefront aberration, defined at the exit pupil, is the departure from
sphericity of the nominally spherical wavefront proceeding toward a
particular image point, bounded by the dimensions of the exit pupil. The
OTF is the autocorrelation of this spatially bounded wavefront aberration.
We use the change of variables ξ = x/(λf) and η = y/(λf) to convert a spatial shift
in the autocorrelation to a spatial frequency. If there are no aberrations, the
calculation is simply the autocorrelation of the pupil transmittance function.
Any aberrations present in the system will reduce the MTF, given that the
positive and negative phase variations on the wavefront will autocorrelate to a
Figure 1.35 The effect of aberrations on MTF is to pull the transfer-function curve down.
lower value than would unaberrated wavefronts without such variations. This
calculation is the typical means of computing MTF in optical-design software
programs, accounting for aberrations (by means of the wavefront error) and
diffraction (by means of the pupil dimension and the wavelength).
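A minimal numeric sketch of this autocorrelation calculation follows; the grid size and pupil radius are arbitrary choices, and aberrations would enter as a phase factor across the pupil (here the wavefront error is zero, i.e., diffraction-limited).

```python
import numpy as np

def otf_from_pupil(pupil):
    """OTF as the normalized autocorrelation of the complex pupil
    function, computed via FFTs (autocorrelation theorem). Aberrations
    would enter as pupil * exp(i*2*pi*W), W in waves."""
    a = np.fft.fft2(pupil)
    otf = np.fft.ifft2(np.abs(a) ** 2)
    return otf / otf[0, 0]              # unit value at zero spatial frequency

# Clear circular pupil on a padded grid (no aberrations):
n = 256
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
pupil = ((x**2 + y**2) <= (n // 8) ** 2).astype(complex)
otf = otf_from_pupil(pupil)
mtf = np.abs(otf)
```

Because the autocorrelation is maximal at zero shift, the resulting MTF is bounded by its zero-frequency value of unity.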
Alternately, we can conveniently use a back-of-the-envelope method to
estimate MTF. If we have a raytraced spot diagram for the system or a spot
size based on geometrical-optics aberration formulae, we take this as the PSF
h(x,y) and calculate a geometrical MTF in the manner of Eqs. (1.6) and (1.7)
by Fourier transformation. We can approximate the overall MTF by
multiplying the geometrical and diffraction MTFs. This multiplication will
reduce the MTF below the diffraction-limited curve and will lower the cutoff
spatial frequency. By the convolution theorem of Eqs. (1.6) and (1.7), this is
equivalent to convolving the diffraction irradiance distribution with the PSF
of the aberration.14,15 This approach is suitable for encircled-energy
calculations but does not capture the fine spatial irradiance fluctuations of
the aberrated diffraction image.
Using this relation, we can express the Strehl ratio as the volume (area) under
the actual OTF curve divided by the volume (area) under the diffraction-
limited OTF curve:
S = [∬ OTF_actual(ξ, η) dξ dη] / [∬ OTF_diffraction(ξ, η) dξ dη].  (1.42)
A large Strehl ratio implies a large area under the MTF curve and high
irradiance at the image location. Aberration effects such as those seen in
Fig. 1.35 can be interpreted directly as the decrease in volume (area) under the
MTF curve.
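Discretizing Eq. (1.42), the Strehl ratio becomes a ratio of sums over sampled OTFs. The arrays below are toy shapes chosen only for illustration, not data for any real lens.

```python
import numpy as np

def strehl_from_otfs(otf_actual, otf_diffraction):
    """Strehl ratio per Eq. (1.42): volume under the actual OTF divided
    by volume under the diffraction-limited OTF, discretized as sums."""
    return float(np.real(np.sum(otf_actual)) / np.real(np.sum(otf_diffraction)))

# Toy OTFs: a separable linear falloff as the "diffraction-limited"
# curve, and an aberrated curve pulled down everywhere below DC.
f = np.linspace(1.0, 0.0, 50)
otf_dl = np.outer(f, f)       # unit value at zero frequency
otf_ab = otf_dl ** 2          # lower everywhere except at DC
s = strehl_from_otfs(otf_ab, otf_dl)
```

By construction, an aberration-free system gives S = 1, and pulling the curve down anywhere gives S < 1.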
Figure 1.38 Comparison of PSF and MTF for defocus and spherical aberration, for
different amounts of wavefront error (adapted from Ref. 17 with permission).
Figure 1.39 Comparison of PSF and MTF for coma, for different amounts of wavefront
error. Best and worst one-dimensional profiles are shown of the two-dimensional PSF and
MTF functions for different amounts of wavefront error (adapted from Ref. 17 with
permission).
Figure 1.40 Image quality depends on the orientation of the asymmetric PSF with respect
to the two-dimensional spatial-frequency components.
So, the MMC presents the minimum MTF found for any azimuth angle, as a
function of r. This displays information in a familiar one-dimensional form. If
used as a performance specification, the MMC would guarantee that a given
MTF specification is met for any possible orientation of spatial frequencies in
the image.
We illustrate this concept using the example of a Cooke triplet at a field
angle of 20 deg. The two-dimensional MTF for this situation has significant
Figure 1.41 Comparison of PSF and MTF for astigmatism, for different amounts of
wavefront error (adapted from Ref. 17 with permission). Best and worst one-dimensional
profiles are shown of the two-dimensional PSF and MTF functions for different amounts of
wavefront error.
Figure 1.42 Image quality depends on the orientation of the asymmetric PSF with respect
to the two-dimensional spatial-frequency components.
asymmetry, as seen in the color plot and wireframe plots of Fig. 1.43. We can
see qualitatively from those plots that the horizontal and vertical slices of the
MTF are an inadequate representation of the overall performance. In
Fig. 1.44 we plot the horizontal and vertical MTFs for this lens, along with
the MMC. This shows the potential utility of MMC as a practical
performance specification.
Figure 1.44 Comparison of horizontal and vertical MTFs and MMC for the example lens
(reprinted from Ref. 18).
so we usually find that the quality of the image is best near the optic axis and
not as good toward the edge of the image. In Fig. 1.45 we see some MTF
plots for an example lens at 10, 20, and 40 lp/mm spatial frequencies, as a
function of location of the image with respect to the optic axis. MTF falls for
higher spatial frequencies and for larger field heights, as expected. The solid
lines are for the optimal orientation of the two-dimensional spatial frequency,
and the dashed lines are for the worst orientation. The PSF is increasingly
asymmetric as the field height increases. Of course, the two plots converge for
small field heights.
MTF also depends on the axial position of the image plane with respect to
the optical system, known as the through-focus MTF. In Fig. 1.46 we see
Figure 1.45 Best and worst MTFs for an example lens at three specific spatial frequencies,
as a function of image-plane field height (adapted from Ref. 19 with permission).
1.7 Conclusion
Expression of image quality in terms of a transfer function provides additional
insight into the performance of an optical system, compared to describing the
irradiance of a blur spot or a specification of resolution. We can conveniently
account for the various contributions to image quality by multiplying transfer
functions for the different subsystems. The transfer function approach allows
us to directly see the effects of diffraction and aberrations at various spatial
frequencies.
Figure 1.46 MTFs for an example lens at three specific spatial frequencies as a function of
axial image-plane location. The zero defocusing location is set as best focus for F/1.4 at
20 lp/mm. Top plots are for F/1.4 aperture, and bottom plots are for F/4 aperture, with
defocusing in millimeters (adapted from Ref. 19 with permission).
Figure 1.47 Ray trace showing the presence of spherical aberration. Best focus is the
image plane location resulting in the smallest spot size. It is located between the marginal-
ray focus and the paraxial-ray focus. A reduced aperture diameter will tend to shift the
position of best focus away from the lens.
References
1. G. D. Boreman, A. B. James, and C. R. Costanzo, “Spatial harmonic
distortion: a test for focal plane nonlinearity,” Opt. Eng. 30, 609–614
(1991) [doi: 10.1117/12.55832].
2. G. D. Boreman and C. R. Costanzo, “Compensation for gain
nonuniformity and nonlinearity in HgCdTe infrared charge-coupled-
device focal planes,” Opt. Eng. 26, 981–984 (1987) [doi: 10.1117/12.
7974184].
3. M. Beran and G. Parrent, Theory of Partial Coherence, Prentice-Hall,
Englewood Cliffs, New Jersey (1964).
4. J. Gaskill, Linear Systems, Fourier Transforms, and Optics, John Wiley &
Sons, New York (1978).
5. K. J. Barnard, G. D. Boreman, A. E. Plogstedt, and B. K. Anderson,
“Modulation-transfer function measurement of SPRITE detectors:
sine-wave response,” Appl. Opt. 31(3), 144–147 (1992).
6. M. Marchywka and D. G. Socker, “Modulation transfer function
measurement technique for small-pixel detectors,” Appl. Opt. 31(34),
7198–7213 (1992).
7. N. Koren: www.normankoren.com/Tutorials/MTF.html; www.imatest.
com.
8. L. M. Biberman, “Image Quality,” Chapter 2 in Perception of Displayed
Information, L. M. Biberman, Ed., Plenum Press, New York, pp. 52–53
(1973).
9. H. L. Snyder, “Image Quality and Observer Performance,” Chapter 3 in
Perception of Displayed Information, L. M. Biberman, Ed., Plenum Press,
New York, pp. 87–117 (1973).
Further Reading
Baker, L. Selected Papers on Optical Transfer Function: Foundation and
Theory, SPIE Milestone Series, Vol. MS59, SPIE Press, Bellingham,
Washington (1992).
Williams, C. S., and Becklund, O. A., Introduction to the Optical Transfer
Function, Wiley, New York (1989); reprinted by SPIE Press, Bellingham,
Washington (2002).
Williams, T., The Optical Transfer Function of Imaging Systems, Institute of
Physics Press, Bristol (1999).
https://www.imatest.com/docs/
https://lenspire.zeiss.com/photo/en/article/overview-of-zeiss-camera-lenses-technical-articles
Chapter 2
MTF in Electro-optical Systems
In Chapter 1 we applied a transfer-function-based analysis to describe image
quality in classical optical systems, that is, systems with optical components
only. In this chapter we will examine the MTF of electro-optical systems, that
is, systems that use a combination of optics, scanners, detectors, electronics,
signal processors, and displays. To apply MTF concepts in the analysis of
electro-optical systems, we must generalize our assumptions of linearity and
shift invariance. Noise is inherent in any system with electronics. Linearity is
not strictly valid for systems that have an additive noise level because image
waveforms must be of sufficient irradiance to overcome the noise before they
can be considered to add linearly. The classical MTF theory presented in
Chapter 1 does not account for the effects of noise. We will demonstrate how
to broaden the MTF concept to include this issue. Electro-optical systems
typically include detectors or detector arrays for which the size of the detectors
and the spatial sampling interval are both finite. Because of the shift-variant
nature of the impulse response for sampled-data systems, we will develop the
concept of an average impulse response obtained over a statistical ensemble of
source positions to preserve the convenience of a transfer-function analysis.
We will also develop an expression for the MTF impact of irradiance
averaging over the finite sensor size. With these modifications, we can apply a
transfer-function approach to a wider range of situations.
configurations shown, we have two closely spaced point sources in the object
plane that fall within one detector footprint. The signal output from the sensor
will not indicate the fact that there are two sources. Our first task is to
quantify the spatial-frequency filtering inherent in an imaging system with
finite-sized detectors.
A square detector of size w × w performs spatial averaging of the scene
irradiance that falls on it. When we analyze the situation in one dimension, we
find that the integration of the scene irradiance f(x) over the detector surface is
equivalent to a convolution of f(x) and the rectangle function1 that describes
the detector responsivity:
g(x) = (1/w) ∫_{x−w/2}^{x+w/2} f(x′) dx′ = f(x) * (1/w) rect(x/w).   (2.1)

Taking the magnitude of the Fourier transform of the rect-function impulse response gives

MTF_footprint(ξ) = |sinc(wξ)| = |sin(πwξ)/(πwξ)|.   (2.2)
Equation (2.2) shows us that the smaller the sensor photosite dimension, the
broader the transfer function. This equation is a fundamental MTF component
for any imaging system with detectors. In any given situation, the detector
footprint may or may not be the main limitation to image quality, but its
contribution to a product such as Eq. (1.9) is always present. Equation (2.2) is
plotted in Fig. 2.3, where we see that the sinc-function MTF has its first zero
at ξ = 1/w. Let us consider the following plausibility argument to justify the fact
that the footprint MTF = 0 at ξ = 1/w.
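The footprint MTF of Eq. (2.2) is straightforward to evaluate numerically. The sketch below (the 10-μm detector width is an assumed value, chosen only for illustration) confirms unity response at dc and the first zero at ξ = 1/w:

```python
import numpy as np

def mtf_footprint(xi, w):
    """Detector-footprint MTF |sinc(w*xi)|; np.sinc(u) = sin(pi*u)/(pi*u)."""
    return np.abs(np.sinc(w * np.asarray(xi, dtype=float)))

w = 0.010                          # assumed 10-um detector width, in mm
xi = np.array([0.0, 50.0, 100.0])  # spatial frequencies in cy/mm
print(mtf_footprint(xi, w))        # 1 at dc, 2/pi at 50 cy/mm, ~0 at 1/w = 100
```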
Figure 2.4 represents spatial averaging of an input irradiance waveform
by sensors of a given dimension w. The individual sensors may represent either
different positions for a scanning sensor or discrete locations in a focal-plane
array. We will consider the effect of spatial sampling in a later section. Here
we consider exclusively the effect of the finite size of the photosensitive regions
of the sensors. We see that at low spatial frequencies there is almost no
reduction in modulation of the image irradiance waveform arising from
spatial averaging over the surfaces of the photosites. As the spatial frequency
increases, the finite size of the detectors becomes more significant. The
averaging leads to a decrease in the maximum values and an increase in the
minimum values of the image waveform, decreasing the modulation depth.
For the spatial frequency ξ = 1/w, one period of the irradiance waveform just
fits onto each detector. Regardless of the position of the input irradiance
waveform with respect to the photosite boundaries, each sensor will collect
exactly the same power (spatially integrated irradiance). The MTF is zero at
ξ = 1/w because each sensor reads the same level and there is no modulation
depth in the resulting output waveform.
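This plausibility argument can be checked numerically: average a sinusoid of frequency ξ = 1/w over a row of width-w detectors at several waveform positions, and the modulation depth of the detector outputs collapses to essentially zero. A minimal sketch (units are arbitrary):

```python
import numpy as np

w = 1.0                               # detector width (arbitrary units)
xi = 1.0 / w                          # spatial frequency at the footprint-MTF zero
x = np.linspace(0.0, 10 * w, 100001)  # fine grid spanning 10 detectors

def detector_outputs(phase):
    """Mean irradiance collected by each of 10 contiguous width-w detectors."""
    f = 1.0 + np.sin(2 * np.pi * xi * x + phase)   # non-negative irradiance
    return np.array([f[(x >= k * w) & (x < (k + 1) * w)].mean()
                     for k in range(10)])

for phase in (0.0, 1.0, 2.5):         # arbitrary positions of the waveform
    out = detector_outputs(phase)
    print(out.max() - out.min())      # modulation depth is essentially zero
```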
Extending our analysis to two dimensions, we consider the simple case of
a rectangular detector with different widths along the x and y directions:

h_footprint(x, y) = (1/w_x) rect(x/w_x) · (1/w_y) rect(y/w_y),   (2.3)

and

MTF_footprint(ξ, η) = |sinc(ξw_x) sinc(ηw_y)|   (2.4)

= |sin(πξw_x)/(πξw_x)| · |sin(πηw_y)/(πηw_y)|.   (2.5)
The impulse response in Eq. (2.3) is separable, that is, hfootprint(x,y) is simply
a function of x multiplied by a function of y. The simplicity of the separable
case is that both h(x,y) and H(j,h) are products of two one-dimensional
functions, with the x and y dependences completely separated. Occasionally, a
situation arises in which the detector responsivity function is not separable.2,3 In
that case, we can no longer write the MTF as the product of two one-
dimensional MTFs, as seen in Fig. 2.5. The MTF along the ξ and η spatial-
frequency directions is affected by both the x and y profiles of the detector
footprint. For example, the MTF along the ξ direction is not simply the
Fourier transform of the x profile of the footprint.
2.2 Sampling
Sampling is a necessary part of the data-acquisition process in any electro-
optical system. We will sample at spatial intervals Δx ≡ x_samp. The spatial
sampling rate is determined by the location of the detectors in a focal-plane
Figure 2.5 Example of a nonseparable detector footprint (adapted from Ref. 3).
44 Chapter 2
array. The process of spatial sampling has two main effects on image quality:
aliasing and the sampling MTF.
2.2.1 Aliasing
Aliasing is an image artifact that occurs when we insufficiently sample a
waveform. We assume that the image irradiance waveform of interest has
already been decomposed into its constituent sinusoids. Therefore, we can
consider a sinusoidal irradiance waveform of spatial frequency ξ. If we choose a
sampling interval sufficient to locate the peaks and valleys of the sinewave, then
we can reconstruct that particular frequency component unambiguously from
its sampled values, assuming that the samples are not all taken at the same level
(the 50%-amplitude point of the sinusoid). Thus, the two-samples-per-cycle
minimum sampling rate seen in Fig. 2.6 corresponds to the Nyquist condition
x_samp ≤ 1/(2ξ). If the sampling is less frequent [x_samp > 1/(2ξ)] than required by the Nyquist
condition, then we see the samples as representing a lower-frequency sinewave
(Fig. 2.7). Even though both sinewaves shown are consistent with the samples,
we will perceive the low-frequency waveform when looking at the sampled
values. This image artifact, where samples of a high-frequency waveform
appear to represent a low-frequency waveform, is an example of aliasing.
Aliasing is symmetric about the Nyquist frequency of ξ_Nyquist = 1/(2x_samp),
which means that the amount by which a waveform’s spatial frequency exceeds
1/(2x_samp) is the amount by which we perceive it to be below the Nyquist
frequency. So, a frequency transformation of

ξ_perceived = 2ξ_Nyquist − ξ   (2.8)

takes place between the input waveform and the aliased image data.
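The folding behavior is easy to verify numerically. In the sketch below (with an assumed sampling interval of 0.1 mm, so ξ_Nyquist = 5 cy/mm), a 7-cy/mm sinusoid produces the same samples, up to sign, as the 3-cy/mm sinusoid it aliases to:

```python
import numpy as np

dx = 0.1                 # assumed sampling interval (mm)
xi_nyq = 1 / (2 * dx)    # Nyquist frequency, 5 cy/mm

def folded(xi_in):
    """Perceived frequency after sampling: fold about the Nyquist frequency."""
    return 2 * xi_nyq - xi_in if xi_in > xi_nyq else xi_in

print(folded(7.0))       # 7 cy/mm is perceived as 3 cy/mm

n = np.arange(64)                            # sample indices
s_high = np.sin(2 * np.pi * 7.0 * n * dx)    # undersampled 7-cy/mm waveform
s_low = np.sin(2 * np.pi * 3.0 * n * dx)     # 3-cy/mm waveform
print(np.allclose(s_high, -s_low))           # identical sample sets (up to sign)
```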
Figure 2.8 shows an example of aliasing for the case of a radial bar target,
for which the spatial frequency increases toward the center. The right-hand
image has been sampled using a larger sampling interval. With an insufficient
spatial-sampling rate, we see that the high frequencies near the center are
aliased into the appearance of lower spatial frequencies.
Figure 2.9 is a three-bar target that shows aliasing artifacts. The left image
was acquired with a small spatial-sampling interval, and we see that the bars
have equal lines and spaces, and are of equal density. The right image was
acquired with a larger spatial-sampling interval. Although bar targets are not
periodic in the true sense, we can consider the nth harmonics of the
fundamental spatial frequency ξ_f as nξ_f. Some of these frequencies are above
ξ_Nyquist and are not adequately sampled. The fact that not all of the bars in a
given three-bar pattern are of the same width or density in the undersampled
image on the right is evidence of aliasing.
Figure 2.10 shows another example of aliasing using three versions of a
scene. Part (a) is a 512 × 512 image, which appears spatially continuous
without significant aliasing artifacts evident. Part (b) has been downsampled
to 128 × 128 pixels, and aliasing artifacts in sharp edges begin to be visible
because of the lower ξ_Nyquist. In part (c), the image has been downsampled to
64 × 64 pixels, and we see extensive aliasing artifacts as low-frequency
banding in the folds of the shirt and the sharp edges.
After the irradiance waveform has been sampled, aliasing artifacts cannot
be removed by filtering because, by Eq. (2.8), the aliased components have
been lowered in frequency to fall within the main spatial-frequency passband
of the system. Thus, to remove aliasing artifacts at this point requires the
attenuation of broad spatial-frequency ranges of the image data. We can
avoid aliasing in the first place by prefiltering the image, that is, bandlimiting
it before the sampling occurs. The ideal anti-aliasing filter, seen in Fig. 2.11,
would pass at unit amplitude all frequency components for which ξ < ξ_Nyquist
and attenuate completely all components for which ξ > ξ_Nyquist. The problem
is that neither the detector MTF (a sinc function) nor the optics MTF
(bounded by an autocorrelation function) follows the form of the desired anti-
aliasing filter.
An abrupt filter shape such as the one in Fig. 2.11 can be implemented in
the electronics subsystem. However, at that stage the image irradiance has
already been sampled by the sensors, so the electrical filter cannot effectively
serve an anti-aliasing function. The optics MTF offers some flexibility as an
anti-aliasing filter but, because it is bounded by the autocorrelation function
of the aperture, it does not allow for the abrupt-cutoff behavior desired. By
choosing λ and F/# we can control the cutoff frequency of the optics MTF.
However, this forces a tradeoff of reduced MTF at frequencies less than
ξ_Nyquist against the amount of residual aliasing. Using the diffraction-limited
MTF as in Eq. (1.26) or (1.28) and Fig. 1.29 as an anti-aliasing filter requires
setting the cutoff so that MTF(ξ ≥ ξ_Nyquist) = 0. This results in a loss of
considerable area under the MTF curve at frequencies below Nyquist. If we
set the cutoff frequency higher, we preserve additional modulation depth for
ξ < ξ_Nyquist at the expense of nonzero MTF above Nyquist, and hence higher
visibility of aliasing artifacts. A small amount of defocus is occasionally used
in a bandlimiting context, but the MTF does not have the desired functional
form either, and hence a similar tradeoff applies.
Birefringent filters that are sensitive to the polarization state of the
incident radiation can be configured to perform an anti-aliasing function,4
although still without the ideal abrupt-cutoff MTF shown in Fig. 2.11. A filter
of the type shown in Fig. 2.12 is particularly useful in color focal-plane arrays,
where different spectral filters (red, blue, green) are placed on adjacent
photosites. Because most visual information is received in the green portion of
the spectrum, it is radiometrically advantageous to set the sampling interval
for the red- and blue-filtered detectors wider than for the green-filtered
detectors. If we consider each color separately, we find a situation equivalent
to the sparse-array configuration seen in Fig. 2.12, where the active photosites
for a given color are shown shaded. The function of the birefringent filter is to
split an incident ray into two components. A single point in object space maps
to two points in image space, with a spacing equal to one-half of the detector-
to-detector distance. The impulse response of the filter is two delta functions:
h_filter(x) = (1/2){δ(x) + δ(x + x_samp/2)}.   (2.9)

The magnitude of its Fourier transform, |cos(πξ x_samp/2)|, has its first zero
at ξ = 1/x_samp = 2ξ_Nyquist. The birefringent filter thus
provides a degree of prefiltering, in that the bandlimiting function is applied
before the image is sampled by the detector array.
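A sketch of the prefiltering that the two-spot impulse response of Eq. (2.9) provides, using the cosine transfer function that follows from it (the sampling interval below is an assumed value):

```python
import numpy as np

x_samp = 0.02              # assumed sampling interval (mm)
xi_nyq = 1 / (2 * x_samp)  # Nyquist frequency, 25 cy/mm

def mtf_birefringent(xi):
    """|cos(pi*xi*x_samp/2)|: Fourier magnitude of two deltas x_samp/2 apart."""
    return np.abs(np.cos(np.pi * np.asarray(xi, dtype=float) * x_samp / 2))

for xi in (0.0, xi_nyq, 2 * xi_nyq):
    print(round(float(mtf_birefringent(xi)), 4))
# unity at dc, cos(pi/4) ~ 0.7071 at Nyquist, and zero at 2*xi_Nyquist
```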
We see from Eq. (2.11) that wider-spaced sampling produces an image with
poorer image quality. An average sampling MTF can be defined as the
magnitude of the Fourier transform of h_sampling(x, y); in one dimension, for a
sampling interval x_samp,

MTF_sampling(ξ) = |sinc(ξ x_samp)| = |sin(πξ x_samp)/(πξ x_samp)|.   (2.13)
The sampling MTF is equivalent to the average of the MTFs that would be
realized for an ensemble of image locations, uniformly distributed with respect
to the sampling sites. As Fig. 2.13 demonstrates, when the alignment is
optimum, the MTF is broad, but for other source positions, the MTF is
narrower. The sampling MTF is the average over all possible MTFs. Thus
defined, the sampling MTF is a shift-invariant quantity, and we can proceed
with a usual transfer-function-based analysis. The sampling MTF of Eq. (2.13)
is a component that multiplies the other MTF components for the system.
However, this sampling MTF does not contribute in an MTF-measurement
setup where the test target is aligned with the sampling sites because the central
assumption in its derivation is the random position of any image feature with
respect to the sampling sites. In typical MTF test procedures, we adjust the
position of the test target to yield the best output signal (most compact output,
best appearance of bar-target images). In the typical test-setup case, the
sampling MTF equals unity except where random-noise test targets6 that
explicitly include the sampling MTF in the measurement are used. Because
typical test procedures preclude the sampling MTF from contributing to the
measurements, the sampling MTF is often forgotten in a system analysis.
However, when the scene being imaged has no net alignment with respect to the
sampling sites, the sampling MTF will contribute in practice and should
therefore be included in the system-performance modeling.
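To see how quickly these factors compound, the sketch below cascades a diffraction-limited optics MTF, a detector-footprint MTF, and the sampling MTF at the Nyquist frequency of a contiguous array. All parameter values (wavelength, F/#, pixel size) are assumed, purely for illustration:

```python
import numpy as np

lam = 0.004   # assumed wavelength: 4 um, expressed in mm
fnum = 2.0    # assumed F/#
w = 0.020     # assumed detector width (mm)
dx = 0.020    # sampling interval = pitch for a contiguous array (mm)

def mtf_optics(xi):
    """Diffraction-limited MTF for a circular aperture, cutoff 1/(lam*F#)."""
    u = np.clip(np.abs(xi) * lam * fnum, 0.0, 1.0)
    return (2 / np.pi) * (np.arccos(u) - u * np.sqrt(1 - u**2))

def mtf_sinc(xi, width):
    """Sinc MTF for a rect impulse response of the given width."""
    return np.abs(np.sinc(width * xi))

xi_nyq = 1 / (2 * dx)                       # 25 cy/mm
parts = (mtf_optics(xi_nyq), mtf_sinc(xi_nyq, w), mtf_sinc(xi_nyq, dx))
print([round(float(p), 3) for p in parts])  # optics, footprint, sampling
print(round(float(np.prod(parts)), 3))      # cascaded MTF: well below any factor
```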
The combined MTF of the optics, detector footprint, and sampling can be
much less than initially expected, especially considering two common
misconceptions that neglect the detector and sampling MTFs. The first error
is to assume that if the optics blur-spot size is matched to the detector size then
there is no additional image-quality degradation from the finite detector size.
We can see from Eq. (1.9) that the optics and detector MTFs multiply, and
hence both terms contribute. Also, it is quite common to forget the sampling
MTF.

Figure 2.14 MTF contributions multiply for detector footprint, optics blur spot, and
sampling.

In the microscan technique, a scan mirror or a beam
steerer is used, in either case slightly displacing the line of sight. Successive frames of
displaced image samples are obtained and interlaced with appropriate spatial
offsets. Usually, we collect four frames of data at half-pixel relative shifts, as
shown in Fig. 2.15. In part (a) we see the original image of the object
superimposed on the detector array. In part (b) the image location with
respect to the detector array is shifted by a half-pixel spacing in the horizontal
direction. In part (c) the image location is shifted by a half-pixel spacing in the
vertical direction. In part (d) the image location is shifted by a half-pixel
spacing in both the horizontal and vertical directions.
The four frames are interlaced to produce an output frame with twice the
effective sampling rate in each direction. Finer sampling yields better
sampling MTF along with higher Nyquist frequencies. The fact that
microscanning produces better pictures (Fig. 2.16) is intuitive proof of the
existence of a sampling MTF contribution because the detector size and the
optics MTF are both unchanged. The single MTF component that is
improved by the microscan technique is the sampling MTF. The drawback to
microscanning, from a systems viewpoint, is that the frame takes longer to
acquire for a given integration time. Alternatively, if we keep the frame rate
constant, the integration time decreases, which can have a negative impact on
Figure 2.15 Illustration of the microscan process (adapted from Ref. 9).
Figure 2.16 Microscan imagery: (left) the original image, (center) the four shifted images,
and (right) the interlaced image (adapted from Ref. 10).
Figure 2.19 MTF for a contiguous focal-plane array with samples at half-pixel intervals in x
and y directions.
Figure 2.20 Two positions of a detector: at the start and at the end of the overlapping of the
IFOV with a delta function scene feature.
Typical practice in scanned sensor systems is to sample the analog signal from
the detector at time intervals equivalent to two samples per detector width,
known as “twice per dwell.” Finer sampling is certainly possible, but we
obtain the maximum increase in image quality by going from one to two
samples per dwell. Beyond that spatial sampling rate, there is a diminishing
return in terms of image quality. Let us see why two samples per dwell has
been such a popular operating point, with reference to Fig. 2.21. With a
sampling interval of w/2, the x-direction Nyquist frequency has been increased
to ξ = 1/w. This higher aliasing frequency is beneficial because the usable
bandwidth has been increased, but the other factor is that now the MTF of the
detector footprint goes through its first zero at the Nyquist frequency. Because
the transfer function is zero at Nyquist, the image artifacts arising from
aliasing are naturally suppressed. A final advantage is that the x-direction
MTF has been increased because of the broader sampling-MTF sinc function,
which has its first zero at ξ = 2/w. Since the detectors are contiguous in the
y direction, the aliasing frequency is η = 0.5/w, and the overall MTF as a
function of h is just the sinc-squared function seen in the analysis of the
contiguous FPA in Fig. 2.18.
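The coincidence of the Nyquist frequency with the footprint-MTF zero at twice-per-dwell sampling can be verified directly (units are arbitrary):

```python
import numpy as np

w = 1.0                  # detector width (arbitrary units)
dx = w / 2               # "twice per dwell": two samples per detector width
xi_nyq = 1 / (2 * dx)    # Nyquist frequency = 1/w

print(xi_nyq == 1 / w)             # Nyquist sits exactly at 1/w
print(abs(np.sinc(w * xi_nyq)))    # footprint MTF at Nyquist: ~0
print(abs(np.sinc(dx * xi_nyq)))   # sampling MTF at Nyquist: 2/pi
```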
In Fig. 2.22 we extend this analysis to a pair of staggered linear arrays
offset by half of the detector-to-detector spacing, which is a commonly used
configuration. Once again, we must perform additional data processing to
interlace the information gathered from both sensor arrays into a high-
resolution image. The advantage we gain is that an effective twice-per-dwell
sampling in both the x and y directions is achieved, with wider h-direction
MTF, higher η-direction aliasing frequency, and additional suppression of
aliasing artifacts.

There are sampling geometries in which directions other than x and y are
important for image formation, such as hexagonal focal-plane arrays, fiber
bundles, and laser printers. Once the two-
dimensional sampling MTF is in hand, we multiply it by the two-dimensional
Fourier transform of the pixel footprint to yield the overall sampling-and-
averaging array MTF.13,14 A one-dimensional sinc-function sampling MTF
along the lines of Eq. (2.13) applies to the spacing between the nearest
neighbors in any direction because the distance between samples in any
direction can be modeled as a rect-function impulse response (assuming a
random position of the scene with respect to the sampling sites). The width of
the rect function depends on the particular direction θ in which the next-
nearest neighbor is encountered.
Figure 2.24 The nearest-neighbor distance, and therefore the sampling MTF, is a
discontinuous function of angle θ.
2.3 Crosstalk
Crosstalk arises when the signal of a particular detector on a FPA contributes
to or induces a spurious signal on its neighbor. Origins of crosstalk include
charge-transfer inefficiency, photogenerated carrier diffusion, inter-pixel
capacitance caused by coupling of close capacitors inherent in FPA pixel
structures, and channel-to-channel crosstalk caused by the wiring harness and
readout electronics.
One way to measure inter-pixel crosstalk is by illuminating a single pixel of
the FPA with an image of an x-ray source. The x-ray photons will generate
charge carriers that give a signal from the illuminated pixel (and adjacent pixels).
If we use a short-wavelength source, we can generate a spot that is smaller than
the pixel of the FPA to be measured, which would usually not be possible
because of diffraction if we chose a source that was within the response band of
the FPA. As Fig. 2.25 shows, crosstalk can be approximately modeled with an
impulse response of a Gaussian or negative-exponential form. Typically, there is
not a large number of sample points because only the few closest channels will
have an appreciable crosstalk signal. Thus, we have some flexibility in picking
the fitting function, as long as the samples that are present are appropriately
represented. If we Fourier transform the impulse response, we obtain a crosstalk
MTF component. We then cascade this crosstalk MTF component with other
system MTF contributions such as footprint and sampling MTFs.
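A sketch of this fitting-and-transforming procedure, with invented crosstalk samples (a real measurement would supply these) and a Gaussian model whose Fourier transform is known analytically:

```python
import numpy as np

# Hypothetical crosstalk samples: relative signal vs. pixel offset (in pitches)
offsets = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
signal = np.array([0.01, 0.12, 1.00, 0.12, 0.01])   # invented, for illustration

# Least-squares fit of signal ~ exp(-a * x^2), i.e. a Gaussian with
# a = 1/(2*sigma^2), done linearly on log(signal).
a = -np.sum(offsets**2 * np.log(signal)) / np.sum(offsets**4)
sigma = np.sqrt(1.0 / (2.0 * a))      # Gaussian width in pixel pitches

def mtf_crosstalk(xi):
    """Analytic FT magnitude of the normalized Gaussian impulse response."""
    return np.exp(-2 * np.pi**2 * sigma**2 * np.asarray(xi, dtype=float)**2)

print(round(float(sigma), 3))                # fitted width
print(round(float(mtf_crosstalk(0.25)), 3))  # crosstalk MTF at 0.25 cy/pixel
# This component then multiplies the footprint and sampling MTFs.
```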
Figure 2.25 Modeling the inter-pixel crosstalk response. Only the central sensor is
illuminated using a short-wavelength source.
Figure 2.26 (left) Image of the charge injection pattern on the FPA. (right) Image of a
magnified view of the signal read out from nearby pixels (adapted from Ref. 15).
Figure 2.28 Variation of carrier-diffusion MTF with illumination wavelength for a Si FPA.
Figure 2.29 MTF vs. integration length for a SPRITE detector (adapted from Ref. 18).
or

f [Hz] = v_scan,angular [mrad/s] · ξ [cy/mrad].   (2.19)
portion of the system. The usefulness of a boost filter is limited by the effects
of electronics noise. At any given frequency, an ideal boost filter would
amplify signal and noise equally, but in practice, a boost filter increases the
electrical noise-equivalent bandwidth and hence decreases the image signal-to-
noise ratio (SNR).20 Both a high MTF and a high SNR are desirable, so in the
design of a boost filter we need to decide how much gain to use and what
frequencies we want to emphasize.
An image-quality criterion that we can use to quantify this tradeoff is the
MTF area (MTFA), which has been validated by field trials to correlate well
with image detectability.21 MTFA is the area between the MTF curve and the
noise-equivalent modulation (NEM) curve. The NEM characterizes the
electronics noise in terms of modulation depth, being defined as the amount of
modulation depth needed to yield an SNR of unity. The ratio of MTF to
NEM at any spatial frequency can be interpreted as an SNR. Because the
electronics noise is frequency dependent, the NEM is usually a function of
spatial frequency. A convenient representation is to plot the MTF and the
NEM on the same graph, as seen in Fig. 2.31. The limiting resolution is the
spatial frequency where the curves cross.
MTFA_before-boost = ∫_{ξ1}^{ξ2} {MTF(ξ) − NEM(ξ)} dξ.   (2.21)
An ideal boost filter with gain function B(j) amplifies signal and noise equally
at any frequency, so after the boost, the MTFA becomes
MTFA_after-boost = ∫_{ξ1}^{ξ2} B(ξ){MTF(ξ) − NEM(ξ)} dξ.   (2.22)
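The MTFA bookkeeping can be sketched numerically with toy MTF and NEM curves (both invented for illustration): integrate MTF − NEM from ξ1 = 0 up to the limiting resolution, the frequency at which the two curves cross.

```python
import numpy as np

xi = np.linspace(0.0, 100.0, 2001)   # spatial frequency, cy/mm
dxi = xi[1] - xi[0]
mtf = np.exp(-xi / 40.0)             # toy falling MTF curve
nem = 0.02 * np.exp(xi / 60.0)       # toy rising NEM curve

valid = mtf >= nem                   # region between xi_1 = 0 and the crossing
xi_limit = xi[valid][-1]             # limiting resolution (curve crossing)
mtfa = float(np.sum((mtf - nem)[valid]) * dxi)   # Riemann-sum area

print(round(float(xi_limit), 1))     # crossing near 94 cy/mm for these toys
print(round(mtfa, 2))                # area between the curves
```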
2.5 Conclusion
We can apply a transfer-function analysis that was originally developed for
classical optical systems to electro-optical systems by generalizing the
assumptions of linearity and shift invariance. Linearity is not strictly valid
for systems that have an additive noise level because image waveforms must
be of sufficient irradiance to overcome the noise before they can be considered
to add linearly. The definition of NEM allows us to consider a spatial-
frequency-dependent signal-to-noise ratio rather than simply a transfer
function. Shift invariance is not valid for sampled-data systems; however, to
preserve the convenience of a transfer-function analysis, we defined an average
impulse response over a statistical ensemble of source positions, which yields a
shift-invariant sampling MTF.
References
1. J. Gaskill, Linear Systems, Fourier Transforms, and Optics, John Wiley &
Sons, New York (1978).
2. G. D. Boreman and A. Plogstedt, “Spatial filtering by a nonrectangular
detector,” Appl. Opt. 28(6), 1165–1168 (1989).
3. K. J. Barnard and G. D. Boreman, “Modulation transfer function of
hexagonal staring focal plane arrays,” Opt. Eng. 30(12), 1915–1919 (1991)
[doi: 10.1117/12.56012].
4. J. E. Greivenkamp, “Color-dependent optical prefilter for suppression of
aliasing artifacts,” Appl. Opt. 29(5), 676–684 (1990).
5. S. K. Park, R. Schowengerdt, and M.-A. Kaczynski, “Modulation-
transfer-function analysis for sampled image systems,” Appl. Opt. 23(15),
2572–2582 (1984).
6. A. Daniels, G. D. Boreman, A. Ducharme, and E. Sapir, “Random
transparency targets for modulation transfer function measurement in the
visible and IR,” Opt. Eng. 34(3), 860–868 (1995) [doi: 10.1117/12.190433].
7. K. J. Barnard, E. A. Watson, and P. F. McManamon, “Nonmechanical
microscanning using optical space-fed phased arrays,” Opt. Eng. 33(9),
3063–3071 (1994) [doi: 10.1117/12.178261].
8. K. J. Barnard and E. A. Watson, “Effects of image noise on
submicroscan interpolation,” Opt. Eng. 34(11), 3165–3173 (1995)
[doi: 10.1117/12.213572].
9. E. A. Watson, R. A. Muse, and F. P. Blommel, “Aliasing and blurring in
microscanned imagery,” Proc. SPIE 1689, 242–250 (1992) [doi: 10.
1117/12.137955].
10. J. D. Fanning and J. P. Reynolds, “Target identification performance of
superresolution versus dither,” Proc. SPIE 6941, 69410N (2008) [doi: 10.
1117/12.782274].
11. L. Huang and U. L. Osterberg, “Measurement of cross talk in order-
packed image-fiber bundles,” Proc. SPIE 2536, 480–488 (1995) [doi:
10.1117/12.218456].
12. A. Komiyama and M. Hashimoto, “Crosstalk and mode coupling
between cores of image fibers,” Electron. Lett. 25(16), 1101–1103 (1989).
13. O. Hadar, D. Dogariu, and G. D. Boreman, “Angular dependence of
sampling modulation transfer function,” Appl. Opt. 36(28), 7210–7216
(1997).
Chapter 3
Point-, Line-, and Edge-Spread Function Measurement of MTF
We initially assume that the image receiver is continuously sampled; that is,
we do not need to consider the finite size of pixels nor the finite distance
between samples. We will address these aspects of the measurement-
instrument response later in this chapter. Here we assume that we can
measure the image-irradiance distribution g(x,y) to the necessary spatial
precision. If the object is truly a point source, the two-dimensional image-
irradiance distribution g(x,y) equals the impulse response h(x,y). This is also
called the point-spread function (PSF).
Figure 3.3 The LSF is the two-dimensional convolution of the line-source object with the
PSF (adapted from Ref. 1 with permission; © 1978 John Wiley & Sons).
Each point in the line source produces a PSF in the image plane. These
displaced PSFs overlap in the vertical direction, and their sum forms the LSF.
As seen schematically in Fig. 3.3, the LSF is the two-dimensional convolution
(denoted by **) of the line-source object with the impulse response of the
image-forming system.
We can obtain other profiles of the transfer function by reorienting the line
source. For instance, if we turn the line source by an in-plane angle of 90 deg,
we obtain the orthogonal LSF profile, and hence the transfer-function profile
along the other spatial-frequency direction.
Figure 3.4 Comparison of the x-direction functional forms of the PSF and LSF for a
diffraction-limited system. The Airy-disc radius is 1.22 l(F/#).
In Fig. 3.4 we compare the PSF and LSF for a diffraction-limited system. We
see that, while the PSF(x,0) has zeros in the pattern, the LSF(x) does not.
The ESF is the convolution of the PSF with the unit-step function.
The y convolution of the PSF with a constant produces an LSF, and the x
convolution with the step function produces a cumulative integration, as seen
schematically in Fig. 3.6:
Figure 3.6 The ESF is the two-dimensional convolution of the edge-source object with the
PSF (adapted from Ref. 1 with permission; © 1978 John Wiley & Sons).
ESF(x) = PSF(x, y) ** [step(x) 1(y)] = ∫_{−∞}^{x} LSF(x′) dx′.   (3.13)
Figure 3.7 Plot of the ESF for a diffraction-limited system. The Airy-disc radius is 1.22l(F/#).
(d/dx){ESF(x)} = (d/dx) ∫_{−∞}^{x} LSF(x′) dx′ = LSF(x).   (3.15)
In the infrared, we typically use blackbodies as the flux sources. Having sufficient flux is usually not an
issue in the visible because hotter sources are typically used.
The LSF method provides more image-plane flux than does the PSF test.
The ESF setup provides even more flux and has the added advantage that a
knife edge avoids slit-width issues. However, the ESF method requires a
spatial-derivative operation, which accentuates noise in the data. If we reduce
noise by convolution with a spatial kernel, the data-smoothing operation itself
has an MTF contribution.
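The derivative step of Eq. (3.15), and the noise penalty it carries, can be illustrated with a synthetic Gaussian edge (the LSF width and the noise level below are assumed values):

```python
import numpy as np
from math import erf

sigma = 1.0                                # assumed LSF width
x = np.linspace(-5.0, 5.0, 1001)
lsf_true = np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
esf = np.array([0.5 * (1 + erf(xv / (sigma * np.sqrt(2)))) for xv in x])

lsf_est = np.gradient(esf, x)              # d/dx of the noiseless ESF
print(np.max(np.abs(lsf_est - lsf_true)))  # small discretization error

rng = np.random.default_rng(0)
noisy_esf = esf + 1e-3 * rng.standard_normal(x.size)   # assumed noise level
lsf_noisy = np.gradient(noisy_esf, x)
print(np.std(lsf_noisy - lsf_true))        # derivative strongly amplifies noise
```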
In any wavelength region, we can use a laser source to illuminate the
pinhole. The spatial coherence properties of the laser do not complicate the
interpretation of PSF data if the pinhole is small enough to act as a point
source (by definition, spatially coherent regardless of the coherence of the
illumination source). An illuminated pinhole acts as a point source if it is
smaller than both the central lobe of the PSF of the system that illuminates the
pinhole and the central lobe of the PSF of the system under test, geometrically
projected (with appropriate magnification) to the source plane. Even with a
laser-illuminated pinhole, the PSF measurement yields an incoherent MTF
because the irradiance of the PSF is measured rather than the electric field.
For sources of extended spatial dimension, such as those for LSF and ESF
tests, we must ensure that the coherence properties of the illumination do not
introduce interference-fringe artifacts into the data.
better SNR. Note, however, that larger detectors generally exhibit more noise
than smaller detectors; the rms noise is proportional to the square root of the
sensor area. This dependence on area reduces the SNR gain somewhat if a
larger detector is not fully illuminated. But, if the collected power (and hence
the signal) is proportional to the detector area and the rms noise is
proportional to the square root of the detector area, the more flux we can
collect, the better our measurement SNR, even if that means we must use a
larger detector.
In the configurations illustrated in Figs. 3.1, 3.2, and 3.5, we assumed a
continuously sampled image receiver, which is analogous to a point receiver
that scans continuously. If we want to obtain PSF data, our test setup must
include a point source and this type of point receiver. The only option to
increase SNR if we use a PSF-test setup is to increase the source brightness or
to average over many data sets.
However, for an LSF measurement, we can accomplish the measurement
in a number of ways, some of which yield a better SNR than others. We can
use a linear source and a point receiver, such as the configuration seen in
Fig. 3.2. This system will give us a better SNR than PSF measurement because
we are using a larger-area source. Equivalently, as far as the LSF data set is
concerned, we can use the configuration seen in Fig. 3.9: a point source and a
slit detector (or a slit in front of a large-area detector). The data acquired are
equivalent to the data for an LSF test because of the averaging in the vertical
direction. Similar to the situation using a line source and a point receiver, this
collects more flux than a PSF test and has a better SNR. However, since we are
using a linear detector, we might also want to use a linear source (Fig. 3.10).
This configuration still provides data for an LSF test, but now the source and
the receiver have the same geometry. This arrangement collects the most flux
and will provide the best SNR of any LSF test setup.
A number of different configurations will work for ESF tests, and some
are better than others from an SNR viewpoint. We begin with a knife-edge
source and a scanned point receiver (Fig. 3.5). We can collect more flux with
the configuration of Fig. 3.11, where the ESF measurement is performed with
Figure 3.9 A PSF test performed with a scanning linear detector produces data equivalent
to an LSF test.
Figure 3.10 An LSF test performed with a linear detector produces a better SNR than
when performed using a point receiver.
Figure 3.11 ESF test setup using a point source and a scanning knife edge with a large-
area detector.
a point source and where a knife edge in front of a large detector serves as the
image receiver. We will obtain a better SNR (and the same ESF data) using
the setup illustrated in Fig. 3.12, which involves a line source and a scanning
knife edge in front of a large-area detector. We can also use a knife-edge
source and a scanning linear receiver (Fig. 3.13) or a knife-edge source and a
scanning knife edge in front of a large-area detector (Fig. 3.14) because the
data set is constant in the vertical direction. The measurement configuration
Figure 3.12 ESF test configuration using a slit source and a scanning knife edge with a
large-area detector.
Figure 3.13 ESF test configuration using an edge source and a scanning linear detector.
Figure 3.14 ESF test configuration using an edge source and a scanning knife edge with a
large-area detector.
of Fig. 3.14 should produce the highest SNR, assuming that the detector is of
an appropriate size to accommodate the image-irradiance distribution (that is,
it is not significantly oversized).
Summing the PSF data along the y direction and accumulating them along the
x direction yields an ESF measurement in the x direction:
Figure 3.15 PSF test configuration using a two-dimensional detector array can be used to
produce PSF, LSF, and ESF data.
ESF(x_i′) = Σ_{j=1}^{M} Σ_{i=1}^{i′} PSF(x_i, y_j).   (3.18)
Because of signal averaging, the LSF and ESF test data will have a better
SNR than the original PSF test.
Similarly, using a line source oriented along the y direction, summing (or
averaging) the LSF data along y yields an LSF measurement with better
SNR. Accumulating the LSF data along x yields an ESF measurement. If we
sum (or average) the ESF data along y, we obtain an ESF measurement with
better SNR.
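These reductions can be sketched directly on a 2D PSF array. Here a Gaussian PSF stands in for measured (background-subtracted) data, and the grid parameters are invented for illustration only:

```python
import numpy as np

n = 65                                 # assumed array size (pixels)
y, x = np.mgrid[-4:4:complex(0, n), -4:4:complex(0, n)]
psf = np.exp(-(x**2 + y**2) / 2.0)     # stand-in for measured PSF data

lsf = psf.sum(axis=0)                  # Eq.-(3.17)-style sum over rows j
esf = np.cumsum(lsf)                   # Eq.-(3.18)-style running sum over i
esf = esf / esf[-1]                    # normalize to a unit step

print(int(lsf.argmax()) == n // 2)     # LSF peaks at the line position
print(round(float(esf[n // 2]), 2))    # ESF is near 0.5 at the edge center
```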
However, when using the signal-averaging techniques just described, we
must be sure to subtract any background-signal level in the data. Expressions
such as Eqs. (3.17) and (3.18) assume that the detector data are just the image-
plane flux. Residual dark-background signal at each pixel, even if low-level,
can become significant if many pixels are added. Another item we must
consider is whether or not to window the data before a summation is
performed. Often, we only use data from the central region of the image,
where the flux level is highest and the imaging optic has the best performance.
If the lens under test has field-dependent aberrations, it is particularly
important that we use data from the region of the sensor array that contains
the data from the FOV of interest.
Also, if the signal-processing procedure involves a summation of data
over columns, we must ensure that each column has the same data, i.e., there
is no unintended in-plane angular tilt of a slit or edge source with respect to
columns. In taking a summation, spatial broadening of the measured response
will occur if the slit or edge is not precisely parallel to the columnar structure.
If there is a tilt, two (or more) adjacent columns can receive significant
portions of the signal irradiance. If the tilt is accounted for, and the data from
successive rows are interlaced with the appropriate spatial offset, then the data
do not suffer from unintended broadening. We will take up that issue later in
this chapter.
and Eq. (1.7), where F, G, and H are the object spectrum, the image spectrum,
and the transfer function, respectively:
If the input object f(x) = δ(x), then the image g(x) is directly the PSF h(x):
and
where the noise spectrum N(ξ) is defined as the square root of the power
spectral density (PSD) of the electronics noise:

$$N(\xi) = \sqrt{\mathrm{PSD}(\xi)}. \qquad (3.27)$$
Figure 3.16 Once the source spectrum has fallen to the level of the noise spectrum, MTF-
test results are invalid because of division-by-zero artifacts.
This term is one component of the instrumental MTF that should be divided
out from the measured MTF to obtain the MTF of the unit under test.
Figure 3.17 (left) Registering scans with a tilted knife edge. (a) Sampling grid with the knife
edge skewed from perpendicular. (b) Knife-edge shift in successive scans. (c) Combined
scan with reregistered edges. (right) Tilted-knife-edge scan. (Left diagram reprinted from
Ref. 4.)
row, allowing the proper position registration of the individual ESF data sets.
The superposition of the data sets will thus have a very fine sampling
(Fig. 3.17). The method assumes that the ESF of the system under test is
constant over the measurement region, so the measurement FOV in the along-
edge direction should be small enough that this condition is satisfied.
Because computation of MTF will require a spatial derivative to be
performed on the ESF data (Fig. 3.8), we may want to smooth and resample
the high-resolution ESF data set with a moving-window average to reduce
noise. A nearest-neighbor smoothing is usually sufficient and will have a
minimal MTF impact. Another technique that we can use to reduce the noise
before taking the derivative is to fit the high-resolution ESF data set to a
suitable functional form such as a cumulative Gaussian, a sigmoid function,
or a low-order polynomial. We can then take the derivative on the functional
form rather than on the data itself.
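The smoothing-and-differentiation step can be sketched as follows. This is a minimal illustration assuming uniformly spaced ESF samples; the moving-window width and the noise-free Gaussian test case are my choices, not values from the text:

```python
import numpy as np

def smooth_esf(esf, window=3):
    """Nearest-neighbor (moving-window) average; edges padded by replication."""
    pad = window // 2
    kernel = np.ones(window) / window
    return np.convolve(np.pad(esf, pad, mode="edge"), kernel, mode="valid")

def mtf_from_esf(esf, dx):
    """Differentiate the ESF to get the LSF, then FFT and normalize at dc."""
    lsf = np.gradient(esf, dx)
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]

# Noise-free check case: the ESF of a Gaussian LSF (sigma = 1, illustrative).
x = np.linspace(-5.0, 5.0, 501)
esf = np.cumsum(np.exp(-x ** 2 / 2.0))
esf /= esf[-1]
mtf = mtf_from_esf(smooth_esf(esf), x[1] - x[0])
```

With the three-point window, the smoothing filter's own transfer function is nearly unity over the frequencies of interest, which is the "minimal MTF impact" the text refers to; a wider window would itself have to be divided out as part of the instrumental MTF.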
The other fine point in the design of the measurement involves deciding
what tilt to use. To ensure a uniform distribution of positions of the edge with
respect to the columnar structure, we should use at least a one-pixel difference
Figure 3.18 A knife-edge tilt that is too large can produce a small number of redundant
data-registration positions.
between the top and bottom rows of the scan data. Such fine control of the
angular position of the knife edge is not required, and we can use a somewhat
more pronounced tilt. Reference 4 suggests one pixel of horizontal position
difference of the knife edge over 64 scan lines. This criterion should produce a
good oversampling, and the overlap of data-point positions over an entire data
set of more than 64 rows should be helpful in reducing noise. We should avoid
significantly more pronounced tilts because we want a uniform distribution of
positions of the edge with respect to the columnar structure. Consider the
extreme example of a one-pixel horizontal offset over four scan lines (14-deg
tilt), as shown in Fig. 3.18. If the edge were centered on one pixel and on its
adjacent neighbor four rows down, this situation would result in only four edge
positions in the data set, rather than the desired uniform distribution. Thus, we
want to avoid this possibility entirely by using a sufficiently small tilt that any
such repetition of data would be spaced by many rows. We should avoid
pronounced tilts for another reason—because we want to measure MTF in a
particular direction (which we assume to be horizontal in this discussion), and
using a slight tilt provides the closest approximation to that measurement, in
the context of the tilted knife-edge test.
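The re-registration of tilted-edge rows can be sketched as follows, assuming a known tilt of one pixel of horizontal shift over 64 rows (the Ref. 4 criterion); the soft-edge simulation and the function name are illustrative, not from the text:

```python
import numpy as np

def superresolve_esf(image, shift_per_row):
    """Merge tilted-edge rows into one finely sampled ESF.

    Row j sees the edge displaced by j*shift_per_row pixels, so its samples
    are re-registered at x = i - j*shift_per_row before merging and sorting."""
    rows, cols = image.shape
    x = np.concatenate([np.arange(cols) - j * shift_per_row for j in range(rows)])
    v = image.ravel()                 # row-major: matches the order of x
    order = np.argsort(x)
    return x[order], v[order]

# Simulated knife edge tilted by one pixel over 64 rows (the Ref. 4 criterion).
rows, cols, s = 64, 32, 1.0 / 64.0
img = np.empty((rows, cols))
for j in range(rows):
    img[j] = np.clip(np.arange(cols) - (cols / 2.0 + j * s), 0.0, 1.0)

x_hr, esf_hr = superresolve_esf(img, s)   # effective sampling: 1/64 pixel
```

Because each row's edge position steps by exactly 1/64 pixel, the merged data set is uniformly sampled at 1/64-pixel intervals — the fine, uniform distribution of edge positions the text calls for.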
3.9 Conclusion
There are several ways we can measure MTF using targets that have an
impulsive nature, each with positive aspects as well as drawbacks. For point
sources and slit sources, we need to consider the dimensions of the source
object. Edge sources are a versatile method that can measure MTF past the
Nyquist frequency of a sensor array. We can employ a variety of averaging
methods to increase signal-to-noise ratio, with appropriate caveats to ensure
high-fidelity data sets.
References
1. J. Gaskill, Linear Systems, Fourier Transforms, and Optics, John Wiley &
Sons, New York (1978).
2. B. Tatian, “Method for obtaining the transfer function from the edge
response function,” JOSA 55(8), 1014–1019 (1965).
3. R. Barakat, “Determination of the optical transfer function directly from
the edge spread function,” JOSA 55(10), 1217–1221 (1965).
4. S. E. Reichenbach, S. K. Park, and R. Narayanswamy, “Characterizing
digital image acquisition devices,” Opt. Eng. 30(2), 170–177 (1991)
[doi: 10.1117/12.55783].
Chapter 4
Square-Wave and Bar-Target
Measurement of MTF
Although MTF is defined in terms of the response of a system to sinusoids of
irradiance, we commonly use binary targets in practice because sinusoidal
targets require analog gray-level transmittance or reflectance. Fabricating
targets with analog transmittance or reflectance usually involves photographic
or lithographic processes with spatial resolutions much smaller than the
period of the sinusoid so that we can achieve an area-averaged reflectance or
transmittance. Sinusoidal targets used in MTF testing should have minimal
harmonic distortion so that they present a single spatial frequency to the
system under test. This is difficult to achieve in the fabrication processes.
Conversely, binary targets of either 1 or 0 transmittance or reflectance are
relatively easy to fabricate. We can fabricate binary targets for low spatial
frequencies by machining processes. For targets of higher spatial frequency,
we can use optical-lithography processes of modest resolution because the
metallic films required to produce the binary patterns are continuous on a
micro scale. In this chapter, we will first consider square-wave targets and
then three-bar and four-bar targets. Square-wave targets not only consist of
the fundamental spatial frequency, but also contain higher harmonic terms.
Bar targets contain both higher and lower harmonics of the fundamental.
Because of these harmonics, we must correct modulation-depth measurements
made with binary targets to produce MTF data, either with a series approach
or with digital filtering.
harmonics is 2/(πn), which decreases with order number n. For this case, the
amplitude at dc is 0.5, which is the average value of the transmittance waveform.
Of course, we never use an infinite square wave in practice. However, it is
quite feasible to fill the field of view of the system under test with square-wave
targets, using, for instance, a set of Ronchi rulings of appropriate dimension
located at the object plane. In this case, the delta-function spectrum of the infinite
square wave will be convolved with a function that represents the Fourier
transform of the field of view. For targets of high spatial frequency, the resultant
spectral broadening is negligible, and a series representation is still accurate. For
targets of lower spatial frequency, the bar spacing may be an appreciable fraction
of the field of view. In those cases, we may find that the frequency-domain width
of the broadened spectral line is significant compared to the harmonic spacing,
and a series representation is consequently less accurate.
For infinite square-wave targets, we can define a contrast transfer function
(CTF) as a function of the square-wave fundamental spatial frequency:
$$\mathrm{CTF}(\xi_f) = \frac{M_{\text{output}}(\xi_f)}{M_{\text{input, square wave}}(\xi_f)}. \qquad (4.1)$$
The CTF is not a transfer function in the true sense because it is not defined in
terms of sine waves. CTF cannot be cascaded with MTF curves without first
converting CTF to MTF. The modulation depth of the input square wave is usually
1 for all targets in the set, and for an infinite-square-wave target, the maxima of
irradiance A_max are all equal and the minima of irradiance A_min are all equal,
allowing an unambiguous calculation of the output modulation depth M_output as
$$M_{\text{output}} = \frac{A_{\max} - A_{\min}}{A_{\max} + A_{\min}}. \qquad (4.2)$$
We can express the modulation depth and, hence, the CTF at any frequency
as a two-sided summation of Fourier-series harmonic components. These
components are weighted by two multiplicative factors in the summation:
their relative strength in the input waveform and the MTF of the system under
test at each harmonic frequency. This process yields an expression1 for CTF in
terms of MTF:
$$\mathrm{CTF}(\xi_f) = \frac{4}{\pi}\left[\mathrm{MTF}(\xi = \xi_f) - \frac{\mathrm{MTF}(\xi = 3\xi_f)}{3} + \frac{\mathrm{MTF}(\xi = 5\xi_f)}{5} - \cdots\right] \qquad (4.3)$$
and inversely, for MTF in terms of CTF:
$$\mathrm{MTF}(\xi) = \frac{\pi}{4}\left[\mathrm{CTF}(\xi_f = \xi) + \frac{\mathrm{CTF}(\xi_f = 3\xi)}{3} - \frac{\mathrm{CTF}(\xi_f = 5\xi)}{5} + \cdots\right]. \qquad (4.4)$$
As defined in Eqs. (4.3) and (4.4), both CTF and MTF are normalized to
unity at zero spatial frequency. We can see from Eq. (4.4) that, if we want to use
the series conversion, calculating MTF at any particular spatial frequency
requires us to have CTF data at a series of frequencies that are harmonically
related to the frequency of interest. Typically, the procedure to accomplish this is
to measure the CTF for a sufficient number of fundamental frequencies (over a
range from low frequencies up to where the CTF is negligibly small) so that we
can interpolate a continuous curve between the measured values. This allows us
to find the CTFs at the frequencies needed for computing an MTF curve from
the CTF data.
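The series of Eq. (4.3) is straightforward to evaluate numerically. The sketch below uses the standard diffraction-limited MTF of a circular aperture as the system under test (the helper names and cutoff normalization are mine); it also exhibits the high-frequency bias discussed next, since near cutoff every harmonic of the fundamental lies beyond cutoff and CTF reduces to (4/π)·MTF:

```python
import numpy as np

def mtf_diffraction(xi):
    """Diffraction-limited MTF of a circular aperture, cutoff normalized to xi = 1."""
    x = np.minimum(np.abs(xi), 1.0)
    return (2 / np.pi) * (np.arccos(x) - x * np.sqrt(1 - x * x))

def ctf_from_mtf(mtf, xi_f, n_terms=500):
    """Eq. (4.3): CTF(xi_f) = (4/pi)[MTF(xi_f) - MTF(3 xi_f)/3 + MTF(5 xi_f)/5 - ...]."""
    total = 0.0
    for k in range(n_terms):
        n = 2 * k + 1                     # odd harmonics 1, 3, 5, ...
        total += (-1) ** k * mtf(n * xi_f) / n
    return (4 / np.pi) * total

# Near cutoff all harmonics vanish, so the ratio CTF/MTF approaches 4/pi.
ratio = ctf_from_mtf(mtf_diffraction, 0.8) / mtf_diffraction(0.8)
```

In a measurement context the callable `mtf` would be replaced by an interpolation through the measured CTF points, and Eq. (4.4) applied in the other direction, as the text describes.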
Owing to the higher harmonics present in a square wave, it is not accurate
to directly take the square-wave CTF measurements as MTF measurements.
A built-in bias makes the CTF higher at all frequencies than the
corresponding MTF. We can see this bias from the first term in the series
summation of Eq. (4.3). This first term, Eq. (4.5), is a high-frequency
approximation to the series because, for high fundamental spatial frequencies,
the MTF at the harmonic frequencies is low:

$$\mathrm{CTF}(\xi_f) \approx \frac{4}{\pi}\,\mathrm{MTF}(\xi = \xi_f). \qquad (4.5)$$
For lower fundamental frequencies, we must include more harmonic terms for
an accurate representation because the MTF at the harmonic frequencies is
higher, but the CTF always exceeds the MTF.
We compare plots of CTF and MTF in Fig. 4.2 for the case of a
diffraction-limited circular aperture. We calculated the CTF values directly
Figure 4.2 Comparison of MTF, CTF, three-bar IMD, and four-bar IMD for a diffraction-
limited circular-aperture system (reprinted from Ref. 2). [IMD is image modulation depth, as
defined in Eq. (4.6) in the next section.]
from the series in Eq. (4.3). The pronounced hump in the CTF near ξ/ξ_cutoff ≈
0.3 arises from the 4/π multiplier and from the fact that the third
harmonic, which adds into the series with a negative sign, is nearing the cutoff
frequency and therefore is not contributing negatively to the CTF. The small
oscillations in the CTF at lower frequencies arise from the fifth harmonic
(which adds with a positive sign) nearing cutoff and the seventh harmonic
(which adds with a negative sign) nearing cutoff.
$$\mathrm{IMD}(\xi_f) = \frac{M_{\text{output}}(\xi_f)}{M_{\text{input, bar target}}(\xi_f)}. \qquad (4.6)$$
This IMD does not equal the MTF at ξ_f because of the extra frequency
components at both higher and lower frequencies3 than ξ_f. These components
contribute to the IMD, biasing the measurement toward higher modulation
values than would be measured with a sinusoidal input. In practice, we
Figure 4.3 Binary (left) three-bar and (right) four-bar transmission targets.
Figure 4.4 Measurement of three-bar IMD from unequal bar data (adapted from Ref. 4).
Figure 4.5 Comparison of MTF, CTF, three-bar IMD, and four-bar IMD for a system with a
Gaussian MTF = exp{−2(ξ/ξ₀)²}. The CTF and the three-bar and four-bar IMD curves are
identical for this case (reprinted from Ref. 2).
Figure 4.6 Comparison of MTF, CTF, three-bar IMD, and four-bar IMD for a system with an
exponential MTF = exp{−2(ξ/ξ₀)} (reprinted from Ref. 2).
Figure 4.7 Comparison of MTF, CTF, three-bar IMD, and four-bar IMD for a diffraction-
limited, annular-aperture system with a 50% diameter obscuration (reprinted from Ref. 2).
For the Gaussian and exponential cases, we do not observe the mid-frequency hump of Fig. 4.2 with
the exponential MTF because there is no cutoff frequency at which a given
harmonic ceases to contribute to the series representing the CTF.
The situation is a bit different in Fig. 4.7, where we consider the case of an
obscured-aperture system. The MTF curve is not as smooth as in the case of
no obscuration, and the discontinuities in its derivative make the weighting of
the different terms of the series a more complicated function of frequency. The
CTF curve still exceeds the MTF curve. At some frequencies, the three-bar
and four-bar IMD curves are nearly identical to the CTF, and at other
frequencies, there is as much as a 10% difference in absolute modulation depth
or a 40% relative difference between the curves. So, in some cases, a bar-
target-to-MTF conversion using a series is not accurate. The examples shown
indicate that, if the MTF curve is a smooth, monotonic decreasing function,
the series conversion will be reasonably accurate. But in a measurement
context, we do not know the MTF curve beforehand, and we want to have a
procedure that is valid, in general, for conversion of bar-target data to MTF.
When digitized image data are available, we can perform a direct bar-
target-to-MTF conversion that makes no prior assumptions about the MTF
curve being measured. For either a three-bar or four-bar target with equal
bars and spaces, we know the magnitude spectrum of the input as a
mathematical function, for any fundamental frequency ξ_f:

$$S_{\text{input, 3-bar}}(\xi) = \frac{1}{\xi_f}\,\mathrm{sinc}\!\left(\frac{\xi}{2\xi_f}\right)\left[\cos\!\left(2\pi\frac{\xi}{\xi_f}\right) + \frac{1}{2}\right] \qquad (4.7)$$

$$S_{\text{input, 4-bar}}(\xi) = \frac{1}{\xi_f}\,\mathrm{sinc}\!\left(\frac{\xi}{2\xi_f}\right)\left[\cos\!\left(3\pi\frac{\xi}{\xi_f}\right) + \cos\!\left(\pi\frac{\xi}{\xi_f}\right)\right]. \qquad (4.8)$$
We take the absolute value of the Fourier transform of the digitized bar-target
image data to produce the output magnitude spectrum S_output(ξ). Figure 4.9
shows a measured three-bar-target magnitude spectrum and the corresponding
input spectrum S_input,bar-target(ξ), both normalized to unity at ξ = 0. We
perform the bar-target MTF calculation at the fundamental frequency of the
particular target being used. Note that the image-plane fundamental
frequency is not simply the frequency of the (ξ > 0) maximum of the
measured spectrum. The measured output spectrum has been filtered by the
system MTF. Because this MTF decreases with frequency, we see that the
peak of the output spectrum occurs at a slightly lower frequency than the
fundamental jf of the input target. The ratio of Eq. (4.9) is to be calculated at
the fundamental frequency. Without knowing the MTF curve, we cannot say
how much the peak was shifted. So, to determine the fundamental spatial
frequency of the input target, we use the first zero of the spectrum, the
location of which is not shifted by the MTF. In Eq. (4.7) describing the three-
bar target, the term in square brackets first goes to zero at
$$\xi_{\text{first-zero}} = \xi_f / 3, \qquad (4.10)$$
and in Eq. (4.8) describing the four-bar target, the term in square brackets first
goes to zero at
$$\xi_{\text{first-zero}} = \xi_f / 4. \qquad (4.11)$$
Figure 4.9 Measured three-bar-target magnitude spectrum S_output(ξ) (dotted curve) and
the corresponding calculated input spectrum S_input,bar-target(ξ) (solid curve) (reprinted from
Ref. 2).
to generate the MTF curve from digitized bar-target data without the need for
a series conversion. Thus, we still make the bar-target MTF measurement one
frequency at a time, but now without concern about the accuracy of a series
conversion because we use a digital-filtering technique to isolate ξ_f from the
continuous spectrum of the bar target.
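The procedure can be sketched numerically using the three-bar spectrum of Eq. (4.7): locate the first zero of the measured spectrum, recover the fundamental from Eq. (4.10), and take the output-to-input spectral ratio at ξ_f. The Gaussian system MTF, the noise-free construction, and all numerical values below are assumptions for illustration only:

```python
import numpy as np

def s_input_3bar(xi, xi_f):
    """Three-bar input magnitude spectrum, Eq. (4.7); np.sinc(x) = sin(pi x)/(pi x)."""
    return np.abs(np.sinc(xi / (2 * xi_f)) * (np.cos(2 * np.pi * xi / xi_f) + 0.5)) / xi_f

def first_zero_index(s):
    """Index of the first interior local minimum of a sampled magnitude spectrum."""
    interior = (s[1:-1] < s[:-2]) & (s[1:-1] <= s[2:])
    return int(np.argmax(interior)) + 1

xi_f_true = 30.0                        # cy/mm, target fundamental (assumed)
xi = np.linspace(0.01, 100.0, 20000)
mtf_true = np.exp(-(xi / 60.0) ** 2)    # system under test, assumed Gaussian

s_in = s_input_3bar(xi, xi_f_true)
s_out = s_in * mtf_true                 # noise-free "measured" output spectrum

xi_f = 3.0 * xi[first_zero_index(s_out)]     # Eq. (4.10): first zero at xi_f / 3
k = int(np.argmin(np.abs(xi - xi_f)))
mtf_at_fundamental = s_out[k] / s_in[k]      # output/input ratio at xi_f
```

Note that the first-zero location is used to find ξ_f precisely because, unlike the spectral peak, it is not shifted by the system MTF.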
From Fig. 4.9, we can see that there is more information present than just
at the fundamental frequency, and it is tempting to try to extend the range of
the measurement beyond the fundamental. We found experimentally that
accuracy suffered when we tried to extend the frequency range of the
measurement on either side of the fundamental. The spectral information is
strongly peaked, and dividing the output curve by the input curve tends to
emphasize any noise present in the measured data because the input decreases
so rapidly on either side of the peak. However, it may be possible to use the
lower-frequency subsidiary maxima seen in Fig. 4.9 to at least get another
measurement frequency for any given target, assuming a good signal-to-noise
ratio. Higher-frequency data will have generally been attenuated by the MTF
to a degree such that taking the ratio of Eq. (4.9) will not produce results of
good quality.
4.3 Conclusion
We often use bar targets in MTF measurements. It is important to realize that
we must correct modulation depth measurements made with bar targets to
produce MTF data, either with a series approach or with digital filtering. If
the MTF curve of the system under test is relatively smooth, the agreement
between CTF and bar-target data is often quite close. If the MTF curve of the
References
1. J. W. Coltman, “The specification of imaging properties by response to a
sine wave input,” JOSA 44(6), 468–471 (1954).
2. G. D. Boreman and S. Yang, “Modulation transfer function measurement
using three- and four-bar targets,” Appl. Opt. 34(34), 8050–8052 (1995).
3. D. H. Kelly, “Spatial frequency, bandwidth, and resolution,” Appl. Opt.
4(4), 435–437 (1965).
4. I. de Kernier, A. Ali-Cherif, N. Rongeat, O. Cioni, S. Morales, J. Savatier,
S. Monneret, and P. Blandin, “Large field-of-view phase and fluorescence
mesoscope with microscopic resolution,” J. Biomed. Opt. 24(3),
036501 (2019) [doi: 10.1117/1.JBO.24.036501].
Chapter 5
Noise-Target Measurement
of MTF
Measurement of a system’s transfer function by means of its response to
random-noise inputs has long been a standard procedure in time-domain
systems.1 If the input is white noise, which contains equal amounts of all
frequencies, the action of the transfer function is to impart a nonuniformity of
frequency content that can be assessed at the output of the system by means of
a Fourier analysis. This concept has not historically been employed in the
measurement of optical systems, with the exception of an initial demonstra-
tion2 using (non-white) film-grain noise.
Noise-like targets of known spatial-frequency content are useful for MTF
testing, particularly for spatially sampled systems such as detector-array
image receivers. Noise targets have a random position of the image data with
respect to sampling sites in the detector array and measure a shift-invariant
MTF that inherently includes the sampling MTF. Noise targets measure the
MTF according to
$$\mathrm{MTF}(\xi) = \left[\frac{\mathrm{PSD}_{\text{image}}(\xi)}{\mathrm{PSD}_{\text{object}}(\xi)}\right]^{1/2}, \qquad (5.1)$$
where PSD denotes power spectral density, defined as the ensemble average of
the square of the Fourier transform of object or image data. The PSD is a
measure of spatial-frequency content for random targets or random images.
We calculate the output PSD from the image data. Generally, we calculate the
finite-length Fourier transform of a row of image data and square the result.
This is an estimate of the PSD, but because the calculation is performed on a
data record of finite length, there is noise in the estimate. When we perform
this operation on other rows of image data, we generate other PSD estimates.
Averaging over these additional estimates gives a more accurate estimation3
of the PSD of the underlying random process. Noise targets usually measure
the MTF averaged over a system’s whole field of view. However, we can
calculate PSDs from various subregions of the image. If we use smaller data
sets in this way, we will likely need to average over additional independent
data sets to obtain PSD estimates of sufficient signal-to-noise ratio.
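The row-averaged PSD estimate can be sketched as follows. This is a minimal illustration with a synthetic white-noise image standing in for measured data; the function name and array sizes are my assumptions:

```python
import numpy as np

def psd_rows(image, dx=1.0):
    """Average the squared finite-length Fourier transform over image rows.

    Each row gives one noisy periodogram estimate of the PSD; averaging
    over the rows reduces the variance of the estimate."""
    rows = image - image.mean(axis=1, keepdims=True)   # remove the dc pedestal
    periodograms = np.abs(np.fft.rfft(rows, axis=1)) ** 2
    return np.fft.rfftfreq(image.shape[1], d=dx), periodograms.mean(axis=0)

rng = np.random.default_rng(0)
white = rng.standard_normal((512, 256))   # synthetic white-noise "image"
freqs, psd = psd_rows(white)              # flat to within the estimate noise
```

A single-row periodogram here fluctuates with roughly 100% relative variance; averaging over the 512 rows reduces the bin-to-bin fluctuation by about a factor of sqrt(512), which is the SNR improvement the text describes.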
Accuracy of an MTF measurement using noise targets depends critically
on how well we know the input PSD. This is a central consideration in the
design of the specific measurement apparatus to be used. The two main
methods we use for generating noise targets are laser speckle and random
transparencies. We generally use the laser-speckle method to measure the
MTF of a detector array alone because the method relies on diffraction to
deliver the random irradiance pattern to the receiver, without the need for an
intervening optical system. Because a transparency must be imaged onto the
detector array using optical elements, we find the random-transparency
method to be more convenient for MTF measurement of a complete imager
system, including both the detector array and the fore-optics. In both cases,
we can generate a variety of input PSDs, depending on the specific
instrumentation details.
Figure 5.2 Relationship between the frequency content of a speckle pattern and a
diffraction pattern, given a single aperture of width W.
Figure 5.3 Laser-speckle setup for MTF tests, using an integrating sphere to illuminate the
aperture with phase-randomized laser light.
Figure 5.5 (left) Typical narrowband laser-speckle pattern and (right) the corresponding
image PSD plot in which both aliased and non-aliased PSDs are shown (reprinted from
Ref. 10).
Figure 5.6 MTF results from a dual-slit speckle measurement of a detector array, showing
measured data beyond the Nyquist frequency (reprinted from Ref. 10).
Figure 5.7 (left) Slanted-dual-slit aperture and (right) its j-direction PSD (reprinted from
Ref. 10).
Figure 5.8 (left) Two-dimensional PSD plot and (right) a speckle pattern from the slanted-
dual-slit aperture of Fig. 5.7 (reprinted from Ref. 10).
Figure 5.9 (left) 45-deg cross aperture and (right) its j-direction PSD (reprinted from Ref. 11).
Figure 5.10 (left) Two-dimensional PSD and (right) speckle pattern from the 45-deg cross
aperture of Fig. 5.9 (reprinted from Ref. 11).
used to yield the MTF. Figure 5.11 shows the results of this measurement. The
Nyquist frequency of the tested array was 107 cy/mm. The discontinuity in the
MTF plot near Nyquist resulted from the absence of data in that vicinity
because of the finite width of the baseband feature in the PSD.
Figure 5.11 Speckle MTF measurement to twice the Nyquist frequency of a detector array,
using the aperture of Fig. 5.9. ξ_Nyquist ≈ 107 cy/mm (adapted from Ref. 11).
Figure 5.12 Setup for the random-transparency MTF test (adapted from Ref. 12).
number of pixels in a row of the detector array. We repeat this for the M rows
of the detector array so that we have an N × M matrix of uncorrelated
random numbers. If we render these data values as square contiguous pixels of
transmittance values on the substrate at the desired spacing Dxobj, we will
have spatial white noise bandlimited to the desired spatial frequency. We see
an example of this type of pattern in Fig. 5.13, along with a PSD computed
from the N random numbers for one line of data. Because the image data is
random, there are fluctuations in the PSD. This noise would average out if we
used more lines of data to compute the PSD, but the single-line PSD estimate
shown demonstrates the white-noise characteristic of the transparency.
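The pattern-generation step can be sketched as follows, where N, M, and the rendering factor are assumed values; `np.kron` replicates each random transmittance value into a square block of contiguous rendered pixels:

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 256, 128        # pixels per row, rows (assumed to match a detector array)
render = 4             # rendered dots per target pixel (assumed)

# Uncorrelated transmittance values in [0, 1]; once rendered as square
# contiguous pixels of size dx_obj, the pattern is spatial white noise
# bandlimited to 1/(2 * dx_obj).
values = rng.uniform(0.0, 1.0, size=(M, N))
pattern = np.kron(values, np.ones((render, render)))
```

Each target pixel becomes a `render` × `render` block of constant transmittance, so the rendered pattern keeps the white, bandlimited statistics of the underlying N × M random matrix.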
In situations where the required spatial resolution of the pattern is well
within the capabilities of the system that generates the transparency, it may be
acceptable to confirm the input PSD in this manner directly from the pattern
Figure 5.13 (left) Bandlimited white-noise pattern and (right) a sample of the PSD
computed from the pattern (reprinted from Ref. 12).
Figure 5.14 (Upper left) A small region of design data for the IR transparency; (upper right)
microscopic image of a small region of the as-fabricated transparency (pixel size 46 μm);
(lower) the input PSD calculated from a microscopic image of the as-fabricated transparency,
where the smooth curve is a fourth-order polynomial fit (reprinted from Ref. 12).
Figure 5.15 MTF measurement results for bandlimited white-noise pattern (reprinted from
Ref. 12).
Figure 5.15 shows MTF measurement results for the pattern in Fig. 5.13.
In the left figure, the dots are data points from the random-transparency MTF
technique, and the solid line is a fourth-order polynomial fit to the data. The
dashed lines are measured data using a line-response method. The upper
dashed curve corresponds to the situation where the image of the line source is
centered on a column of the detector array; the lower MTF curve corresponds
to the situation where the line image is centered between the columns.
Individual data points are the result of particular spatial-frequency
components in the image, which have random positions with respect to the
photosite locations. The data points thus fall between the maximum and
minimum LSF-derived MTF curves, and the fitted curve falls midway
between the two line-response MTF curves. In the right figure of Fig. 5.15, we
compare the fourth-order data fit seen in the left figure to an average line-
response MTF, which was measured as follows. The line source was moved
randomly, and 15 LSF-derived MTFs were calculated and averaged. The
comparison between the random-transparency MTF (solid curve) and the
average LSF-derived MTF (dotted curve) shows excellent agreement,
consistent with the analysis of Park et al.,14 where a shift-invariant MTF is
calculated as the average over all possible test-target positions. We confirmed
the repeatability of the random-transparency method by making in-plane
translations of the transparency and comparing the resulting MTFs. We
found a variation in MTF of less than 2%. Thus, we demonstrated the
random-transparency method to be shift invariant. This shift invariance,
which applies to all noise-target methods, relaxes alignment requirements in
the test procedure, as compared to methods not employing noise targets,
where the test procedure generally requires fine positional adjustment of
components to achieve the best response.
Other PSD dependences are possible. For example, using the frequency-
domain filtering process seen in Fig. 5.16, we can modify white-noise data to
yield a pattern with several discrete spatial frequencies of equal PSD
magnitude. Figure 5.17 shows a resulting random pattern, along with the
frequency filtering function we used. The PSD is no longer white, resulting in
a discernable inter-pixel correlation. We show the MTF measurements in
Fig. 5.18 by comparing the MTF results from the random discrete-frequency
pattern and the white-noise pattern. We also show the amplitude spectrum of
Figure 5.16 Generation process of a discrete-frequency pattern (adapted from Ref. 12).
Figure 5.17 (left) Random discrete-frequency pattern and (right) its corresponding filter
function (reprinted from Ref. 12).
the system noise, which we measured by taking the magnitude of the FFT of
the array data, averaged over rows, for a uniform irradiance input equal to the
average value of the discrete-frequency target image. The discrete-frequency
method allows a single-frame measurement of both MTF and spatial noise at
several discrete spatial frequencies.
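The frequency-domain filtering of Fig. 5.16 can be sketched in one dimension as follows. The retained frequency bins are assumed values; the random phase of the white noise is kept while the magnitudes are equalized, so the retained tones have equal PSD:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 256
keep = np.array([8, 16, 32, 64])     # retained frequency bins (assumed)

spec = np.fft.rfft(rng.standard_normal(n))
mask = np.zeros_like(spec)
mask[keep] = spec[keep] / np.abs(spec[keep])   # random phase, unit magnitude
pattern_1d = np.fft.irfft(mask, n=n)           # discrete-frequency noise pattern
```

The inverse transform is a sum of a few cosines with random phases, which is why the rendered pattern in Fig. 5.17 shows a discernible inter-pixel correlation rather than white-noise structure.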
5.3 Conclusion
Random-noise targets are useful for measuring a shift-invariant MTF. The two
primary methods for generating these targets are laser speckle and transparen-
cies. We generally use laser speckle to test detector arrays because no
intervening optical elements are required. If the input noise PSD is narrowband,
we can use laser speckle to measure MTF past the Nyquist frequency of the
detector array. We generally use transparencies to test camera systems
consisting of fore-optics and a detector array. In both cases, we can generate
a variety of input PSDs, depending on the specifics of the instrument design.
References
1. A. Papoulis, Probability, Random Variables, and Stochastic Processes,
McGraw-Hill, New York, pp. 346–350 (1965).
2. H. Kubota and H. Ohzu, “Method of measurement of response function
by means of random chart,” JOSA 47(7), 666–667 (1957).
$$\mathrm{MTF}(\xi) = \frac{\sin(\pi \xi w)}{\pi \xi w}. \qquad (6.1)$$
the impulse response of the system under test. This allows us to measure the PSF
irradiance distribution at the plane of the ground glass without regard for whether
incident rays at all angles would be captured by the finite numerical aperture of
the microscope objective. The objective and CCD combination should be first
focused on the ground glass, and the whole assembly should allow precise axial
motion to find the best focus of the impulse response being measured. The
objective and FPA combination should have a three-axis micropositioner,
allowing motion perpendicular to the optic axis, to allow for centering the PSF in
the field of the camera and measurement of the impulse response width.
We can obtain a quick estimate of the width of the blur spot by
positioning the center of the image at one side of the blur spot and noting the
amount of cross-axial micropositioner motion required to place the center of
the image on the other side of the blur spot. Of course, the blur spot is not a
uniform irradiance distribution, so there is some arbitrariness in assessing
the spot width in this manner. Nevertheless, we can obtain a
back-of-the-envelope estimate for the MTF using that estimate of w. When we
compare Fig. 6.1 [using the manual measurement of w in Eq. (6.1)] to the
computer-calculated MTF, the results should be reasonably close in
magnitude (if not in actual functional form). If we do not find a suitable
correspondence, we should re-examine the assumptions made in the computer
calculation before certifying the final measurement results. Common errors
Practical Measurement Issues 115
Figure 6.3 Cascade of MTFs of a lens system (curve a) and a detector (curve b) produces
the product MTF (curve c).
A relay lens pair with a diffusing screen at the intermediate image plane is
another case where we can cascade MTFs by means of a point-by-point
multiplication (Fig. 6.4). The exitance [W/cm2] on the output side of the
diffuser is proportional to the irradiance [W/cm2] at the input face. Any point-
to-point phase correlation in the intermediate image is lost in this process. The
diffuser forces the two systems to interact independently, regardless of their
individual state of correction. The two relay stages cannot compensate for the
aberrations of the other because of the phase-randomizing properties of the
diffuser. The MTFs of each stage multiply, and the product MTF is
correspondingly lower. This relay-lens example is a bit contrived because we
typically do not have a diffuser at an intermediate image plane (from a
radiometric point of view, as well as for image-quality reasons), but it
illustrates the MTF multiplication rule by presenting the second stage with an
incoherent irradiance image formed by the first stage.
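The point-by-point multiplication of Fig. 6.4 translates directly into array operations. The two stage MTFs below are hypothetical analytic stand-ins sampled on a common spatial-frequency grid:

```python
import numpy as np

# Sample both stage MTFs on a common spatial-frequency grid (cycles/mm)
xi = np.linspace(0.0, 60.0, 7)

# Hypothetical stand-ins for the two relay stages of Fig. 6.4
mtf_stage1 = np.exp(-(xi / 40.0) ** 2)            # Gaussian-like rolloff
mtf_stage2 = np.clip(1.0 - xi / 80.0, 0.0, 1.0)   # linear rolloff

# With a diffuser at the intermediate image plane the stages act
# independently, and the cascade is a point-by-point product.
mtf_product = mtf_stage1 * mtf_stage2

# The product MTF is never higher than the weaker stage at any frequency.
assert np.all(mtf_product <= np.minimum(mtf_stage1, mtf_stage2) + 1e-12)
```

The final assertion states the multiplication rule's consequence: the cascaded MTF is correspondingly lower than either stage alone.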
Figure 6.5 illustrates a case in which cascading MTFs will not work. This
is a two-lens combination where the second lens balances the spherical
aberration of the first lens. Neither lens is well-corrected by itself, as seen by
the poor individual MTFs. The system MTF is higher than either of the individual MTFs.
Figure 6.4 Relay-lens pair with a diffuser at the intermediate image plane. The MTF of
each stage is simply multiplied.
Figure 6.5 Pair of relay lenses for which the MTFs do not cascade.
MTF1,meas(ξ) = MTF1,geom(ξ) · MTF1,diff(ξ). (6.2)

MTF2,meas(ξ) = MTF2,geom(ξ) · MTF2,diff(ξ) (6.3)

and

MTF2,geom(ξ) = MTF2,meas(ξ) / MTF2,diff(ξ). (6.4)
The only diffraction MTF that contributes to the calculation of the total MTF
is that of subsystem #1, so for the total system MTF, we have
MTFtotal(ξ) = MTF1,diff(ξ) · MTF1,geom(ξ) · MTF2,geom(ξ). (6.5)
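Equations (6.2)–(6.5) evaluate directly as array operations. In the sketch below the measured and diffraction MTF curves are hypothetical; the division of Eq. (6.4) is guarded against near-zero diffraction-MTF values, which would otherwise amplify measurement noise near cutoff:

```python
import numpy as np

def geometrical_mtf(mtf_meas, mtf_diff, floor=1e-3):
    """Recover a geometrical MTF via Eq. (6.4):
    MTF_geom(xi) = MTF_meas(xi) / MTF_diff(xi).

    Frequencies where the diffraction MTF falls below `floor` are
    masked (NaN) rather than divided, to avoid blowing up noise
    near the diffraction cutoff.
    """
    mtf_meas = np.asarray(mtf_meas, dtype=float)
    mtf_diff = np.asarray(mtf_diff, dtype=float)
    out = np.full_like(mtf_meas, np.nan)
    ok = mtf_diff > floor
    out[ok] = mtf_meas[ok] / mtf_diff[ok]
    return out

# Hypothetical curves on a common grid (cycles/mm)
xi = np.linspace(0.0, 100.0, 11)
mtf_diff_1 = np.clip(1.0 - xi / 100.0, 0.0, 1.0)   # subsystem #1 diffraction
mtf_geom_1 = np.exp(-(xi / 60.0) ** 2)             # subsystem #1 geometrical
mtf_diff_2 = np.clip(1.0 - xi / 120.0, 0.0, 1.0)   # subsystem #2 diffraction
mtf_meas_2 = np.exp(-(xi / 50.0) ** 2) * mtf_diff_2

# Eq. (6.4): divide subsystem #2's diffraction contribution back out
mtf_geom_2 = geometrical_mtf(mtf_meas_2, mtf_diff_2)

# Eq. (6.5): only subsystem #1 contributes a diffraction MTF
mtf_total = mtf_diff_1 * mtf_geom_1 * mtf_geom_2
```

The `floor` guard is the practical counterpart of the division-by-zero issue that arises whenever a measured MTF is corrected in the frequency domain.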
Figure 6.7 MTF test where the source is directly imaged by the unit under test.
The other PSF terms should be known and should be much narrower than the
PSF of the UUT so that we can divide them out in the frequency domain
using Eq. (1.9) without limiting the frequency range of the MTF test.
Figure 6.13 The object generator for the visible square-wave test.
The object generator is backlit and, as seen in Fig. 6.14, is placed at the
focus of a collimator and re-imaged by the lens under test. A long, narrow
detector is placed perpendicular to the orientation of the slit image, and the
moving square waves pass across the detector in the direction of its narrow
dimension. The analog voltage waveform output from the sensor is essentially
the time-domain square-wave response of the system to whatever spatial
frequency the object generator produced. To avoid the necessity of using the
series conversion from CTF to MTF, a tunable-narrowband analog filter is
used, eliminating all harmonics and allowing only the fundamental frequency
of the square wave to pass through. The center frequency of the filter is
retuned whenever the object frequency is changed. The
waveform after filtering is sinusoidal at the test frequency, and the modulation
depth is measured directly from this waveform. Filtering the waveform
electronically requires the electronic frequency of the fundamental to stay well
above the 1/f-noise region (1 kHz and below), which implies a fast rotation of
the object-plane radial grating. This necessitates a relatively high-power
optical source to back-illuminate the object generator because the detector has
to operate with a short integration time to acquire the quickly moving square
waves. This design works well in the visible but is not suitable in the
infrared, where the SNR is lower because of the lower source
temperatures used in that band. Compensating for the low SNR would
require a longer integration time. Lowering the speed of the rotating grating
would put the electronic signal into the 1/f-noise region.
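The rotation-speed constraint can be made concrete. The numbers below are hypothetical: for a radial grating presenting N bar pairs per revolution, the electrical fundamental is N times the rotation rate, and it must sit well above the roughly 1-kHz 1/f-noise region:

```python
def fundamental_frequency_hz(bar_pairs_per_rev, rev_per_s):
    """Electrical fundamental of the square wave from a rotating radial
    grating: cycles per revolution times revolutions per second."""
    return bar_pairs_per_rev * rev_per_s

# Hypothetical grating: 60 bar pairs around the disk
n_pairs = 60

for rps in (5.0, 20.0, 60.0):
    f0 = fundamental_frequency_hz(n_pairs, rps)
    region = "above" if f0 > 1000.0 else "within"
    # 5 rev/s leaves the signal inside the 1/f region; faster
    # rotation clears it but shortens the detector integration time.
    print(f"{rps:5.1f} rev/s -> fundamental {f0:7.1f} Hz ({region} 1/f region)")
```

This is the tradeoff described above: a fast grating keeps the signal out of the 1/f region at the cost of requiring a brighter source.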
6.9 Conclusion
Measurement of MTF requires attention to various issues that can affect the
integrity of the data set. Most importantly, the quality of any auxiliary optics
must be known and accounted for. The optics other than the unit under test
should not significantly limit the spatial frequency range of the test. The
positional repeatability of the micropositioners used is critical to obtaining
high-quality data. We should take care to ensure that the coherence of the
source does not introduce interference artifacts into the image data that could
bias the MTF computation. It is also important that we carefully consider the
issue of low-frequency normalization because this affects the MTF value at all
frequencies. Computers are ubiquitous in today’s measurement apparatus. We
should make sure that all default settings in the associated software are
consistent with the actual measurement conditions.
References
1. Prof. Michael Nofziger, University of Arizona, personal communication.
2. J. B. DeVelis and G. B. Parrent, “Transfer function for cascaded optical
systems,” JOSA 57(12), 1486–1490 (1967).
3. T. L. Alexander, G. D. Boreman, A. D. Ducharme, and R. J. Rapp,
“Point-spread function and MTF characterization of the kinetic-kill-vehicle
hardware-in-the-loop simulation (KHILS) infrared-laser scene projector,”
Proc. SPIE 1969, pp. 270–284 (1993) [doi: 10.1117/12.154720].
4. L. Baker, “Automatic recording instrument for measuring optical transfer
function,” Japanese J. Appl. Physics 4(suppl. 1), 146–152 (1965).
Further Reading
Baker, L., Selected Papers on Optical Transfer Function: Measurement, SPIE
Milestone Series, Vol. MS 59, SPIE Press, Bellingham, Washington
(1992).
Chapter 7
Other MTF Contributions
We now consider the MTF contributions arising from image motion, image
vibration, atmospheric turbulence, and aerosol scattering. We present a
first-order analysis of these additional contributions to the system MTF. Our
heuristic approach provides a back-of-the-envelope estimate for the image-
quality impact of these effects, and a starting point for more advanced
analyses.
Figure 7.1 Linear motion blur is the product of image velocity and exposure time.
MTFalong-motion(ξ) = sin(πξ vimg te) / (πξ vimg te). (7.2)
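Equation (7.2) evaluates directly; the image velocity and exposure time below are hypothetical:

```python
import numpy as np

def mtf_along_motion(xi, v_img, t_e):
    """Eq. (7.2): MTF(xi) = sin(pi*xi*v_img*t_e) / (pi*xi*v_img*t_e),
    the transform of a rect-shaped blur of length d = v_img * t_e."""
    arg = np.pi * np.asarray(xi, dtype=float) * v_img * t_e
    out = np.ones_like(arg)
    nz = arg != 0.0
    out[nz] = np.sin(arg[nz]) / arg[nz]
    return out

# Hypothetical case: 2-mm/s image velocity, 10-ms exposure.
# Blur length d = 0.02 mm, so the first MTF zero is at 1/d = 50 cycles/mm.
v_img, t_e = 2.0, 0.010
xi = np.linspace(0.0, 50.0, 6)
print(np.round(mtf_along_motion(xi, v_img, t_e), 3))
```

The first zero at ξ = 1/(vimg te) is the quickest sanity check on a motion-blur budget.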
residence time of the object near those locations and thus a higher probability
of finding the object near the peaks of the sinusoidal motion.
If the sinusoidal object motion has amplitude D, the total width of h(x) is
2D, assuming unit magnification of the optics. There is zero probability of the
geometrical image point being found outside of this range, which leads to the
impulse response depicted in Fig. 7.3. If we take the Fourier transform of this
h(x), we obtain the corresponding vibration MTF seen in Fig. 7.4.1
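This transform pair can be sketched numerically, assuming the standard result that the arcsine-shaped h(x) of Fig. 7.3 Fourier-transforms to a zero-order Bessel function; the vibration amplitude below is hypothetical:

```python
import numpy as np
from scipy.special import j0

def mtf_hf_vibration(xi, D):
    """High-frequency sinusoidal-vibration MTF.

    Assumes the standard result that the arcsine-distributed impulse
    response h(x) = 1 / (pi * sqrt(D**2 - x**2)), |x| < D, transforms
    to MTF(xi) = |J0(2*pi*xi*D)|.
    """
    return np.abs(j0(2.0 * np.pi * np.asarray(xi, dtype=float) * D))

# Hypothetical vibration amplitude D = 5 um = 0.005 mm.
# The first MTF zero falls where 2*pi*xi*D = 2.4048,
# i.e., near xi = 76.5 cycles/mm for this D.
D = 0.005
xi = np.array([0.0, 20.0, 50.0, 76.5])
print(np.round(mtf_hf_vibration(xi, D), 3))
```

Unlike the sinc of Eq. (7.2), the Bessel form reflects the longer residence time near the extremes of the sinusoidal motion.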
For low-frequency sinusoidal vibrations, the image quality depends on
whether the image-data acquisition occurs near the origin or near the extreme
points of the object movement. As stated earlier, the velocity slows near
the extreme points and is at its maximum near the center of the motion. In the
case of low-frequency sinusoidal vibrations, we must perform a more detailed
analysis to predict the number of exposures required to get a single lucky shot
where there is no more than a prescribed degree of motion blur.2
of the light incident on the phase screen passes through unscattered. As the
phase variance increases, more light will be spread into the diffuse halo of the
impulse response seen in Fig. 7.6, and the MTF will roll off at low frequencies.
For all of the curves in Fig. 7.7, the MTF is flat at high frequencies. The
atmospheric turbulence MTF is only one component of the MTF product,
and the high-frequency rolloff typically seen for overall system MTF will be
caused by some other MTF component.
For a large phase variance, the turbulence MTF of Eq. (7.3) reduces to a
Gaussian form:
MTF(ξ) = exp[−σ²(λξ/w)²], (7.4)
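The Gaussian limit of Eq. (7.4) evaluates as below; the phase variance σ² and the normalized-frequency grid are hypothetical values chosen only to show the rolloff behavior:

```python
import numpy as np

def turbulence_mtf_large_variance(xi, sigma2, lam, w):
    """Gaussian (large-phase-variance) limit of the turbulence MTF,
    Eq. (7.4): MTF(xi) = exp(-sigma2 * (lam*xi/w)**2), where sigma2 is
    the phase variance and w the correlation width of the phase screen."""
    return np.exp(-sigma2 * (lam * np.asarray(xi, dtype=float) / w) ** 2)

# Work in the normalized frequency u = lam*xi/w (set lam = w = 1):
u = np.linspace(0.0, 2.0, 5)
for sigma2 in (1.0, 4.0):   # larger phase variance -> faster rolloff
    print(sigma2, np.round(turbulence_mtf_large_variance(u, sigma2, 1.0, 1.0), 4))
```

Increasing σ² pulls the Gaussian down at every nonzero frequency, consistent with more light being scattered into the diffuse halo.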
Figure 7.8 Comparison of field imagery and turbulence simulations (adapted from Ref. 4).
Figure 7.10 Increasing the propagation path decreases the aerosol-scattering MTF at all
frequencies.
Figure 7.11 Increasing the scattering coefficient decreases the aerosol-scattering MTF at
all frequencies.
ξt = a/λ (7.8)
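Equation (7.8) gives a quick number; the particle radius and wavelength below are hypothetical:

```python
def aerosol_cutoff(a, lam):
    """Eq. (7.8): xi_t = a / lam. With the particle radius a and the
    wavelength lam in the same length units, the ratio is an angular
    spatial frequency in cycles per radian."""
    return a / lam

# Hypothetical 5-um-radius aerosol particle viewed at 0.5-um wavelength
xi_t = aerosol_cutoff(5.0, 0.5)
print(xi_t)  # -> 10.0
```

Larger particles or shorter wavelengths push this transition frequency higher.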
7.5 Conclusion
This chapter provided a brief consideration of MTF contributions from
motion, vibration, turbulence, and aerosols. The treatment is intended as a
first-order approach to a complex topic. Additional information can be found
in the references and further-reading list.
References
1. O. Hadar, I. Dror, and N. S. Kopeika, “Image resolution limits resulting
from mechanical vibrations. Part IV: real-time numerical calculation of
optical transfer functions and experimental verification,” Opt. Eng. 33(2),
566–578 (1994) [doi: 10.1117/12.153186].
2. D. Wulich and N. S. Kopeika, “Image resolution limits resulting from
mechanical vibrations,” Opt. Eng. 26(6), 529–533 (1987) [doi: 10.1117/12.
7974110].
3. J. W. Goodman, Statistical Optics, John Wiley and Sons, New York
(1985).
4. K. J. Miller, B. Preece, T. W. Du Bosq, and K. R. Leonard, “A data-
constrained algorithm for the emulation of long-range turbulence-degraded
video,” Proc. SPIE 11001, 110010J (2019) [doi: 10.1117/12.2519069].
5. Y. Kuga and A. Ishimaru, “Modulation transfer function and image
transmission through randomly distributed spherical particles,” JOSA A
2(12), 2330–2336 (1985).
6. D. Sadot and N. S. Kopeika, “Imaging through the atmosphere: practical
instrumentation-based theory and verification of aerosol MTF,” JOSA A
10(1), 172–179 (1993).
Further Reading
Andrews, L., and R. Phillips, Laser Beam Propagation through Random
Media, Second Edition, SPIE Press, Bellingham, Washington (2005) [doi:
10.1117/3.626196].
Bohren, C. F., and D. R. Huffman, Absorption and Scattering of Light by
Small Particles, Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim
(1998).
Ishimaru, A., Wave Propagation and Scattering in Random Media, Wiley-
IEEE Press (1997).
Kopeika, N., A System Engineering Approach to Imaging, SPIE Press,
Bellingham, Washington, Chapters 14, 16, and 17 (1998) [doi: 10.1117/3.
2265069].
Zege, E. P., A. P. Ivanov, and I. L. Katsev, Image Transfer Through a
Scattering Medium, Springer-Verlag, Berlin (1991).
Index
A
aberrations, 24, 27, 29, 33
aerosol scattering, 135
aliasing, 44, 90
angular spatial frequency, 6, 24
astigmatism, 30
atmospheric turbulence, 132
autocorrelation, 21, 25, 99
auxiliary optics, 118

B
bar target, 126
bar-target-to-MTF conversion, 92
birefringent filters, 48
boost filter, 62

C
charge-carrier diffusion, 61
charge-transfer inefficiency, 60
coherence effects, 120
coma, 30
contrast transfer function (CTF), 86
convolution theorem, 7, 41
critical spatial frequencies, 18
crosstalk MTF, 59
cutoff frequency, 24, 47, 105

D
defocus, 12, 29
detection, recognition, and identification, 18
detector arrays, 39
detector footprint, 41
diffraction, 18
diffraction-limited MTF, 22
diffraction MTF, 25, 117
division by zero, 79

E
edge-spread function (ESF), 70, 74, 123
electro-optical systems, 39
electronic networks, 61
electronics noise, 64, 79

F
fiber bundles, 58
finite source size, 78
flat field, 8, 121
focal-plane array (FPA), 39, 51
four-bar target, 89
frame grabbers, 61

G
geometrical MTF, 28, 117
ground glass, 113–114
ground-glass diffuser, 100, 116, 120

I
image modulation depth (IMD), 88
impulse response, 1
instantaneous field of view (IFOV), 55
integrating spheres, 99–100, 104
interlacing, 53

J
Johnson criteria, 18