Glenn D. Boreman, Modulation Transfer Function in Optical and Electro-Optical Systems, Second Edition, Tutorial Texts in Optical Engineering TT121, SPIE Press (2021)


Modulation Transfer Function in Optical and Electro-Optical Systems
Second Edition

Glenn D. Boreman
This second edition, which has been significantly expanded since the 2001 edition, introduces the theory and applications of the modulation transfer function (MTF) used for specifying the image quality achieved by an imaging system. The book begins by explaining the relationship between impulse response and transfer function, and the implications of a convolutional representation of the imaging process. Optical systems are considered first, including the effects of diffraction and aberrations on the image, with attention to aperture and field dependences. Then electro-optical systems with focal-plane arrays are considered, with an expanded discussion of image-quality aspects unique to these systems, including finite sensor size, shift invariance, sampling MTF, aliasing artifacts, crosstalk, and electronics noise. Various test configurations are then compared in detail, considering the advantages and disadvantages of point-response, line-response, and edge-response measurements. The impact of finite source size on the measurement data and its correction are discussed, and an extended discussion of the practical aspects of the tilted-knife-edge test is presented. New chapters are included on speckle-based and transparency-based noise targets, and square-wave and bar-target measurements. A range of practical measurement issues are then considered, including mitigation of source coherence, combining MTF measurements of separate subsystems, quality requirements of auxiliary optics, and low-frequency normalization. Some generic measurement-instrument designs are compared, and the book closes with a brief consideration of the MTF impacts of motion, vibration, turbulence, and aerosol scattering.

Glenn D. Boreman

P.O. Box 10
Bellingham, WA 98227-0010
ISBN: 9781510639379
SPIE Vol. No.: TT121
Tutorial Texts Series: Related Titles
• Aberration Theory Made Simple, Second Edition, Virendra N. Mahajan, TT93
• Analysis and Evaluation of Sampled Imaging Systems, Richard H. Vollmerhausen,
Donald A. Reago, and Ronald Driggers, TT87
• Introduction to Optical Testing, Joseph M. Geary, TT15
• Laser Beam Quality Metrics, T. Sean Ross, TT96
• Modeling the Imaging Chain of Digital Cameras, Robert D. Fiete, TT92
• Optical Design: Applying the Fundamentals, Max J. Riedl, TT84
• Optical Design for Visual Systems, Bruce H. Walker, TT45
• Optical Design of Microscopes, George H. Seward, TT88

(For a complete list of Tutorial Texts, see http://spie.org/publications/books/tutorial-texts.)

Other Related SPIE Press Titles


SPIE Field Guides:
• Geometrical Optics, John E. Greivenkamp, FG01
• Image Processing, Khan M. Iftekharuddin and Abdul Awwal, FG25
• Lens Design, Julie Bentley and Craig Olson, FG27
• Microscopy, Tomasz S. Tkaczyk, FG13
SPIE Press Monographs:
• Advances in Sampling Theory and Techniques, Leonid P. Yaroslavsky, PM315
• Designing Optics Using CODE V®, Donald C. O’Shea and Julie L. Bentley, PM292
• Electro-Optical Imaging System Performance, Sixth Edition, Gerald C. Holst, PM278
• Electro-Optical System Analysis and Design: A Radiometric Perspective, Cornelius J. Willers, PM236
• Image Acquisition and Preprocessing for Machine Vision Systems, P. K. Sinha,
PM197
• Introduction to the Optical Transfer Function, Charles S. Williams and Orville A. Becklund, PM112
• Optics Inspections and Tests: A Guide for Optics Inspectors and Designers, Michael
Hausner, PM269
• Optical Specification, Fabrication, and Testing, Jim Schwiegerling, PM252
• Optical Systems Engineering, Keith J. Kasunic, PM235
• Photonics Rules of Thumb, Third Edition, John Lester Miller, Edward J. Friedman,
John N. Sanders-Reed, Katie Schwertz, and Brian K. McComas, PM314
• Signal and Image Restoration: Information Theoretic Approaches, Joseph P. Noonan
and Prabahan Basu, PM213
• Understanding Optical Systems through Theory and Case Studies, Sijiong Zhang,
Changwei Li, and Shun Li, PM276
Library of Congress Cataloging-in-Publication Data

Names: Boreman, G. D. (Glenn D.), author.


Title: Modulation transfer function in optical and electro-optical systems
/ Glenn D. Boreman.
Description: Second edition. | Bellingham, Washington : SPIE Press, [2021]
| Series: Tutorial texts in optical engineering ; Volume TT121 |
Includes bibliographical references and index.
Identifiers: LCCN 2020043215 (print) | LCCN 2020043216 (ebook) | ISBN
9781510639379 (paperback) | ISBN 9781510639386 (pdf)
Subjects: LCSH: Optics. | Electrooptical devices. | Modulation theory.
Classification: LCC TA1520 .B67 2021 (print) | LCC TA1520 (ebook) | DDC
621.36‐‐dc23
LC record available at https://lccn.loc.gov/2020043215
LC ebook record available at https://lccn.loc.gov/2020043216

Published by
SPIE
P.O. Box 10
Bellingham, Washington 98227-0010 USA
Phone: +1 360.676.3290
Fax: +1 360.647.1445
Email: books@spie.org
Web: http://spie.org

Copyright © 2021 Society of Photo-Optical Instrumentation Engineers (SPIE)

All rights reserved. No part of this publication may be reproduced or distributed in any
form or by any means without written permission of the publisher.

The content of this book reflects the work and thought of the author. Every effort
has been made to publish reliable and accurate information herein, but the publisher
is not responsible for the validity of the information or for any outcomes resulting
from reliance thereon.

Printed in the United States of America.


First printing.
For updates to this book, visit http://spie.org and type “TT121” in the search field.
Introduction to the Series
The Tutorial Text series provides readers with an introductory reference text
to a particular field or technology. The books in the series are different from
other technical monographs and textbooks in the manner in which the
material is presented. True to their name, they are tutorial in nature, and
graphical and illustrative material is used whenever possible to better explain
basic and more-advanced topics. Heavy use of tabular reference data and
numerous examples further explain the presented concept. A grasp of the
material can be deepened and clarified by taking corresponding SPIE short
courses.
The initial concept for the series came from Jim Harrington (1942–2018)
in 1989. Jim served as Series Editor from its inception to 2018. The Tutorial
Texts have grown in popularity and scope of material covered since 1989.
They are popular because they provide a ready reference for those wishing to
learn about emerging technologies or the latest information within a new field.
The topics in the series have grown from geometrical optics, optical detectors,
and image processing to include the emerging fields of nanotechnology,
biomedical optics, engineered materials, data processing, and laser technolo-
gies. Authors contributing to the series are instructed to provide introductory
material so that those new to the field may use the book as a starting point to
get a basic grasp of the material.
The publishing time for Tutorial Texts is kept to a minimum so that the
books can be as timely and up-to-date as possible. When a proposal for a text
is received, it is evaluated to determine the relevance of the proposed topic.
This initial reviewing process helps authors identify additional material or
changes in approach early in the writing process, which results in a stronger
book. Once a manuscript is completed, it is peer reviewed by multiple experts
in the field to ensure that it accurately communicates the key components of
the science and technologies in a tutorial style.
It is my goal to continue to maintain the style and quality of books in the
series and to further expand the topic areas to include new emerging fields as
they become of interest to our readers.
Jessica DeGroote Nelson
Optimax Systems, Inc.

Contents
Preface to the Second Edition xi
Preface to the First Edition xiii

1 MTF in Optical Systems 1


1.1 Impulse Response 1
1.2 Spatial Frequency 4
1.3 Transfer Function 7
1.3.1 Modulation transfer function 9
1.3.2 Phase transfer function 12
1.4 MTF and Resolution 16
1.5 Diffraction MTF 18
1.5.1 Calculation of diffraction MTF 21
1.5.2 Diffraction MTFs for obscured systems 25
1.6 Effect of Aberrations on MTF 27
1.6.1 MTF and Strehl ratio 28
1.6.2 Effect of defocus on MTF 29
1.6.3 Effects of other aberrations on MTF 29
1.6.4 Minimum modulation curve 30
1.6.5 Visualizing other MTF dependences 33
1.7 Conclusion 35
References 37
Further Reading 38
2 MTF in Electro-optical Systems 39
2.1 Detector Footprint MTF 39
2.2 Sampling 43
2.2.1 Aliasing 44
2.2.2 Sampling MTF 49
2.3 Crosstalk 59
2.4 Electronic-Network MTF 61
2.5 Conclusion 64
References 65


3 Point-, Line-, and Edge-Spread Function Measurement of MTF 67


3.1 Point-Spread Function (PSF) 67
3.2 Line-Spread Function (LSF) 68
3.3 Edge-Spread Function (ESF) 70
3.4 Comparison of PSF, LSF, and ESF 72
3.5 Increasing SNR in PSF, LSF, and ESF Tests 73
3.5.1 Object- and image-plane equivalence 73
3.5.2 Averaging in pixelated detector arrays 76
3.6 Correcting for Finite Source Size 78
3.7 Correcting for Image-Receiver MTF 80
3.7.1 Finite pixel width 80
3.7.2 Finite sampling interval 81
3.8 Oversampled Knife-Edge Test 81
3.9 Conclusion 83
References 84
4 Square-Wave and Bar-Target Measurement of MTF 85
4.1 Square-Wave Targets 85
4.2 Bar Targets 88
4.3 Conclusion 94
References 95
5 Noise-Target Measurement of MTF 97
5.1 Laser-Speckle MTF Test 98
5.2 Random-Transparency MTF Test 104
5.3 Conclusion 110
References 110
6 Practical Measurement Issues 113
6.1 Measurement of PSF 113
6.2 Cascade Properties of MTF 115
6.3 Quality of Auxiliary Optics 118
6.4 Source Coherence 120
6.5 Low-Frequency Normalization 121
6.6 MTF Testing Observations 122
6.7 Use of Computers in MTF Measurements 122
6.8 Representative Instrument Designs 122
6.8.1 Example system #1: visible edge response 123
6.8.2 Example system #2: infrared line response 123
6.8.3 Example system #3: visible square-wave response 124
6.8.4 Example system #4: bar-target response 126
6.9 Conclusion 126
References 127
Further Reading 127

7 Other MTF Contributions 129


7.1 Motion MTF 129
7.2 Vibration MTF 130
7.3 Turbulence MTF 132
7.4 Aerosol-Scattering MTF 134
7.5 Conclusion 136
References 137
Further Reading 137

Index 139
Preface to the Second Edition
It had been 19 years since the first edition of this book, when the extended
quarantine period of 2020 afforded me the rare opportunity of quiet time
away from my usual administrative and research activities. I have significantly
expanded the treatment of several topics, including bar-target measurements,
noise-target measurements, effects of aberrations, and slant-edge measure-
ments. I have been gratified by the recent industrial and government-lab
interest in the speckle techniques, which, after all, comprised a good portion
of my dissertation at the University of Arizona some 36 years ago. All other
topics in the book were reviewed and updated, with recent references added.
I have kept my original emphasis on practical issues and measurement
techniques.
I acknowledge with pleasure discussions about MTF with colleagues and
their students here at UNC Charlotte, among whom are Profs. Angela Allen,
Chris Evans, and Thomas Suleski. During the writing process, I appreciated
receiving daily encouragement by telephone from Dot Graudons, daily
encouragement via WhatsApp from Prof. Mike Sundheimer of the
Universidade Federal Rural de Pernambuco in Recife, Brazil, and weekly
encouragement via Zoom from Skye Engel. I am grateful for the permissions
granted for reproductions of some of the figures from their original sources, to
the two anonymous reviewers for their insightful and helpful comments, and
to Dara Burrows of SPIE Press for her expert copyediting.
Last but surely not least, I want to thank Maggie Boreman – my wife of
30 years, my main encourager, and technical editor. You have graciously
taken time from your equestrian pleasures to struggle, once again, with
turning my writing into something approaching standard English. Thanks.
Glenn D. Boreman
Emerald Rose Farm
23 November 2020

Preface to the First Edition
I first became aware that there was such a thing as MTF as an undergraduate
at Rochester, scurrying around the Bausch and Lomb building. There was, in
one of the stairwells, a large poster of the Air Force bar target set. I saw that
poster every day, and I remember thinking. . . gee, that’s pretty neat. Well,
more than 25 years later, I still think so. I have had great fun making MTF
measurements on focal-plane arrays, SPRITE detectors, scanning cameras,
IR scene projectors, telescopes, collimators, and infrared antennas. This book
is an outgrowth of a short course that I have presented for SPIE since 1987. In
it, I emphasize some practical things I have learned about making MTF
measurements.
I am grateful for initial discussions on this subject at Arizona with Jack
Gaskill and Stace Dereniak. Since then, I have had the good fortune here at
Central Florida to work with a number of colleagues and graduate students
on MTF issues. I fondly recall discussions of MTF with Arnold Daniels, Jim
Harvey, Didi Dogariu, Karen MacDougall, Marty Sensiper, Ken Barnard, Al
Ducharme, Ofer Hadar, Ric Schildwachter, Barry Anderson, Al Plogstedt,
Christophe Fumeaux, Per Fredin, and Frank Effenberger. I want to thank
Dan Jones of the UCF English Department for his support, as well as Rick
Hermann, Eric Pepper, and Marshall Weathersby of SPIE for their assistance
and enthusiasm for this project. I also appreciate the permissions granted for
reproductions of some of the figures from their original sources.
Last but surely not least, I want to thank Maggie Boreman – my wife,
main encourager, and technical editor. Once again, Meg, you have wrestled
with my occasionally tedious expository and transformed it, if not into poetry,
then at least into prose. Thanks.
GDB
Cocoa Beach
15 March 2001

Chapter 1
MTF in Optical Systems
Linear-systems theory provides a powerful set of tools with which we can
analyze optical and electro-optical systems. The spatial impulse response of
the system is Fourier transformed to yield the spatial-frequency optical
transfer function. These two viewpoints are equivalent ways to describe an
object—as a collection of points or as a summation of spatial frequencies.
Simply expressing the notion of image quality in the frequency domain does
not by itself generate any new information. However, the conceptual change
in viewpoint—instead of a spot size, we now consider a frequency response—
provides additional insight into the behavior of an imaging system,
particularly in the common situation where several subsystems are combined.
We can multiply the individual transfer function of each subsystem to give the
overall transfer function. This procedure is easier than the repeated
convolutions that would be required for a spatial-domain analysis, and
allows immediate visualization of the performance limitations of the
aggregate system in terms of the performance of each of the subsystems.
We can see where the limitations of performance arise and which crucial
components must be improved to yield better overall image quality. We
directly see the effects of diffraction and aberrations at various spatial
frequencies.
In Chapter 1 we develop the transfer-function concept and apply it to
classical optical systems—imaging systems alone without detectors or
electronics. We will first define terms and then discuss image-quality issues.

1.1 Impulse Response


The impulse response h(x,y) is the smallest image detail that an optical
system can form and is called the point-spread function (PSF). It is the blur
spot in the image plane when a point source is the object of an imaging
system. The finite width of the impulse response is a result of the combination
of diffraction and aberration effects. We interpret h(x,y) as an irradiance
(W/cm2) distribution that is a function of the image-plane position. Modeling


the imaging process as a convolution operation (denoted by *), we express the image irradiance distribution g(x,y) as the ideal image f(x,y) convolved with the impulse response h(x,y):

g(x,y) = f(x,y) * h(x,y).  (1.1)

The ideal image f(x,y) is the irradiance distribution that would exist in the
image plane (taking into account the system magnification) if the system had
perfect image quality, in other words, a delta-function impulse response. The
ideal image is thus a magnified version of the input-object irradiance, with all
detail preserved. For conceptual discussions, we typically assume that the
imaging system has unit magnification, so for the ideal image we can directly
take f(x,y) as the object irradiance distribution, albeit as a function of image-
plane coordinates x and y. We can see from Eq. (1.1) that if h(x,y) = δ(x,y),
the image would be a perfect replica of the object. A perfect optical system is
capable of forming a point image of a point object. However, because of the
blurring effects of diffraction and aberrations, a real imaging system has an
impulse response that is not a point. For any real system, h(x,y) has a finite
spatial extent. It is within this context that h(x,y) is referred to as the point-
spread function (PSF)—the image-plane irradiance corresponding to a point
source input. The narrower the PSF, the less blurring occurs in the image-
forming process. A more compact impulse response indicates better image
quality.
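Equation (1.1) can be illustrated with a minimal one-dimensional numerical sketch; the two-point object and the triangular impulse response below are illustrative assumptions, not values from the text:

```python
import numpy as np

# Ideal 1-D "image" f: two point sources on a dark background,
# in arbitrary irradiance units.
f = np.zeros(64)
f[20] = 1.0
f[24] = 1.0

# Hypothetical impulse response h: a narrow, unit-area triangular blur
# standing in for the PSF produced by diffraction and aberrations.
h = np.array([0.25, 0.5, 0.25])

# Eq. (1.1): the recorded image g is f convolved with h.
g = np.convolve(f, h, mode="same")

# The finite-width h spreads each point over neighboring samples and
# lowers its peak; a delta-function h would reproduce f exactly.
print(g[19:26])
```

Because h has unit area, the total irradiance in g equals that in f; only the fine detail is degraded.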
As Fig. 1.1 illustrates, we represent mathematically a point object as a
delta function at location (x′,y′) in object-plane coordinates:

f(x_obj, y_obj) = δ(x′ − x_obj, y′ − y_obj).  (1.2)

Assuming that the system has unit magnification, the ideal image is a delta function located at (x′,y′) in image-plane coordinates:

g(x,y) = δ(x′ − x, y′ − y).  (1.3)

In a real imaging system, the response to the delta-function object of Eq. (1.2) is the impulse response g(x,y) = h(x′ − x, y′ − y), centered at x = x′

Figure 1.1 A delta function in the object plane is mapped to a blur function, the impulse
response, in the image plane.

and y = y′ in the image plane. We represent a continuous function f(x_obj, y_obj)


of object coordinates by breaking the continuous object into a set of point
sources at specific locations, each with a strength proportional to the object
brightness at that particular location. Any given point source has a weighting
factor f(x′,y′) that we determine using the sifting property of the delta function:

f(x′,y′) = ∬ δ(x′ − x_obj, y′ − y_obj) f(x_obj, y_obj) dx_obj dy_obj.  (1.4)

The image of each discrete point source will be the impulse response of
Eq. (1.1) at the conjugate image-plane location, weighted by the correspond-
ing object brightness. The image irradiance function g(x,y) becomes the
summation of weighted impulse responses. This summation can be written as
a convolution of the ideal image function f(x,y) with the impulse response
g(x,y) = ∬ h(x′ − x, y′ − y) f(x′,y′) dx′ dy′,  (1.5)

which is equivalent to Eq. (1.1). Figure 1.2 illustrates the imaging process
using two methods: the clockwise loop demonstrates the weighted superposi-
tion of the impulse responses, and the counterclockwise loop demonstrates a
convolution with the impulse response. Both methods are equivalent.

Figure 1.2 Image formation can be modeled as a convolutional process. The clockwise
loop is a weighted superposition of impulse responses, and the counterclockwise loop is a
convolution with the impulse response.
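The equivalence of the two loops in Fig. 1.2 can be verified directly in a short sketch; the random object samples and the unit-area impulse response are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.random(32)             # arbitrary 1-D object brightness samples
h = np.array([0.2, 0.6, 0.2])  # hypothetical unit-area impulse response

# Counterclockwise loop of Fig. 1.2: a single convolution with h.
g_conv = np.convolve(f, h, mode="full")

# Clockwise loop: treat each object sample as a weighted point source
# and superpose a shifted copy of h for every sample.
g_super = np.zeros(len(f) + len(h) - 1)
for x0, weight in enumerate(f):
    g_super[x0:x0 + len(h)] += weight * h

print(np.allclose(g_conv, g_super))  # True: both loops give the same image
```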

Representing image formation as a convolutional process assumes linearity and shift invariance (LSI). To model imaging as a convolutional
process, we must have a unique impulse response that is valid for any position
or brightness of the point-source object. Linearity is necessary for us to be able
to superimpose the individual impulse responses in the image plane to form
the final image. Linearity requirements are typically accurately satisfied for
the irradiance distribution itself (the so-called aerial image). However, certain
receivers such as photographic film, vidicon tubes,1 IR detector arrays,2 and
xerographic media are particularly nonlinear in their responses. In these cases,
the impulse response is a function of the input irradiance level. We can only
perform LSI analysis for a restricted range of input irradiances. Another
linearity consideration is that coherent optical systems (optical processors) are
linear in the electric field (V/cm), while incoherent systems (imaging systems)
are linear in irradiance (W/cm2). We will deal exclusively with incoherent
imaging systems. Partially coherent systems are not linear in either electric
field or irradiance. Thus, their analysis as a convolutional system is more
complicated, requiring definition of the mutual coherence function.3
Shift invariance is the other requirement for a convolutional analysis.
According to the laws of shift invariance, a single impulse response can be
defined that is not a function of image-plane position. Shift invariance
assumes that the functional form of h(x,y) does not change over the image
plane. This shift invariance allows us to write the impulse response as h(x′ − x, y′ − y), a function of distance from the ideal image point, rather than as a
function of image-plane position in general. Aberrations violate the
assumption of shift invariance because typically the impulse response is a
function of field angle. To preserve a convolutional analysis in this case, we
segment the image plane into isoplanatic regions over which the functional
form of the impulse response does not change appreciably. Typically, we
specify image quality on-axis and at off-axis field locations (typically 0.7 field
and full field). It is worth noting that any variation in impulse response with
field angle implies a corresponding field-dependent variation in transfer
function.

1.2 Spatial Frequency


We can also consider the imaging process from a frequency-domain
(modulation-transfer-function) viewpoint, as an alternative to the spatial-
domain (impulse-response) viewpoint. An object- or image-plane irradiance
distribution is composed of “spatial frequencies” in the same way that a time-
domain electrical signal is composed of various frequencies: by means of a
Fourier analysis. As seen in Fig. 1.3, a given profile across an irradiance
distribution (object or image) is composed of constituent spatial frequencies.
By taking a one-dimensional profile across a two-dimensional irradiance

distribution, we obtain an irradiance-vs-position waveform, which can be Fourier decomposed exactly as if the waveform was in the more familiar form
of volts vs time. A Fourier decomposition answers the question of what
frequencies are contained in the waveform in terms of spatial frequencies with
units of cycles per unit distance, analogous to temporal frequencies in cycles/s
for a time-domain waveform. Typically for optical systems, the spatial
frequency is in cycles/mm. Sometimes we see spatial frequency expressed in
lp/mm (line pairs per mm). One cycle = one black/white line pair.
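A short sketch of such a decomposition, assuming an illustrative sinusoidal irradiance profile sampled along x in millimeters:

```python
import numpy as np

# Irradiance-vs-position waveform: a 10 cycles/mm sinusoid on a dc bias,
# sampled every 0.01 mm over a 1-mm window (illustrative numbers).
dx = 0.01                        # sample spacing in mm
x = np.arange(0, 1.0, dx)        # position in mm
profile = 1.0 + 0.5 * np.cos(2 * np.pi * 10 * x)

# Fourier decomposition; the frequency axis comes out in cycles/mm.
spectrum = np.abs(np.fft.rfft(profile))
freqs = np.fft.rfftfreq(len(profile), d=dx)

# The strongest nonzero component sits at 10 cycles/mm.
peak = freqs[1:][np.argmax(spectrum[1:])]
print(peak)  # 10.0
```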
An example of one basis function for the one-dimensional waveform of
Fig. 1.3 is shown in Fig. 1.4. The spatial period X (crest-to-crest repetition
distance) of the waveform can be inverted to find the x-domain spatial
frequency denoted by ξ ≡ 1/X.
Fourier analysis of optical systems is more general than for time-domain
systems because objects and images are inherently two-dimensional, so the
basis set of component sinusoids is also two-dimensional. Figure 1.5 illustrates
a two-dimensional sinusoid of irradiance. The sinusoid has a spatial period
along both the x and y directions, X and Y, respectively. If we invert these
spatial periods, we find the two spatial-frequency components that describe
this waveform: ξ = 1/X and η = 1/Y. Two pieces of information are required
for specification of the two-dimensional spatial frequency. An alternative
representation is possible using polar (r,u) coordinates, where the minimum
crest-to-crest distance of the sinusoid is R, and the orientation of the minimum
crest-to-crest distance with respect to the x and y axes is θ = tan⁻¹(Y/X).

Figure 1.3 Definition of a spatial-domain irradiance waveform.

Figure 1.4 One-dimensional spatial frequency.



Figure 1.5 (a) Two-dimensional spatial period. (b) Two-dimensional spatial frequency.

The corresponding spatial-frequency variables in polar coordinates are r = √(ξ² + η²) and φ = tan⁻¹(η/ξ). Whatever coordinate system we choose for the
description, the basis set over which objects and images are decomposed
consists of the whole range of possible orientations and spatial frequencies,
where the sinusoid shown in Fig. 1.5 is but a single example.
Two-dimensional functions can be separable or nonseparable. A separable
function of two variables (such as x and y, or r and u) can be described as a
function of one variable multiplied by a function of the other variable.
Whether a function is separable depends on the choice of coordinate system.
For instance, a function that is rotationally symmetric (just a function of r) is
not separable in rectangular coordinates because a slice of the function along
the θ = 45 deg diagonal is identical to a slice of the function along the
horizontal axis, and thus is not a product of x-dependent and y-dependent
functions. If a function is separable in the spatial domain, the Fourier
transform of that function is separable in the corresponding spatial-frequency
domain.
Angular spatial frequency is typically encountered in the specification of
imaging systems designed to observe a target at a long distance. If the target is
far enough away to be in focus for all distances of interest, then it is
convenient to specify system performance in angular units, that is, without
having to specify a particular object distance. Angular spatial frequency is
most often specified in units of cycles/milliradian. These units can initially be a
troublesome concept because both cycles (cy) and milliradians (mrad) are
dimensionless quantities; however, with reference to Fig. 1.6, we find that the
angular spatial frequency ξ_ang is simply the range Z_R multiplied by the target spatial frequency ξ. For a periodic target of spatial period X, we define an angular period θ ≡ X/Z_R, an angle over which the object waveform repeats itself. The angular period is in radians if X and Z_R are in the same units. Inverting this angular period gives angular spatial frequency ξ_ang = Z_R/X.
Given the resolution of optical systems, often X is in meters and Z_R is in kilometers, for which the ratio Z_R/X is then in cy/mrad.
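As a worked example with hypothetical numbers, consider a target of 0.5-m spatial period viewed at a range of 2 km:

```python
# Angular spatial frequency xi_ang = Z_R / X (illustrative values).
X_m = 0.5      # target spatial period in meters
Z_R_km = 2.0   # range in kilometers

# With X in meters and Z_R in kilometers, the ratio is directly in
# cy/mrad, because 1 km / 1 m = 1000 = 1 rad / 1 mrad.
xi_ang = Z_R_km / X_m
print(xi_ang, "cy/mrad")  # 4.0 cy/mrad

# Equivalent check via the angular period theta = X / Z_R, in mrad:
theta_mrad = (X_m / (Z_R_km * 1000)) * 1000
print(1 / theta_mrad)     # 4.0
```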

Figure 1.6 Angular spatial frequency.

1.3 Transfer Function


Equation (1.1) describes the loss of detail inherent in the imaging process as
the convolution of the ideal image function with the impulse response. The
convolution theorem4 states that a convolution in the spatial domain is a
multiplication in the frequency domain. Taking the Fourier transform
(denoted as F ) of both sides of Eq. (1.1) yields

F fgðx, yÞg ¼ F ff ðx, yÞ  hðx, yÞg (1.6)

and
Gðj, hÞ ¼ F ðj, hÞ  Hðj, hÞ, (1.7)

where uppercase functions denote the Fourier transforms of the corresponding lowercase functions: F denotes the object spectrum, G denotes the image spectrum, and H denotes the spectrum of the impulse response. H(ξ,η) is the
transfer function; it relates the object and image spectra multiplicatively. The
Fourier transform changes the irradiance waveform from a spatial-position
function to the spatial-frequency domain but generates no new information.
The appeal of the frequency-domain viewpoint is that the multiplication
of Eq. (1.7) is easier to perform and visualize than the convolution of
Eq. (1.1). This convenience is most apparent in the analysis of imaging
systems consisting of several subsystems, each with its own impulse response.

As Eqs. (1.8) and (1.9) demonstrate, each subsystem has its own transfer
function—the Fourier transform of its impulse response.
The final result of all subsystems operating on the input object distribution
is a multiplication of their respective transfer functions. Figure 1.7 illustrates
that we can analyze a combination of several subsystems by the multiplication
of transfer functions of Eq. (1.9) rather than the convolution of impulse
responses of Eq. (1.8):

f(x,y) * h₁(x,y) * h₂(x,y) * … * hₙ(x,y) = g(x,y)  (1.8)

and

F(ξ,η) × H₁(ξ,η) × H₂(ξ,η) × … × Hₙ(ξ,η) = G(ξ,η).  (1.9)
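The multiplicative cascade of Eq. (1.9) is simple to evaluate numerically; the two subsystem MTF shapes below (a linear rolloff and an exponential rolloff) are illustrative stand-ins, not models of any particular hardware:

```python
import numpy as np

# Spatial-frequency grid in cycles/mm (illustrative range).
xi = np.linspace(0, 100, 6)

# Hypothetical subsystem MTFs, each normalized to 1 at zero frequency.
mtf_optics = np.clip(1 - xi / 100, 0, None)   # linear rolloff
mtf_relay = np.exp(-xi / 80)                  # exponential rolloff

# Eq. (1.9): the aggregate MTF is the frequency-by-frequency product.
mtf_system = mtf_optics * mtf_relay

# The cascade is never better than its worst member at any frequency.
print(np.all(mtf_system <= np.minimum(mtf_optics, mtf_relay)))  # True
```

This makes visible at a glance which subsystem limits performance at each spatial frequency.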

For the classical optical systems under discussion in this first chapter, we
ignore the effects of noise and typically assume that H(ξ,η) has been
normalized to have a unit value at zero spatial frequency. By the central
ordinate theorem of Fourier transforms, this is equivalent to a unit area under
the impulse response. This normalization at low spatial frequencies yields a
relative transmittance as a function of frequency (ignoring constant
attenuation factors such as Fresnel-reflection loss, neutral-density filters,
and atmospheric absorption). The lowest spatial frequency (a flat field, that is,
a uniform irradiance distribution across the entire field of view) is assumed to
come through with unit transmittance. Although this normalization is
common, when we use it, we lose information about the absolute signal
levels. MTF is thus typically not radiometric in the information it conveys. In
some situations, we might want to keep the signal-level information,
particularly when electronics noise is a significant factor. When we want to

Figure 1.7 The aggregate transfer function of several subsystems is a multiplication of their transfer functions.

calculate a spatial-frequency-dependent signal-to-noise ratio, we do not normalize the transfer function.
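The normalization described above can be sketched by Fourier transforming a sampled impulse response and dividing by the zero-frequency value; the Gaussian profile used for h is an illustrative assumption:

```python
import numpy as np

# Hypothetical sampled 1-D impulse response (a Gaussian blur profile).
x = np.linspace(-5, 5, 256)
h = np.exp(-x**2)

# The transfer function is the Fourier transform of the impulse response.
H = np.fft.fft(h)

# Normalize so that H = 1 at zero spatial frequency; by the central
# ordinate theorem, this is the same as scaling h to unit area.
H = H / H[0]

mtf = np.abs(H)
print(mtf[0])  # 1.0: a flat-field input passes with unit transmittance
```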
With this normalization, we refer to H(ξ,η) as the optical transfer function (OTF). Unless the impulse response function h(x,y) satisfies certain symmetry conditions, its Fourier transform H(ξ,η) is, in general, a complex function, having both a magnitude and a phase portion, referred to as the modulation transfer function (MTF) and the phase transfer function (PTF), respectively:

OTF ≡ H(ξ,η) = |H(ξ,η)| exp{jΦ(ξ,η)}  (1.10)

and

MTF ≡ |H(ξ,η)|,  PTF ≡ Φ(ξ,η).  (1.11)

1.3.1 Modulation transfer function


The modulation transfer function is the magnitude response of the optical
system to sinusoids of different spatial frequencies. When we analyze an
optical system in the frequency domain, we consider the imaging of sinewave
inputs (Fig. 1.8) rather than point objects. Sinewave targets are typically
printed as diffuse reflective targets to be viewed under conditions of natural
illumination. Alternatively, sinewaves of irradiance can be generated by
diffraction as Young’s fringes5 or using two-beam interference techniques.6
A linear shift-invariant optical system images a sinusoid as another
sinusoid. The limited spatial resolution of the optical system results in a
decrease in the modulation depth M of the image relative to what was in the
object distribution (Fig. 1.9). Modulation depth is defined as the amplitude of
the irradiance variation divided by the bias level:

M = (A_max − A_min)/(A_max + A_min) = (2 × ac component)/(2 × dc component) = ac/dc.  (1.12)

We can see from Fig. 1.10 that modulation depth is a measure of contrast,
with
M → 0 as Amax − Amin → 0 and M → 1 as Amin → 0.   (1.13)

Figure 1.8 Sinewave target containing several spatial frequencies.


10 Chapter 1

Figure 1.9 Modulation depth decreases going from object to image.

Figure 1.10 Definition of modulation depth per Eq. (1.12): (a) high contrast and (b) low contrast.

The M = 0 condition means that no spatial variation of irradiance exists
in the image. However, the M = 0 condition does not imply that the image-
irradiance level is zero. Another consequence of Eq. (1.13) is that the M = 1
unit-modulation-depth condition is obtained when the irradiance waveform
has a minimum value of zero, regardless of the maximum irradiance level.
This is a consequence of the usual nonradiometric normalization of MTF. In
practice, the variations of irradiance must be sufficient to be seen above any
noise level present. The interpretation of modulation depth as a measure of
contrast is that a waveform with a small modulation depth is difficult to
distinguish against a uniform background irradiance, especially with some
level of noise present.
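The modulation-depth calculation of Eq. (1.12) and the limiting behaviors of Eq. (1.13) can be checked numerically. In the following sketch, the function name and waveform parameters are illustrative choices of ours, not from the text:

```python
import numpy as np

def modulation_depth(waveform):
    """Modulation depth per Eq. (1.12): (Amax - Amin)/(Amax + Amin) = ac/dc."""
    a_max, a_min = np.max(waveform), np.min(waveform)
    return (a_max - a_min) / (a_max + a_min)

x = np.linspace(0.0, 1.0, 1000)
# Irradiance sinusoid with dc bias 1.0 and ac amplitude 0.6, so M is 0.6/1.0
wave = 1.0 + 0.6 * np.cos(2 * np.pi * 5 * x)
print(modulation_depth(wave))        # close to 0.6

# Eq. (1.13): M -> 1 when the minimum irradiance reaches zero,
# regardless of the maximum irradiance level
unit_mod = 7.0 * (1.0 + np.cos(2 * np.pi * 5 * x))
print(modulation_depth(unit_mod))    # close to 1.0
```

Note that the second waveform has a large bias and peak value, yet its modulation depth is essentially unity, illustrating the nonradiometric nature of the normalization.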

The finite spatial extent of the impulse response of the optical system
causes a filling in of the valleys and a lowering of the peak levels of the
sinusoid. This decreases the modulation depth in the image waveform relative
to that in the corresponding object waveform. We define the modulation
transfer (MT) as the ratio of modulation in the image to that in the object:

MT ≡ Mimage(ξ)/Mobject(ξ).   (1.14)

The modulation transfer is, in general, spatial-frequency dependent. The
limited resolution of the optics is more important at high spatial frequencies,
where the scale of the detail is smaller. When we plot modulation transfer
against spatial frequency, we obtain the MTF, generally a decreasing function
of spatial frequency, as seen in Fig. 1.11. In the situation shown where the
object modulation depth is constant, the MTF is simply the image modulation
as a function of spatial frequency:

MTF(ξ) ≡ Mimage(ξ)/Mobject.   (1.15)

Equation (1.14) shows that the modulation of the object waveform does not
need to be unity; if a lower input modulation is used, then the image
modulation will be proportionally lower. We can use non-unit-modulation
targets (Fig. 1.12) for MTF measurements, but a high-contrast target
generally produces the best results.

Figure 1.11 Modulation transfer function is the decrease of modulation depth with
increasing frequency.

Figure 1.12 Examples of non-unit-modulation depth waveforms (adapted from Ref. 7, which also includes information on test targets conforming to requirements of ISO-12233).

1.3.2 Phase transfer function


Recalling the definition of the optical transfer function in Eq. (1.10), we now
proceed to interpret the phase response:

OTF(ξ) ≡ MTF(ξ) exp{jPTF(ξ)}.   (1.16)

For the special case of a reflection-symmetric impulse response centered on
the ideal image point, the phase transfer function (PTF) is particularly simple,
having a value of either zero or π as a function of spatial frequency. For
example, the OTF for a defocused impulse response (Fig. 1.13) exhibits these
phase reversals. The OTF, being a complex quantity, can be plotted as a
function having both positive and negative values, while the MTF is strictly
positive. The PTF shows a phase reversal of π over the spatial frequency
range for which the OTF is negative.
We can see (Fig. 1.14) the phase reversals associated with defocus using
the example of a radial bar target, which has increasing spatial frequency
toward the center. Phase reversals (white-to-black line transitions) are seen
over certain spatial-frequency (radius) ranges of the defocused image. The
MTF is zero at the phase-transition frequency, so we see a uniform gray
irradiance at the corresponding radii of the chart where the black/white phase
transition occurs—one at a relatively low spatial frequency and one at a
higher frequency. The higher-frequency phase transition is of low contrast
because the MTF is lower there.
We also see this π phase reversal on the image of a radial bar target that
was blurred at a 45-deg diagonal direction (Fig. 1.15). Over a certain range of
two-dimensional spatial frequencies, what begins as a black bar at the
periphery of the target becomes a white bar. This phase shift occurs in a gray
transition zone where the MTF goes through a zero.
We see more complicated phase transfer functions when the irradiance of
the impulse response is not reflection symmetric about the center. For the
computed example shown in Fig. 1.16 of a slightly asymmetric impulse

Figure 1.13 OTF, MTF, and PTF for a defocused impulse response.

Figure 1.14 Radial bar target: comparison of focused and defocused images.

response (Gaussian with a slight linear ramp added), the PTF shows
significant variation as a function of spatial frequency. In practice, we most
often encounter a plot of the PTF in the output from an optical-design
computer program. This is because phase distortion is a sensitive indicator for
the presence of aberrations such as coma with an asymmetric PSF for off-axis
image points. The PTF is not typically measured directly, but the information
is available by Fourier transforming a measured impulse response.

Figure 1.15 Radial bar target that has been blurred in the 45-deg diagonal direction.
Phase reversals exist over certain two-dimensional spatial-frequency ranges.

Figure 1.16 Asymmetric impulse responses produce nonlinear PTFs.

Nonlinearity in the PTF causes different spatial frequencies in the image
to recombine with different relative phases. This phase distortion can change
the shape of the spatial waveform describing the image. We illustrate, using a
computed example, that severe phase distortion can produce image irradiance
distributions that differ in significant ways from the object irradiance
distribution. We thus consider the imaging of a four-bar target with equal
lines and spaces (Fig. 1.17). The fundamental spatial frequency ξf of the target
is the inverse of the center-to-center bar spacing as shown.
Although such targets are not periodic in the true sense (being spatially
limited), we will speak of nth harmonics of the fundamental spatial frequency
as simply nξf. In Fig. 1.18 we plot PTF and image-irradiance waveforms for
three π-phase step-discontinuity PTFs. To emphasize the effect of the PTF on
the image, we let MTF = 1 in this computed example, so that all of the spatial

Figure 1.17 Four-bar irradiance image.

Figure 1.18 Image-irradiance-vs-position plots for four-bar-target images under specified phase distortions.

frequencies are present in their original amounts and only the relative phases
have changed. In the first case, the PTF transition occurs at 4ξf. There is no
shift for the fundamental or the third harmonic, and a π shift for higher
frequencies. We see a phase-reversal artifact as a local minimum at the center
of each bar, primarily because the fifth harmonic is out of phase with the third
and the fundamental. In the second case, the transition occurs at 2ξf, so the
only in-phase spatial frequency is the fundamental. The bars are sinusoidal at
the center, with secondary-maxima artifacts in the shoulder of each bar and in
the space between them, arising primarily from the third and fifth harmonics.
The step transition for the third case occurs at 0.9ξf, shifting the fundamental
and all harmonics with respect to frequencies lower than ξf. The most

dramatic artifact is that the image now has five peaks instead of the four seen
in the previous cases.
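The phase-distortion cases above can be reproduced with a discrete Fourier transform: synthesize a four-bar waveform, impose MTF = 1 with a π-phase step PTF, and inverse transform. The sample counts below are arbitrary choices of ours; because |H| = 1 at every frequency, only the relative phases change, so the total energy is preserved even though the waveform shape is not:

```python
import numpy as np

N = 4096
bar_period = 256                     # bars 128 samples wide with equal spaces
target = np.zeros(N)
for k in range(4):
    start = N // 2 - 2 * bar_period + k * bar_period
    target[start:start + bar_period // 2] = 1.0

xi_f = 1.0 / bar_period              # fundamental frequency, cycles/sample
freqs = np.fft.fftfreq(N)            # two-sided frequency axis

def apply_ptf(signal, transition):
    """MTF = 1 at every frequency; PTF = pi step for |xi| > transition."""
    H = np.where(np.abs(freqs) > transition, -1.0, 1.0)
    return np.real(np.fft.ifft(np.fft.fft(signal) * H))

img = apply_ptf(target, 2 * xi_f)    # second case: only the fundamental in phase
# |H| = 1 everywhere, so total energy is preserved (Parseval) even though
# the relative phases, and hence the waveform shape, have changed.
print(bool(np.allclose(np.sum(img**2), np.sum(target**2))))   # True
print(bool(np.allclose(img, target)))                         # False
```

Plotting `img` against sample index reproduces the sinusoidal-bar behavior described for the 2ξf transition; changing the transition to 4ξf or 0.9ξf reproduces the other two cases.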

1.4 MTF and Resolution


Resolution is a quantity without a standardized definition. Figure 1.19
illustrates the image-irradiance-vs-position plots in the spatial domain,
showing a particular separation distance for which images of two points are
said to be resolved. A variety of criteria exist for such determination, based on
the magnitude of the dip in irradiance between the point images. Resolution
can be defined in image-plane distance or in object-space angular measure.
In addition to these definitions, resolution can be specified in the spatial-
frequency domain as the frequency at which the MTF falls below a particular
threshold. A typical value used in practice is 10%, but the required MTF
threshold depends on the application. A threshold MTF and hence limiting
resolution (Fig. 1.20) can be defined in terms of the noise-equivalent
modulation (NEM), which is how much modulation depth is needed to
give a unit signal-to-noise ratio, as a function of spatial frequency. We will

Figure 1.19 Resolution can be defined in the spatial domain.

Figure 1.20 Resolution can be defined in the spatial-frequency domain.



discuss NEM more fully in Chapter 2. The NEM is also referred to in the
literature as the demand modulation function or threshold detectability curve.
A convenient graphical representation is to plot the MTF and the noise-
equivalent modulation on the same axes. The limiting resolution is the
spatial frequency where the two curves cross.
In general, the best overall image-quality performance is achieved by the
imaging system that has the maximum area between the MTF and NEM
curves over the spatial frequency range of interest. This quantity, seen in
Fig. 1.21, is called MTF area (MTFA), which has been correlated to image
quality in human perception tests.8,9
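The limiting resolution and MTFA just described are easy to compute numerically. In this sketch, the linear MTF of Eq. (1.26) and the rising NEM curve are illustrative choices of ours, not data from the text:

```python
import numpy as np

xi_cutoff = 100.0                        # cy/mm, an arbitrary illustrative value
xi = np.linspace(0.0, xi_cutoff, 10001)
mtf = 1.0 - xi / xi_cutoff               # linear MTF, as in Eq. (1.26)
nem = 0.05 + 0.10 * xi / xi_cutoff       # illustrative rising NEM curve

# Limiting resolution: the spatial frequency where the two curves cross
cross = xi[np.argmin(np.abs(mtf - nem))]

# MTFA: area between the MTF and NEM curves where MTF exceeds NEM
gap = np.clip(mtf - nem, 0.0, None)
mtfa = float(np.sum(0.5 * (gap[1:] + gap[:-1]) * np.diff(xi)))
print(cross, mtfa)                       # about 86.4 cy/mm and 41.0
```

For these particular curves the crossing can also be found analytically (1 − s = 0.05 + 0.1s gives s ≈ 0.864 of cutoff), which the numerical result matches.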
Using either the spatial-domain definition or the spatial-frequency-
domain definition, resolution is a single-number performance specification
and, as such, it is often seen as being more convenient to use than MTF
(which is a function instead of a single number). However, MTF provides
more complete performance information than is available from simply
specifying resolution, including information about system performance over a
range of spatial frequencies.
As we can see on the left-hand side of Fig. 1.22, two systems may have an
identical limiting resolution but different performances at lower frequencies.
The system corresponding to the higher of the two curves would have the
better image quality. The right-hand side of Fig. 1.22 shows that resolution

Figure 1.21 MTF area (MTFA) is the area between the MTF and NEM curves. Larger
MTFA indicates better image quality.

Figure 1.22 Limiting resolution does not tell the whole story.

Figure 1.23 Comparison of images for detection, recognition, and identification according
to the Johnson criteria (top right Fig. adapted from Ref. 11 with permission; bottom right
figure adapted from Ref. 12 with permission).

alone can be a misleading performance criterion. The system that has the best
resolution (limiting frequency) has poorer performance at the midrange
frequencies. A decision about which system has better performance requires us
to specify the spatial-frequency range of interest.
One way to determine the range of spatial frequencies of interest is to use
the Johnson criteria,10 where certain spatial frequencies are necessary for
various imaging tasks. Across the smallest dimension of an object, the
Johnson criteria state that: detection (an object is present) requires 0.5 to
1 line pair, recognition (the class of object is discerned) requires 4 to 5 line
pairs, and identification (the specific type of object) requires 6 to 8 line pairs.
These criteria can be visualized in Fig. 1.23. Other imaging tasks, such as
lithography, also have critical spatial frequencies. These are needed, for
example, to ensure proper edge definition and reproduction of sharp corners.

1.5 Diffraction MTF


This introductory section presents results for the circular-pupil case. Because
of the wave nature of light, an optical system with a finite-sized aperture can
never form a point image. A blur spot in the image plane is formed, even in
the absence of aberrations. The smallest spot size that the system can form is
determined by diffraction. The diameter of this diffraction blur is

ddiffraction = 2.44λ(F/#).   (1.17)



The irradiance distribution in the diffraction image of a point source is

E(ρ) = |2J₁(πρ)/(πρ)|²,   (1.18)

where ρ is the normalized radial distance from the center of the pattern:

ρ = r/[λ(F/#)].   (1.19)

Figure 1.24 shows a radial plot of Eq. (1.18). In Fig. 1.25 we see a
two-dimensional plot that is somewhat overexposed (saturating the center
lobe) to better emphasize the ring structure of the diffraction pattern. A two-
dimensional integration of Eq. (1.18) shows that the blur diameter, defined as

Figure 1.24 Radial plot of diffracted irradiance [Eq. (1.18)].

Figure 1.25 Two-dimensional plot of the diffracted irradiance distribution.



the diameter to the first dark ring of the pattern, 2.44λ(F/#), contains 84% of
the power in the irradiance distribution.
In Eqs. (1.17) and (1.19), the parameter F/# is used as a scale factor that
determines the physical size of the diffraction spot. Actually, three different
expressions for F/# can be used for diffraction-limited spot size and
diffraction-limited MTF calculations. As seen in Fig. 1.26, we can define a
working F/# in either object space or in image space in terms of the lens
aperture diameter D and either the object distance p or the image distance q:

(F/#)object-space = p/D   (1.20)

or

(F/#)image-space = q/D.   (1.21)

For the special case of an object at infinity (Fig. 1.27), the image distance
q becomes the lens focal length f, and the image-space F/# becomes

(F/#)image-space = f/D.   (1.22)

Equations (1.20) to (1.22) assume that the lens aperture D is uniformly filled with
light. For instances where this is not the case, such as illumination with a
Gaussian laser beam, D would be reduced to the effective aperture diameter
that the beam illuminates, with a corresponding increase in F/#.
Using Eqs. (1.20), (1.21), or (1.22), we can calculate, either in object space
or in image space, the diffraction spot size from Eq. (1.17). We can also

Figure 1.26 Definition of working F/# for the finite-conjugate situation.

Figure 1.27 Definition of image-space F/# for the object-at-infinity situation.



consider the effect of diffraction in terms of the MTF. Conceptually, a
diffraction MTF can be calculated as the magnitude of the Fourier transform
of the diffraction impulse-response profile given in Eq. (1.18). Diffraction
MTF is a wave-optics calculation for which the only variables (for a given
aperture shape) are the aperture dimension D, wavelength λ, and focal
length f. The MTFdiffraction is the upper limit to the system’s performance; the
effects of optical aberrations are assumed to be negligible. Aberrations
increase the spot size and thus contribute to a reduced MTF. The diffraction
MTF is based on the overall limiting aperture of the system. For an optical
system with multiple elements, the effect of diffraction is only calculated once
per system, at the aperture stop. Diffraction effects are not calculated
separately at each optical element, and the diffraction MTF does not
accumulate multiplicatively on an element-by-element basis.

1.5.1 Calculation of diffraction MTF


We can calculate the diffraction OTF as the normalized autocorrelation of the
exit pupil of the optical system, which we will consider for both the circular-
pupil and square-pupil cases. This is consistent with the definition of Eqs.
(1.6), (1.7), and (1.10), which state that the OTF is the Fourier transform of
the impulse response. For the incoherent systems we consider, the impulse
response h(x,y) is the square of the two-dimensional Fourier transform of the
diffracting aperture p(x,y). The magnitude squared of the diffracted electric-
field amplitude E in V/cm gives us the irradiance profile of the impulse
response in W/cm2:

hdiffraction(x, y) = |F{p(x, y)}|².   (1.23)

From Eq. (1.23), we must implement a change of variables ξ = x/(λf) and
η = y/(λf) to express the impulse response (which is a Fourier transform of the
pupil function) in terms of the image-plane spatial position. We then calculate
the diffraction OTF in the usual way, as the Fourier transform of the impulse
response h(x,y):

OTFdiffraction(ξ, η) = F{hdiffraction(x, y)} = F{|F{p(x, y)}|²}.   (1.24)

Because of the absolute-value-squared operation, the two transform
operations of Eq. (1.24) do not exactly undo each other: the diffraction OTF
is the two-dimensional autocorrelation of the diffracting aperture p(x,y). The
diffraction MTF is thus the magnitude of the (complex) diffraction OTF. As
an example of this calculation, we take the simple case of a square aperture,
seen in Fig. 1.28:

Figure 1.28 Variables for calculating the diffraction MTF of a square-aperture system.

p(x, y) = rect(x/D, y/D).   (1.25)

The autocorrelation of the square is a (real-valued) triangle-shaped function,

MTF(ξ) = 1 − ξ/ξcutoff,   (1.26)

with cutoff frequency defined by

ξcutoff = 1/[λ(F/#)].   (1.27)

For the case of a circular aperture of diameter D, the system has the same
cutoff frequency, ξcutoff = 1/[λ(F/#)], but the MTF has a different functional
form:

MTF(ξ/ξcutoff) = (2/π){cos⁻¹(ξ/ξcutoff) − (ξ/ξcutoff)[1 − (ξ/ξcutoff)²]^(1/2)}   (1.28)

for ξ < ξcutoff and

MTF(ξ/ξcutoff) = 0   (1.29)

for ξ > ξcutoff. These diffraction-limited MTF curves are plotted in Fig. 1.29.
The diffraction-limited MTF is an easy-to-calculate upper limit to
performance; we need only λ and F/# to compute it. An optical system cannot
perform better than its diffraction-limited MTF; any aberrations will pull the
MTF curve down. We can compare the performance specifications of a given
system to the diffraction-limited MTF curve to determine the feasibility of the
proposed specifications, to decide how much headroom has been left for

Figure 1.29 Universal curves representing diffraction-limited MTFs for incoherent systems
with either a circular or a rectangular aperture.

manufacturing tolerances, or to see what performance is possible within the
context of a given choice of λ and F/#.
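The universal curves of Fig. 1.29 follow directly from Eqs. (1.26) through (1.29) and can be coded in a few lines. The wavelength and F/# below are illustrative values of our own choosing:

```python
import numpy as np

def mtf_square(xi, wavelength, fnum):
    """Diffraction-limited MTF of a square aperture, Eqs. (1.26)-(1.27)."""
    s = xi * wavelength * fnum           # normalized frequency xi/xi_cutoff
    return np.clip(1.0 - s, 0.0, None)

def mtf_circular(xi, wavelength, fnum):
    """Diffraction-limited MTF of a circular aperture, Eqs. (1.28)-(1.29)."""
    s = np.clip(xi * wavelength * fnum, 0.0, 1.0)
    return (2.0 / np.pi) * (np.arccos(s) - s * np.sqrt(1.0 - s**2))

wavelength = 0.5e-3                      # mm (0.5 um); illustrative values only
fnum = 4.0
xi_cutoff = 1.0 / (wavelength * fnum)    # 500 cy/mm
print(mtf_square(0.0, wavelength, fnum))          # 1.0 at zero frequency
print(mtf_circular(xi_cutoff, wavelength, fnum))  # 0.0 at cutoff
```

Both functions are normalized to unity at zero frequency and vanish at the same cutoff, with the circular-aperture curve falling below the triangle function at intermediate frequencies.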
As an example of the calculations, let us consider the rectangular-aperture
system of Fig. 1.30. For an object at a finite distance, we use the object-space
or image-space F/# as appropriate to calculate the diffraction-limited ξcutoff in
either the object plane or image plane:

ξcutoff, obj = 1/[λ(F/#)obj space] = 1/[λ(p/D)] = 800 cy/mm   (1.30)

or

ξcutoff, img = 1/[λ(F/#)img space] = 1/[λ(q/D)] = 400 cy/mm.   (1.31)

Figure 1.30 Rectangular-aperture (top) finite-conjugate and (bottom) infinite-conjugate MTF examples.

Because p < q, the image is magnified with respect to the object; hence, a
given feature in the object appears at a lower spatial frequency in the image,
so the two frequencies in Eqs. (1.30) and (1.31) represent the same feature.
The filtering caused by diffraction from the finite aperture is the same,
whether considered in object space or image space. For example, knowing the
cutoff frequency, we can calculate the image spatial frequency for which the
diffraction-limited MTF is 30%. We use Eq. (1.26) to find that 30% MTF is at
70% of the image-plane cutoff frequency, or 280 cy/mm. This calculation is for
diffraction-limited performance, and aberrations will narrow the bandwidth
of the system. Thus, the frequency at which the MTF is 30% will most likely
be lower than 280 cy/mm.
Continuing the example for an object-at-infinity condition, we obtain the
cutoff frequency in the image plane using Eq. (1.27):

ξcutoff, img space = 1/[λ(F/#)] = D/(λf) = 1200 cy/mm.   (1.32)

A given feature experiences the same amount of filtering, whether expressed in
the image plane or in object space. In the object space, we find the cutoff
frequency to be

ξcutoff, obj space = 1/(λ/D) = D/λ = 40 cy/mrad.   (1.33)

Let us verify that this angular spatial frequency corresponds to the same
feature as that in Eq. (1.32). Referring to Fig. 1.31, we use the relationship
between object-space angle θ and image-plane distance X,

X = θf.   (1.34)

Inverting Eq. (1.34) to obtain the angular spatial frequency 1/θ, we have

1/θ = (1/X)f = ξf.   (1.35)

Figure 1.31 Relation between object-space angle θ and image-plane distance X.



Figure 1.32 Heuristic representation of the formula for diffraction MTF.

Given that θ is in radians, if X and f have the same units, we can verify the
correspondence between the frequencies in Eqs. (1.32) and (1.33):

(1/θ)[cy/mrad] = ξ[cy/mm] × f[mm] × (0.001)[rad/mrad].   (1.36)
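The correspondence of Eq. (1.36) can be checked numerically. The focal length is not stated in this excerpt, so in the sketch below it is back-solved from Eqs. (1.32) and (1.33); treat f ≈ 33.3 mm as an inferred assumption, not a value from the text:

```python
# Unit check of Eq. (1.36): (1/theta)[cy/mrad] = xi[cy/mm] x f[mm] x 0.001[rad/mrad].
# The focal length is not given in the excerpt; here it is inferred from
# Eqs. (1.32) and (1.33), so f = 33.3 mm is an assumption.
xi_img = 1200.0                       # cy/mm, image-space cutoff of Eq. (1.32)
angular_cutoff = 40.0                 # cy/mrad, object-space cutoff of Eq. (1.33)
f_mm = angular_cutoff / (xi_img * 0.001)
check = xi_img * f_mm * 0.001         # Eq. (1.36), back in cy/mrad
print(f_mm, check)                    # about 33.3 mm and 40.0 cy/mrad
```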

It is also of interest to verify that the diffraction MTF curves in Fig. 1.29
are consistent with the results of the simple 84% encircled-power diffraction
spot-size formula of 2.44λ(F/#). As a heuristic verification, in Fig. 1.32 we
create a pattern with adjacent bright and dark regions, whose widths are
2.44λ(F/#). We set the bright regions at a magnitude of 84% and the dark
regions at a magnitude of 16%, consistent with the amount of power inside
and outside the central lobe of the diffraction spot, respectively. Overlapping
adjacent diffraction spots with this spacing would create an irradiance
distribution approximating the situation shown. Considering a horizontal
one-dimensional spatial frequency across the pattern, we can calculate both
the spatial frequency and the modulation depth.
The fundamental spatial frequency of the pattern in Fig. 1.32 is

ξ = 1/[4.88λ(F/#)] = 0.21 × [1/(λ(F/#))] = 0.21ξcutoff   (1.37)

and the modulation depth at this frequency is

M = (84 − 16)/(84 + 16) = 68%,   (1.38)

which is in close agreement with Fig. 1.29 for a diffraction-limited circular
aperture, at a frequency of ξ = 0.21ξcutoff, as seen in Fig. 1.33.
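The heuristic numbers of Eqs. (1.37) and (1.38) can be compared directly against the analytic circular-aperture MTF of Eq. (1.28) evaluated at the same normalized frequency; the modest discrepancy reflects the approximate nature of the overlapping-spot construction:

```python
import numpy as np

# Eq. (1.37): bright/dark widths of 2.44*lambda*(F/#) give a fundamental
# period of 4.88*lambda*(F/#), i.e., about 0.21 of the cutoff frequency.
s = 1.0 / 4.88                             # xi / xi_cutoff
m_pattern = (84.0 - 16.0) / (84.0 + 16.0)  # Eq. (1.38): 0.68

# Analytic circular-aperture diffraction MTF, Eq. (1.28), at that frequency
mtf = (2.0 / np.pi) * (np.arccos(s) - s * np.sqrt(1.0 - s * s))
print(m_pattern, round(float(mtf), 3))     # 0.68 versus roughly 0.74
```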

1.5.2 Diffraction MTFs for obscured systems


Many common optical systems, such as Cassegrain telescopes, have an
obscured aperture. We can calculate the diffraction OTF of obscured-aperture
systems according to Eq. (1.24), the autocorrelation of the pupil of the system.
With an obscuration, an attenuation of image-plane irradiance occurs that is
proportional to the fractional blocked area of the pupil. This attenuation
affects all spatial frequencies equally. When the autocorrelation is calculated,

Figure 1.33 Verification of the formula for diffraction MTF.

we see a slight emphasis of the MTF at high frequencies, corresponding to an
overlap of the clear part of the aperture in the shift-multiply-and-integrate
operation of the autocorrelation. This emphasis has come at the expense of
the overall flux transfer. In the usual definition of MTF to be unity at ξ = 0,
the attenuation caused by the obscuration is normalized out when MTF is
plotted.
In Fig. 1.34 the MTF curves for the obscured apertures exceed the
diffraction-limited curve for no obscuration (curve A). This behavior is an
artifact of the normalization, and there is actually no more image modulation
at those frequencies than would exist in the case of an unobscured aperture. If
we let the value of curve B at ξ = 0 be 0.84, the value of curve C at ξ = 0 be
0.75, and the value of curve D at ξ = 0 be 0.44, all of these MTF curves would
be bounded as an upper limit by the diffraction-limited curve A for no
obscuration. The cutoff frequency is the same for all of the obscured apertures,
determined by the diameter of the open aperture 2rₘ and the wavelength λ.

Figure 1.34 Diffraction-limited MTF curves for obscured-aperture systems (adapted from
Ref. 13).
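The behavior shown in Fig. 1.34 can be reproduced by autocorrelating discretized pupils per Eq. (1.24). The grid size, pupil radius, and obscuration ratio below are illustrative choices of ours; an obscuration ratio of 0.5 blocks 25% of the pupil area, roughly corresponding to curve C:

```python
import numpy as np

def pupil_mtf_slice(eps, n=512, r=80):
    """Normalized MTF (horizontal slice) from the FFT-based autocorrelation of
    a circular pupil of pixel radius r with central obscuration ratio eps."""
    y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
    rho = np.hypot(x, y)
    pupil = ((rho <= r) & (rho >= eps * r)).astype(float)
    # Autocorrelation via the Wiener-Khinchin route, as in Eq. (1.24)
    acf = np.fft.fftshift(np.abs(np.fft.ifft2(np.abs(np.fft.fft2(pupil))**2)))
    return acf[n // 2, n // 2:] / acf[n // 2, n // 2]  # unity at zero shift

clear = pupil_mtf_slice(0.0)     # curve A: no obscuration
obscured = pupil_mtf_slice(0.5)  # 25% area blocked, roughly curve C of Fig. 1.34

cutoff = 160                     # pixel shift 2r: the same cutoff for both pupils
print(bool(obscured[cutoff + 5] < 1e-6))               # True: vanishes past cutoff
# Normalization artifact: the obscured curve dips below the clear-aperture
# curve at mid frequencies but exceeds it near the cutoff.
print(bool(obscured[cutoff // 2] < clear[cutoff // 2]))                # True
print(bool(obscured[int(0.85 * cutoff)] > clear[int(0.85 * cutoff)]))  # True
```

The crossover near the high-frequency end is exactly the normalization artifact discussed above: the unnormalized overlap areas there are identical, but the obscured pupil's smaller total area inflates its normalized curve.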

1.6 Effect of Aberrations on MTF


Diffraction-limited performance is the ideal, representing the best possible
imaging achievable from a system with a given F/# and wavelength. The
monochromatic (Seidel) aberrations are image-quality defects that arise both
from the nonlinearity of Snell’s law and because spherical surfaces are
generally used in optical systems. Chromatic aberrations are caused by the
variation of refractive index with wavelength. Defects in manufacture or
alignment will make aberrations worse.
Aberrations will increase the spatial extent of the impulse response,
spreading the irradiance into a larger region and lowering the peak irradiance.
An impulse response of larger spatial extent reduces the image quality,
blurring fine details in the image. Because of the Fourier-transform
relationship, an impulse response of larger spatial extent implies a narrower
MTF. Aberrations generally get worse with decreasing F/# (wider aperture)
and increasing field angle. Defocus and spherical aberration are independent
of field angle, so they affect image quality, even for image points near the
optical axis. Aberrations reduce the MTF and lower the cutoff frequency, as
compared to the diffraction-limited MTF. Figure 1.35 illustrates that the
value of the MTF at any spatial frequency is bounded by the diffraction-
limited MTF curve:

MTFw/aberr(ξ) ≤ MTFdiffraction(ξ).   (1.39)

We can calculate the OTF of an imaging system in the following way. The
wavefront aberration, defined at the exit pupil, is the departure from
sphericity of the nominally spherical wavefront proceeding toward a
particular image point, bounded by the dimensions of the exit pupil. The
OTF is the autocorrelation of this spatially bounded wavefront aberration.
We use the change of variables ξ = x/(λf) and η = y/(λf) to convert a spatial shift
in the autocorrelation to a spatial frequency. If there are no aberrations, the
calculation is simply the autocorrelation of the pupil transmittance function.
Any aberrations present in the system will reduce the MTF, given that the
positive and negative phase variations on the wavefront will autocorrelate to a

Figure 1.35 The effect of aberrations on MTF is to pull the transfer-function curve down.

lower value than would unaberrated wavefronts without such variations. This
calculation is the typical means of computing MTF in optical-design software
programs, accounting for aberrations (by means of the wavefront error) and
diffraction (by means of the pupil dimension and the wavelength).
Alternatively, we can use a convenient back-of-the-envelope method to
estimate MTF. If we have a raytraced spot diagram for the system or a spot
size based on geometrical-optics aberration formulae, we take this as the PSF
h(x,y) and calculate a geometrical MTF in the manner of Eqs. (1.6) and (1.7)
by Fourier transformation. We can approximate the overall MTF by
multiplying the geometrical and diffraction MTFs. This multiplication will
reduce the MTF below the diffraction-limited curve and will lower the cutoff
spatial frequency. By the convolution theorem of Eqs. (1.6) and (1.7), this is
equivalent to convolving the diffraction irradiance distribution with the PSF
of the aberration.14,15 This approach is suitable for encircled-energy
calculations but does not capture the fine spatial irradiance fluctuations of
the aberrated diffraction image.
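The back-of-the-envelope product of geometrical and diffraction MTFs can be sketched as follows. The Gaussian geometrical blur stands in for a raytraced spot diagram and is an arbitrary choice of ours:

```python
import numpy as np

wavelength, fnum = 0.5e-3, 4.0           # mm and F/#, illustrative values only
xi_cutoff = 1.0 / (wavelength * fnum)    # 500 cy/mm
xi = np.linspace(0.0, xi_cutoff, 1001)

s = xi / xi_cutoff
mtf_diff = (2.0 / np.pi) * (np.arccos(s) - s * np.sqrt(1.0 - s**2))  # Eq. (1.28)

# Geometrical MTF for an assumed Gaussian spot of rms radius 2 um (0.002 mm);
# the Fourier transform of a Gaussian PSF is again a Gaussian.
sigma = 0.002
mtf_geom = np.exp(-2.0 * (np.pi * sigma * xi)**2)

mtf_total = mtf_diff * mtf_geom
# The product lies at or below the diffraction-limited curve everywhere.
print(bool(np.all(mtf_total <= mtf_diff + 1e-12)))   # True
```

As the text notes, this multiplicative estimate is equivalent to convolving the diffraction pattern with the geometrical PSF, and it necessarily pulls the combined curve below the diffraction limit.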

1.6.1 MTF and Strehl ratio


A useful single-number performance specification is the Strehl ratio s,
defined as the on-axis irradiance produced by the actual system divided by
the on-axis irradiance that would be formed by a diffraction-limited system of
the same F/#:

s ≡ hactual(x = 0, y = 0)/hdiffraction(x = 0, y = 0).   (1.40)

A Strehl ratio in excess of 0.8 indicates excellent image quality (λ/4 of
wavefront aberration). We can obtain an alternative interpretation of the
Strehl ratio using the central-ordinate theorem for Fourier transforms, which
says that the volume under a two-dimensional function in the transform
domain (or area under a one-dimensional function) equals the on-axis value of
the function in the spatial domain:

f(x = 0, y = 0) = ∬ F(ξ, η) dξ dη.   (1.41)

Using this relation, we can express the Strehl ratio as the volume (area) under
the actual OTF curve divided by the volume (area) under the diffraction-
limited OTF curve:
s = ∬ OTFactual(ξ, η) dξ dη / ∬ OTFdiffraction(ξ, η) dξ dη.   (1.42)

A large Strehl ratio implies a large area under the MTF curve and high
irradiance at the image location. Aberration effects such as those seen in
Fig. 1.35 can be interpreted directly as the decrease in volume (area) under the
MTF curve.
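The equivalence of Eqs. (1.40) and (1.42) through the central-ordinate theorem can be verified with a one-dimensional toy model. The Gaussian profiles below are stand-ins of our own choosing, not real diffraction PSFs:

```python
import numpy as np

n = 256
x = np.arange(n)
d = np.minimum(x, n - x)            # circular distance from index 0
# Toy, symmetric impulse responses centered at index 0, so "on-axis" = h[0].
h_diff = np.exp(-(d / 4.0)**2)      # stand-in diffraction-limited PSF
blur = np.exp(-(d / 6.0)**2)
blur /= blur.sum()                  # unit-volume aberration blur
h_act = np.real(np.fft.ifft(np.fft.fft(h_diff) * np.fft.fft(blur)))

strehl_spatial = h_act[0] / h_diff[0]                 # Eq. (1.40)
# Central-ordinate theorem, Eqs. (1.41)-(1.42): the area under the OTF equals
# the on-axis impulse-response value, so the two ratios must agree.
strehl_otf = np.sum(np.fft.fft(h_act)).real / np.sum(np.fft.fft(h_diff)).real
print(round(float(strehl_spatial), 6), round(float(strehl_otf), 6))
```

Blurring lowers the on-axis value and shrinks the area under the OTF by exactly the same factor, which is the content of Eq. (1.42).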

1.6.2 Effect of defocus on MTF


In Fig. 1.36 we compare a diffraction-limited OTF to that for systems with
increasing amounts of defocus. The amount of defocus is expressed in terms of
optical path difference (OPD), which is the wavefront error at the edge of the
aperture in units of l. In all cases, the diffraction-limited curve is the upper
limit to the OTF. For small amounts of defocus, the OTF curve is pulled
down only slightly. For additional defocus, a significant narrowing of the
transfer function occurs. For a severely defocused system, we observe the
phase-reversal phenomenon that we saw in the defocused image of the radial
bar target of Fig. 1.14.
We can visualize the relationship between Strehl ratio and MTF by
comparing the PSFs and MTFs for the diffraction-limited and λ/4-of-defocus
conditions. For λ/4 of defocus, the transfer function is purely real, so
OTF = MTF. From Fig. 1.37, we see that the defocus moves about 20% of the
power from the center of the impulse response into the ring structure. The on-
axis value of the impulse response is reduced, with a corresponding reduction
in the area under the MTF curve.

1.6.3 Effects of other aberrations on MTF


To further visualize the effects of aberrations on MTF, let us compare the PSF
and MTF for different amounts of wavefront error (WFE). The graphs on
pages 31 to 33 show the PSFs (both as irradiance as a function of position
and as a gray-level irradiance distribution), along with corresponding MTF
plots.

Figure 1.36 Effect of defocus on the OTF of a diffraction-limited circular-aperture system (adapted from Ref. 13).

Figure 1.37 Comparison of diffraction-limited (DL) performance and quarter wavelength of
defocus for (left) PSFs (adapted from Ref. 16 with permission) and (right) MTFs (adapted
from Ref. 13).

We begin with defocus and spherical aberration in Fig. 1.38. Both
aberrations produce radially symmetric PSFs and MTFs, which can be
conveniently represented as one-dimensional functions. We can see the
reduction in on-axis irradiance and corresponding reduction in area under
the MTF curve, both being more pronounced with increasing wavefront error.
Next, in Fig. 1.39 we compare PSF and MTF for various amounts of
coma. The effects of coma are asymmetric, so the PSFs and MTFs are both
two-dimensional functions. The narrowest and widest profiles of the PSFs are
shown, along with the best and worst MTFs, all as one-dimensional plots for
the noted values of WFE. Because of the asymmetry of the PSF, the reduction
in MTF depends on the orientation of the aberrated PSF with the two-
dimensional spatial frequencies of interest. As seen in Fig. 1.40, the MTF is
highest if the wide direction of the PSF is along the constant direction of the
spatial-frequency sinusoid, and is lowest for the orthogonal orientation.
Finally, in Fig. 1.41 we compare PSF and MTF for different amounts of
astigmatism. Again, the image quality depends on the orientation of the
asymmetric PSF with the two-dimensional spatial frequencies of interest, as
seen in Fig. 1.42.

1.6.4 Minimum modulation curve


From a specification viewpoint, we often want to ensure that a given
minimum MTF is met at certain spatial frequencies. Usual practice is to

Figure 1.38 Comparison of PSF and MTF for defocus and spherical aberration, for
different amounts of wavefront error (adapted from Ref. 17 with permission).

specify horizontal and vertical slices of the two-dimensional MTF. However,
this approach can miss important information if there is significant
asymmetry in the MTF, as seen in the previous section. To address this
situation, Aryan, Boreman, and Suleski recently developed a summary
descriptor of the two-dimensional MTF, the minimum modulation curve
(MMC).18 The two-dimensional MTF(ξ, η) can be expressed in polar
coordinates using ρ = (ξ² + η²)^(1/2) and azimuth angle φ = tan⁻¹(η/ξ), yielding
MTF(ρ, φ). The radial spatial frequency ρ is the distance from the center of the
two-dimensional polar plot, and φ is measured from the horizontal (ξ) axis.
To generate the MMC, the MTF is evaluated for all values of φ, for a given
value of ρ. The minimum MTF value at that ρ for any value of φ becomes the
MMC value for that ρ:

MMC(ρ) = min_{φ ∈ [0, 2π]} MTF(ρ, φ).   (1.43)

Figure 1.39 Comparison of PSF and MTF for coma. Best and worst one-dimensional profiles are shown of the two-dimensional PSF and MTF functions for different amounts of wavefront error (adapted from Ref. 17 with permission).

Figure 1.40 Image quality depends on the orientation of the asymmetric PSF with respect to the two-dimensional spatial-frequency components.

So, the MMC presents the minimum MTF found for any azimuth angle, as a
function of ρ. This displays information in a familiar one-dimensional form. If
used as a performance specification, the MMC would guarantee that a given
MTF specification is met for any possible orientation of spatial frequencies in
the image.
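The construction of Eq. (1.43) is straightforward to carry out numerically. The following NumPy sketch is illustrative only: the elliptical-Gaussian MTF, grid sizes, and azimuth count are assumptions for demonstration, not data from the text. It samples a two-dimensional MTF around rings of constant ρ and keeps the azimuthal minimum:

```python
import numpy as np

def minimum_modulation_curve(mtf2d, xi, eta, rho_vals, n_phi=360):
    """MMC(rho) = min over phi of MTF(rho, phi), from a 2D MTF sampled
    on a Cartesian (xi, eta) grid, via bilinear interpolation."""
    phi = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
    mmc = np.empty(len(rho_vals))
    for i, rho in enumerate(rho_vals):
        # Ring of spatial-frequency points at radius rho
        x = rho * np.cos(phi)
        y = rho * np.sin(phi)
        # Bilinear interpolation onto the Cartesian grid
        ix = np.clip(np.searchsorted(xi, x) - 1, 0, len(xi) - 2)
        iy = np.clip(np.searchsorted(eta, y) - 1, 0, len(eta) - 2)
        tx = (x - xi[ix]) / (xi[ix + 1] - xi[ix])
        ty = (y - eta[iy]) / (eta[iy + 1] - eta[iy])
        v = (mtf2d[iy, ix] * (1 - tx) * (1 - ty)
             + mtf2d[iy, ix + 1] * tx * (1 - ty)
             + mtf2d[iy + 1, ix] * (1 - tx) * ty
             + mtf2d[iy + 1, ix + 1] * tx * ty)
        mmc[i] = v.min()
    return mmc

# Hypothetical asymmetric MTF: elliptical Gaussian, wider along xi than eta
xi = np.linspace(-1, 1, 401)
eta = np.linspace(-1, 1, 401)
XI, ETA = np.meshgrid(xi, eta)
mtf2d = np.exp(-((XI / 0.8) ** 2 + (ETA / 0.4) ** 2))

rho = np.array([0.1, 0.2, 0.3])
mmc = minimum_modulation_curve(mtf2d, xi, eta, rho)
# For this MTF the azimuthal minimum lies along the narrow (eta) direction
```

For this example the MMC coincides with the worst-case slice exp[−(ρ/0.4)²], whereas the horizontal slice alone would overstate performance.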
We illustrate this concept using the example of a Cooke triplet at a field
angle of 20 deg. The two-dimensional MTF for this situation has significant
asymmetry, as seen in the color plot and wireframe plots of Fig. 1.43. We can
see qualitatively from those plots that the horizontal and vertical slices of the
MTF are an inadequate representation of the overall performance. In
Fig. 1.44 we plot the horizontal and vertical MTFs for this lens, along with
the MMC. This shows the potential utility of MMC as a practical
performance specification.

Figure 1.41 Comparison of PSF and MTF for astigmatism, for different amounts of wavefront error. Best and worst one-dimensional profiles are shown of the two-dimensional PSF and MTF functions (adapted from Ref. 17 with permission).

Figure 1.42 Image quality depends on the orientation of the asymmetric PSF with respect to the two-dimensional spatial-frequency components.

1.6.5 Visualizing other MTF dependences


Although a typical MTF plot is a function of spatial frequency, sometimes the
dependence of MTF on field position or on the axial location of the image plane is
of interest. Aberrations typically get worse with increasing aperture and field,
so we usually find that the quality of the image is best near the optic axis and
not as good toward the edge of the image. In Fig. 1.45 we see some MTF
plots for an example lens at 10, 20, and 40 lp/mm spatial frequencies, as a
function of location of the image with respect to the optic axis. MTF falls for
higher spatial frequencies and for larger field heights, as expected. The solid
lines are for the optimal orientation of the two-dimensional spatial frequency,
and the dashed lines are for the worst orientation. The PSF is increasingly
asymmetric as the field height increases. Of course, the two plots converge for
small field heights.

Figure 1.43 Two-dimensional MTF for the example lens.

Figure 1.44 Comparison of horizontal and vertical MTFs and MMC for the example lens (reprinted from Ref. 18).
MTF also depends on the axial position of the image plane with respect to
the optical system, known as the through-focus MTF. In Fig. 1.46 we see
MTF plots for an example lens at three specific spatial frequencies as a
function of axial focus position.

Figure 1.45 Best and worst MTFs for an example lens at three specific spatial frequencies, as a function of image-plane field height (adapted from Ref. 19 with permission).

When the lens operates at the wide aperture
setting of F/1.4, spherical aberration is the primary limitation to image
quality. The image plane location is such that 20 lp/mm is at best focus. From
the MTF plots, we see that the 10 lp/mm and 40 lp/mm spatial frequencies
have their best focus positions at different axial locations. In this situation, the
image quality is not the same on either side of best focus. If we stop down this
lens to a smaller aperture (F/4), the best focus shifts away from the lens,
consistent with the behavior of spherical aberration seen in Fig. 1.47. Also, for
the smaller aperture setting, the peak MTFs at each spatial frequency increase
because of the reduction in spherical aberration. For the F/4 case, the three
spatial frequencies are in best focus at about the same axial position,
indicating more symmetry in the through-focus image quality.

1.7 Conclusion
Expression of image quality in terms of a transfer function provides additional
insight into the performance of an optical system, compared to describing the
irradiance of a blur spot or a specification of resolution. We can conveniently
account for the various contributions to image quality by multiplying transfer
functions for the different subsystems. The transfer function approach allows
us to directly see the effects of diffraction and aberrations at various spatial
frequencies.

Figure 1.46 MTFs for an example lens at three specific spatial frequencies as a function of
axial image-plane location. The zero defocusing location is set as best focus for F/1.4 at
20 lp/mm. Top plots are for F/1.4 aperture, and bottom plots are for F/4 aperture, with
defocusing in millimeters (adapted from Ref. 19 with permission).

Figure 1.47 Ray trace showing the presence of spherical aberration. Best focus is the
image plane location resulting in the smallest spot size. It is located between the marginal-
ray focus and the paraxial-ray focus. A reduced aperture diameter will tend to shift the
position of best focus away from the lens.

References
1. G. D. Boreman, A. B. James, and C. R. Costanzo, “Spatial harmonic
distortion: a test for focal plane nonlinearity,” Opt. Eng. 30, pp. 609–614
(1991) [doi: 10.1117/12.55832].
2. G. D. Boreman and C. R. Costanzo, “Compensation for gain
nonuniformity and nonlinearity in HgCdTe infrared charge-coupled-
device focal planes,” Opt. Eng. 26, pp. 981–984 (1987) [doi: 10.1117/12.
7974184].
3. M. Beran and G. Parrent, Theory of Partial Coherence, Prentice-Hall,
Englewood Cliffs, New Jersey (1964).
4. J. Gaskill, Linear Systems, Fourier Transforms, and Optics, John Wiley &
Sons, New York (1978).
5. K. J. Barnard, G. D. Boreman, A. E. Plogstedt, and B. K. Anderson,
“Modulation-transfer function measurement of SPRITE detectors:
sine-wave response,” Appl. Opt. 31(3), 144–147 (1992).
6. M. Marchywka and D. G. Socker, “Modulation transfer function
measurement technique for small-pixel detectors,” Appl. Opt. 31(34),
7198–7213 (1992).
7. N. Koren: www.normankoren.com/Tutorials/MTF.html; www.imatest.com.
8. L. M. Biberman, “Image Quality,” Chapter 2 in Perception of Displayed
Information, L. M. Biberman, Ed., Plenum Press, New York, pp. 52–53
(1973).
9. H. L. Snyder, “Image Quality and Observer Performance,” Chapter 3 in
Perception of Displayed Information, L. M. Biberman, Ed., Plenum Press,
New York, pp. 87–117 (1973).

10. J. Johnson, “Analysis of image forming systems,” in Image Intensifier
Symposium, AD 220160, Warfare Electrical Engineering Department,
U.S. Army Research and Development Laboratories, Fort Belvoir,
Virginia, pp. 244–273 (1958).
11. A. Richards, FLIR Systems Inc., private communication.
12. www.flir.com/discover/security/thermal/5-benefits-of-thermal-imaging-cameras/
13. W. Wolfe and G. Zissis, Eds., The Infrared Handbook, Environmental
Research Institute of Michigan/Infrared Information and Analysis
Center, Ann Arbor, U.S. Office of Naval Research, Washington, D.C.
and SPIE Press, Bellingham, Washington (1985).
14. E. H. Linfoot, “Convoluted spot diagrams and the quality evaluation of
photographic images,” Optica Acta 9(1), pp. 81–100 (1962).
15. K. Miyamoto, “Wave Optics and Geometrical Optics in Optical Design,”
in Progress in Optics, Vol. 1, E. Wolf, Ed., North-Holland, Amsterdam,
pp. 31–40, 40a, 41–66 (1961).
16. J. Wyant, class notes, Wyant College of Optical Sciences, University of
Arizona.
17. www.telescope-optics.net/mtf.htm, with graphics credit to Cor Berrevoets
(http://aberrator.astronomy.net/).
18. H. Aryan, G. D. Boreman, and T. J. Suleski, “The minimum modulation
curve as a tool for specifying optical performance: application to surfaces
with mid-spatial frequency errors,” Opt. Exp. 27(18), 25551–25559
(2019).
19. H. Nasse, “How to read MTF curves,” https://lenspire.zeiss.com/photo/en/article/overview-of-zeiss-camera-lenses-technical-articles

Further Reading
Baker, L. Selected Papers on Optical Transfer Function: Foundation and
Theory, SPIE Milestone Series, Vol. MS59, SPIE Press, Bellingham,
Washington (1992).
Williams, C. S., and Becklund O. A., Introduction to the Optical Transfer
Function, Wiley, New York (1989); reprinted by SPIE Press, Bellingham,
Washington (2002).
Williams, T., The Optical Transfer Function of Imaging Systems, Institute of
Physics Press, Bristol (1999).
https://www.imatest.com/docs/
https://lenspire.zeiss.com/photo/en/article/overview-of-zeiss-camera-lenses-technical-articles
Chapter 2
MTF in Electro-optical Systems
In Chapter 1 we applied a transfer-function-based analysis to describe image
quality in classical optical systems, that is, systems with optical components
only. In this chapter we will examine the MTF of electro-optical systems, that
is, systems that use a combination of optics, scanners, detectors, electronics,
signal processors, and displays. To apply MTF concepts in the analysis of
electro-optical systems, we must generalize our assumptions of linearity and
shift invariance. Noise is inherent in any system with electronics. Linearity is
not strictly valid for systems that have an additive noise level because image
waveforms must be of sufficient irradiance to overcome the noise before they
can be considered to add linearly. The classical MTF theory presented in
Chapter 1 does not account for the effects of noise. We will demonstrate how
to broaden the MTF concept to include this issue. Electro-optical systems
typically include detectors or detector arrays for which the size of the detectors
and the spatial sampling interval are both finite. Because of the shift-variant
nature of the impulse response for sampled-data systems, we will develop the
concept of an average impulse response obtained over a statistical ensemble of
source positions to preserve the convenience of a transfer-function analysis.
We will also develop an expression for the MTF impact of irradiance
averaging over the finite sensor size. With these modifications, we can apply a
transfer-function approach to a wider range of situations.

2.1 Detector Footprint MTF


We often think about the object as being imaged onto the detectors, but it is
also useful to consider where the detectors are imaged. The footprint of a
particular detector, called the instantaneous field of view (IFOV), is the
geometrical projection of that detector into object space. We consider a
scanned imaging system in Fig. 2.1 and a staring focal-plane-array (FPA)
imaging system in Fig. 2.2. In each case, the flux falling onto an individual
detector produces a single output. Inherent in the finite size of the detector
elements is some spatial averaging of the image irradiance.

Figure 2.1 Scanned linear detector array.

Figure 2.2 Staring focal-plane-array imaging system.

For the configurations shown, we have two closely spaced point sources in the object
plane that fall within one detector footprint. The signal output from the sensor
will not indicate the fact that there are two sources. Our first task is to
quantify the spatial-frequency filtering inherent in an imaging system with
finite-sized detectors.
A square detector of size w × w performs spatial averaging of the scene
irradiance that falls on it. When we analyze the situation in one dimension, we
find that the integration of the scene irradiance f(x) over the detector surface is
equivalent to a convolution of f(x) and the rectangle function1 that describes
the detector responsivity:

g(x) = ∫_{−w/2}^{w/2} f(x) dx = f(x) ∗ rect(x/w).   (2.1)
By the convolution theorem,1 Eq. (2.1) is equivalent to filtering in the frequency
domain by a transfer function:

MTF_footprint(ξ) = |sinc(ξw)| = |sin(πξw)/(πξw)|.   (2.2)

Equation (2.2) shows us that the smaller the sensor photosite dimension, the
broader the transfer function. This equation is a fundamental MTF component
for any imaging system with detectors. In any given situation, the detector
footprint may or may not be the main limitation to image quality, but its
contribution to a product such as Eq. (1.9) is always present. Equation (2.2) is
plotted in Fig. 2.3, where we see that the sinc-function MTF has its first zero
at ξ = 1/w. Let us consider the following plausibility argument to justify the fact
that the footprint MTF = 0 at ξ = 1/w.
Figure 2.4 represents spatial averaging of an input irradiance waveform
by sensors of a given dimension w. The individual sensors may represent either
different positions for a scanning sensor or discrete locations in a focal-plane
array. We will consider the effect of spatial sampling in a later section. Here
we consider exclusively the effect of the finite size of the photosensitive regions
of the sensors. We see that at low spatial frequencies there is almost no
reduction in modulation of the image irradiance waveform arising from
spatial averaging over the surfaces of the photosites. As the spatial frequency
increases, the finite size of the detectors becomes more significant. The
averaging leads to a decrease in the maximum values and an increase in the
minimum values of the image waveform, decreasing the modulation depth.
For the spatial frequency ξ = 1/w, one period of the irradiance waveform just
fits onto each detector. Regardless of the position of the input irradiance
waveform with respect to the photosite boundaries, each sensor will collect

Figure 2.3 Sinc-function MTF for detector of width w.



Figure 2.4 At a frequency of ξ = 1/w the modulation depth goes to zero.

exactly the same power (spatially integrated irradiance). The MTF is zero at
ξ = 1/w because each sensor reads the same level and there is no modulation
depth in the resulting output waveform.
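The plausibility argument above can be checked numerically. In this NumPy sketch (the 10-μm pixel width and the test frequencies are arbitrary illustrative values, not from the text), a width-w detector is scanned across a unit-modulation sinusoid, and the modulation depth of its output tracks the |sinc(ξw)| of Eq. (2.2), collapsing at ξ = 1/w:

```python
import numpy as np

def detector_output_modulation(xi, w, n_pos=400, n_sub=2000):
    """Modulation depth of the output of a width-w detector scanning a
    unit-modulation sinusoid f(x) = 1 + cos(2*pi*xi*x)."""
    # Midpoint-rule sample points across the detector aperture
    x = (np.arange(n_sub) + 0.5) / n_sub * w - w / 2
    # Detector positions spanning one period of the input waveform
    x0 = np.linspace(0.0, 1.0 / xi, n_pos)
    g = np.array([np.mean(1 + np.cos(2 * np.pi * xi * (x + c))) for c in x0])
    return (g.max() - g.min()) / (g.max() + g.min())

w = 10e-3                                              # 10-um pixel, in mm
depth_low = detector_output_modulation(10.0, w)        # 10 cyc/mm
depth_null = detector_output_modulation(1.0 / w, w)    # xi = 1/w = 100 cyc/mm
# depth_low tracks |sinc(0.1)|; depth_null is essentially zero (first MTF zero)
```

Note that np.sinc(x) is the normalized sinc, sin(πx)/(πx), matching the convention of Eq. (2.2).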
Extending our analysis to two dimensions, we consider the simple case of
a rectangular detector with different widths along the x and y directions:

h_footprint(x, y) = rect(x/w_x, y/w_y) = rect(x/w_x) rect(y/w_y).   (2.3)

By Fourier transformation, we obtain the OTF, which is a two-dimensional
sinc function:

OTF_footprint(ξ, η) = sinc(ξw_x, ηw_y)   (2.4)

and
MTF_footprint(ξ, η) = |sin(πξw_x)/(πξw_x)| · |sin(πηw_y)/(πηw_y)|.   (2.5)

The impulse response in Eq. (2.3) is separable, that is, h_footprint(x, y) is simply
a function of x multiplied by a function of y. The simplicity of the separable
case is that both h(x, y) and H(ξ, η) are products of two one-dimensional
functions, with the x and y dependences completely separated. Occasionally, a
situation arises in which the detector responsivity function is not separable.2,3 In
that case, we can no longer write the MTF as the product of two one-
dimensional MTFs, as seen in Fig. 2.5. The MTF along the ξ and η spatial-
frequency directions is affected by both the x and y profiles of the detector
footprint. For example, the MTF along the ξ direction is not simply the Fourier
transform of the x profile of the footprint but is

MTF_footprint(ξ, 0) = |H_footprint(ξ, η = 0)| ≠ |F{h_footprint(x, y = 0)}|.   (2.6)

Finding the MTF in these situations requires a two-dimensional Fourier transform
of the detector footprint. The transfer function can then be evaluated along the
ξ or η axis, or along any other desired direction.
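The inequality of Eq. (2.6) can be demonstrated with a small numerical experiment. In this NumPy sketch, the L-shaped footprint and the grid size are hypothetical choices for illustration; the MTF along the ξ axis taken from the full two-dimensional transform differs markedly from the one-dimensional transform of the y = 0 profile:

```python
import numpy as np

# Hypothetical nonseparable, L-shaped detector footprint on a fine grid
n, d = 256, 1.0 / 256                  # grid size and sample spacing (arb. units)
y, x = np.meshgrid(np.arange(n) * d, np.arange(n) * d, indexing="ij")
foot = (((x < 0.2) & (y < 0.1)) |
        ((x < 0.1) & (y >= 0.1) & (y < 0.2))).astype(float)

# MTF along the xi axis from the 2D transform (left-hand side of Eq. 2.6)
H2 = np.fft.fft2(foot)
mtf_2d = np.abs(H2[0, :]) / np.abs(H2[0, 0])

# 1D transform of the x profile through y = 0 (right-hand side of Eq. 2.6;
# incorrect for a nonseparable footprint)
H1 = np.fft.fft(foot[0, :])
mtf_1d = np.abs(H1) / np.abs(H1[0])

# Nonzero disagreement shows the footprint is not separable
max_diff = np.max(np.abs(mtf_2d - mtf_1d))
```

For a separable (rectangular) footprint the same comparison would give zero difference, which is a useful sanity check when adapting this sketch.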

2.2 Sampling
Sampling is a necessary part of the data-acquisition process in any electro-
optical system. We will sample at spatial intervals Δx ≡ x_samp. The spatial
sampling rate is determined by the location of the detectors in a focal-plane
array.

Figure 2.5 Example of a nonseparable detector footprint (adapted from Ref. 3).

The process of spatial sampling has two main effects on image quality:
aliasing and the sampling MTF.

2.2.1 Aliasing
Aliasing is an image artifact that occurs when we insufficiently sample a
waveform. We assume that the image irradiance waveform of interest has
already been decomposed into its constituent sinusoids. Therefore, we can
consider a sinusoidal irradiance waveform of spatial frequency ξ. If we choose a
sampling interval sufficient to locate the peaks and valleys of the sinewave, then
we can reconstruct that particular frequency component unambiguously from
its sampled values, assuming that the samples are not all taken at the same level
(the 50%-amplitude point of the sinusoid). Thus, the two-samples-per-cycle
minimum sampling rate seen in Fig. 2.6 corresponds to the Nyquist condition:

Δx ≡ x_samp = 1/(2ξ).   (2.7)

If the sampling is less frequent [x_samp > 1/(2ξ)] than required by the Nyquist
condition, then we see the samples as representing a lower-frequency sinewave
(Fig. 2.7). Even though both sinewaves shown are consistent with the samples,
we will perceive the low-frequency waveform when looking at the sampled
values. This image artifact, where samples of a high-frequency waveform
appear to represent a low-frequency waveform, is an example of aliasing.
Aliasing is symmetric about the Nyquist frequency of ξ_Nyquist = 1/(2x_samp),
which means that the amount by which a waveform’s spatial frequency exceeds
1/(2x_samp) is the amount by which we perceive it to be below the Nyquist
frequency. So, a frequency transformation of

(ξ_Nyquist + Δξ) → (ξ_Nyquist − Δξ)   (2.8)

takes place between the input waveform and the aliased image data.
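The folding of Eq. (2.8) is easy to verify numerically: a cosine at ξ_Nyquist + Δξ and its alias at ξ_Nyquist − Δξ produce identical sample values, so the two are indistinguishable after sampling. A short NumPy sketch (the values of x_samp and Δξ are arbitrary):

```python
import numpy as np

x_samp = 1.0                       # sampling interval (arbitrary units)
xi_nyq = 1.0 / (2.0 * x_samp)      # Nyquist frequency, Eq. (2.7)
d_xi = 0.12                        # offset above Nyquist (illustrative)

k = np.arange(32)
x = k * x_samp                     # sample locations

above = np.cos(2 * np.pi * (xi_nyq + d_xi) * x)   # under-sampled waveform
below = np.cos(2 * np.pi * (xi_nyq - d_xi) * x)   # its alias, per Eq. (2.8)

# Identical samples: after sampling, the high frequency masquerades as the low one
match = np.allclose(above, below)
```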

Figure 2.6 Nyquist sampling condition of two samples per cycle.


Figure 2.7 Aliasing of a high-frequency waveform (solid line) to a lower-spatial-frequency waveform (dashed line) by insufficient sampling.

Figure 2.8 Example of aliasing in the image of a radial bar target.

Figure 2.8 shows an example of aliasing for the case of a radial bar target,
for which the spatial frequency increases toward the center. The right-hand
image has been sampled using a larger sampling interval. With an insufficient
spatial-sampling rate, we see that the high frequencies near the center are
aliased into the appearance of lower spatial frequencies.
Figure 2.9 is a three-bar target that shows aliasing artifacts. The left image
was acquired with a small spatial-sampling interval, and we see that the bars
have equal lines and spaces, and are of equal density. The right image was
acquired with a larger spatial-sampling interval. Although bar targets are not
periodic in the true sense, we can consider the nth harmonics of the
fundamental spatial frequency ξ_f as nξ_f. Some of these frequencies are above
ξ_Nyquist and are not adequately sampled. The fact that not all of the bars in a

Figure 2.9 Bar-target pattern showing aliasing artifacts.

given three-bar pattern are of the same width or density in the undersampled
image on the right is evidence of aliasing.
Figure 2.10 shows another example of aliasing using three versions of a
scene. Part (a) is a 512 × 512 image, which appears spatially continuous
without significant aliasing artifacts evident. Part (b) has been downsampled
to 128 × 128 pixels, and aliasing artifacts in sharp edges begin to be visible
because of the lower ξ_Nyquist. In part (c), the image has been downsampled to
64 × 64 pixels, and we see extensive aliasing artifacts as low-frequency
banding in the folds of the shirt and the sharp edges.

Figure 2.10 Pictorial example of aliasing.



After the irradiance waveform has been sampled, aliasing artifacts cannot
be removed by filtering because, by Eq. (2.8), the aliased components have
been lowered in frequency to fall within the main spatial-frequency passband
of the system. Thus, to remove aliasing artifacts at this point requires the
attenuation of broad spatial-frequency ranges of the image data. We can
avoid aliasing in the first place by prefiltering the image, that is, bandlimiting
it before the sampling occurs. The ideal anti-aliasing filter, seen in Fig. 2.11,
would pass at unit amplitude all frequency components for which ξ < ξ_Nyquist
and attenuate completely all components for which ξ > ξ_Nyquist. The problem
is that neither the detector MTF (a sinc function) nor the optics MTF
(bounded by an autocorrelation function) follows the form of the desired anti-
aliasing filter.
An abrupt filter shape such as the one in Fig. 2.11 can be implemented in
the electronics subsystem. However, at that stage the image irradiance has
already been sampled by the sensors, so the electrical filter cannot effectively
serve an anti-aliasing function. The optics MTF offers some flexibility as an
anti-aliasing filter but, because it is bounded by the autocorrelation function
of the aperture, it does not allow for the abrupt-cutoff behavior desired. By
choosing λ and F/# we can control the cutoff frequency of the optics MTF.
However, this forces a tradeoff of reduced MTF at frequencies less than
ξ_Nyquist against the amount of residual aliasing. Using the diffraction-limited
MTF as in Eq. (1.26) or (1.28) and Fig. 1.29 as an anti-aliasing filter requires
setting the cutoff so that MTF(ξ ≥ ξ_Nyquist) = 0. This results in a loss of
considerable area under the MTF curve at frequencies below Nyquist. If we
set the cutoff frequency higher, we preserve additional modulation depth for
ξ < ξ_Nyquist at the expense of nonzero MTF above Nyquist, which results in
higher visibility of aliasing artifacts. A small amount of defocus is occasionally used

Figure 2.11 Comparison of an ideal anti-aliasing filter to filters corresponding to diffraction-limited optics MTF, showing a tradeoff of MTF below Nyquist with an amount of residual aliasing.

in a bandlimiting context, but the MTF does not have the desired functional
form either, and hence a similar tradeoff applies.
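The tradeoff can be quantified with the standard diffraction-limited MTF of an incoherent circular-aperture system, MTF(ξ) = (2/π)[cos⁻¹(ξ/ξc) − (ξ/ξc)√(1 − (ξ/ξc)²)]. In the NumPy sketch below (the Nyquist frequency and the two cutoff choices are illustrative assumptions), setting ξc = ξ_Nyquist eliminates response above Nyquist but sacrifices in-band area, while a higher cutoff preserves in-band modulation at the cost of residual response above Nyquist:

```python
import numpy as np

def mtf_dl(xi, xi_c):
    """Diffraction-limited MTF of an incoherent circular-aperture system."""
    u = np.clip(np.abs(xi) / xi_c, 0.0, 1.0)
    return (2.0 / np.pi) * (np.arccos(u) - u * np.sqrt(1.0 - u * u))

xi_nyq = 50.0                          # cyc/mm, illustrative
xi = np.linspace(0.0, 200.0, 4001)
dxi = xi[1] - xi[0]

m_match = mtf_dl(xi, xi_nyq)           # cutoff set at the Nyquist frequency
m_high = mtf_dl(xi, 3.0 * xi_nyq)      # cutoff set well above Nyquist

below = xi <= xi_nyq
area_below_match = np.sum(m_match[below]) * dxi   # useful in-band modulation
area_below_high = np.sum(m_high[below]) * dxi
area_above_match = np.sum(m_match[~below]) * dxi  # zero: no response above Nyquist
area_above_high = np.sum(m_high[~below]) * dxi    # > 0: residual aliasing potential
```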
Birefringent filters that are sensitive to the polarization state of the
incident radiation can be configured to perform an anti-aliasing function,4
although still without the ideal abrupt-cutoff MTF shown in Fig. 2.11. A filter
of the type shown in Fig. 2.12 is particularly useful in color focal-plane arrays,
where different spectral filters (red, blue, green) are placed on adjacent
photosites. Because most visual information is received in the green portion of
the spectrum, it is radiometrically advantageous to set the sampling interval
for the red- and blue-filtered detectors wider than for the green-filtered
detectors. If we consider each color separately, we find a situation equivalent
to the sparse-array configuration seen in Fig. 2.12, where the active photosites
for a given color are shown shaded. The function of the birefringent filter is to
split an incident ray into two components. A single point in object space maps
to two points in image space, with a spacing equal to one-half of the detector-
to-detector distance. The impulse response of the filter is two delta functions:

h_filter(x) = (1/2)[δ(x) + δ(x + x_samp/2)].   (2.9)

The corresponding filter transfer function can be found by Fourier
transformation as

MTF_filter(ξ) = |cos[2π(x_samp/4)ξ]|,   (2.10)

which has its first zero at 1/x_samp = 2ξ_Nyquist. The birefringent filter thus
provides a degree of prefiltering, in that the bandlimiting function is applied
before the image is sampled by the detector array. The blur obtained using a

Figure 2.12 Mechanism of a birefringent anti-aliasing filter.


birefringent filter is approximately independent of the lens F/#, which is not
the case when defocus is used as a prefilter. Typically, F/# is needed as a
variable to control the image-plane irradiance.
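Equations (2.9) and (2.10) can be cross-checked numerically: the transfer function of the two-delta impulse response reduces to the cosine form, with its first zero at twice the Nyquist frequency. A NumPy sketch, with x_samp normalized to 1 (an arbitrary choice):

```python
import numpy as np

x_samp = 1.0
xi = np.linspace(0.0, 2.5, 501)

# Transfer function of the two-delta impulse response of Eq. (2.9):
# h(x) = 0.5*[delta(x) + delta(x + x_samp/2)]
H = 0.5 * (1.0 + np.exp(1j * 2 * np.pi * xi * x_samp / 2))
mtf_num = np.abs(H)

# Closed form of Eq. (2.10)
mtf_eq = np.abs(np.cos(2 * np.pi * (x_samp / 4) * xi))

xi_nyq = 1.0 / (2 * x_samp)
# The first zero falls at xi = 1/x_samp, i.e., twice the Nyquist frequency
first_zero = xi[np.argmin(mtf_eq)]
```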

2.2.2 Sampling MTF


We can easily see that a sampled-imaging system is not shift invariant.
Consider the FPA imager seen in Fig. 2.13. The position of the image-
irradiance function with respect to the sampling sites will affect the final image
data. If the image is aligned so that most of the image irradiance falls
completely on one single column of the imager, then a high-level signal is
produced that is spatially compact. If the image irradiance function is moved
slightly so that it falls on two adjacent columns, the flux from the source is
split in two, and a lower-level signal of broader spatial extent is produced. If
we compute the MTF for such a sampled system, the Fourier transform of the
spatial-domain image will depend on the alignment of the target and the
sampling sites, with the best alignment giving the broadest MTF.
This shift variance violates one of the main assumptions required for a
convolutional analysis of the image-forming process. To preserve the
convenience of a transfer-function approach, the concept of impulse response
can be generalized to define a shift-invariant quantity. Following Park,
Schowengerdt, and Kaczynski,5 we define a spatially averaged impulse
response and a corresponding MTF component that is inherent in the
sampling process itself by assuming that the scene being imaged is randomly
positioned with respect to the sampling sites. This random alignment
corresponds to the situation where a natural scene is imaged with an
ensemble of individual alignments. Park’s original work was in the context of
star-field images. For a two-dimensional rectangular sampling grid, the

Figure 2.13 A sampled image-forming system is not shift invariant.



sampling impulse response is a rectangle function whose widths are equal to
the sampling intervals in each direction:

h_sampling(x, y) = rect(x/x_samp, y/y_samp).   (2.11)

We see from Eq. (2.11) that wider-spaced sampling produces an image with
poorer image quality. An average sampling MTF can be defined as the
magnitude of the Fourier transform of h_sampling(x, y):

MTF_sampling = |F{rect(x/x_samp, y/y_samp)}|,   (2.12)

which yields a sinc-function sampling MTF:


 
MTF_sampling = |sinc(ξx_samp, ηy_samp)| = |sin(πξx_samp)/(πξx_samp)| · |sin(πηy_samp)/(πηy_samp)|.   (2.13)

The sampling MTF is equivalent to the average of the MTFs that would be
realized for an ensemble of image locations, uniformly distributed with respect
to the sampling sites. As Fig. 2.13 demonstrates, when the alignment is
optimum, the MTF is broad, but for other source positions, the MTF is
narrower. The sampling MTF is the average over all possible MTFs. Thus
defined, the sampling MTF is a shift-invariant quantity, and we can proceed
with a usual transfer-function-based analysis. The sampling MTF of Eq. (2.13)
is a component that multiplies the other MTF components for the system.
However, this sampling MTF does not contribute in an MTF-measurement
setup where the test target is aligned with the sampling sites because the central
assumption in its derivation is the random position of any image feature with
respect to the sampling sites. In typical MTF test procedures, we adjust the
position of the test target to yield the best output signal (most compact output,
best appearance of bar-target images). In the typical test-setup case, the
sampling MTF equals unity except where random-noise test targets6 that
explicitly include the sampling MTF in the measurement are used. Because
typical test procedures preclude the sampling MTF from contributing to the
measurements, the sampling MTF is often forgotten in a system analysis.
However, when the scene being imaged has no net alignment with respect to the
sampling sites, the sampling MTF will contribute in practice and should
therefore be included in the system-performance modeling.
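The ensemble-average origin of Eq. (2.13) lends itself to a Monte Carlo check. In this one-dimensional NumPy sketch (the nearest-site rendering model and the sample counts are illustrative assumptions, not the derivation of Ref. 5), averaging the phase factor of a uniformly distributed sub-pixel displacement reproduces |sinc(ξ·x_samp)|:

```python
import numpy as np

rng = np.random.default_rng(0)
x_samp = 1.0
xi = np.linspace(0.0, 1.0, 101)        # spatial frequencies up to 1/x_samp

# A point imaged at random position u is rendered at the nearest sampling
# site, i.e., displaced by e = (nearest site) - u, uniform over one pixel.
u = rng.uniform(0.0, x_samp, 50000)
e = np.round(u / x_samp) * x_samp - u

# Ensemble-averaged OTF: mean phase factor over the random displacements
otf_avg = np.array([abs(np.mean(np.exp(-1j * 2 * np.pi * f * e))) for f in xi])

# One-dimensional form of Eq. (2.13): |sinc(xi * x_samp)|
mtf_sinc = np.abs(np.sinc(xi * x_samp))
```

The agreement is limited only by Monte Carlo noise, illustrating that the sampling MTF is a statistical average over source positions rather than the MTF of any single alignment.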
The combined MTF of the optics, detector footprint, and sampling can be
much less than initially expected, especially considering two common
misconceptions that neglect the detector and sampling MTFs. The first error
is to assume that if the optics blur-spot size is matched to the detector size then
there is no additional image-quality degradation from the finite detector size.
We can see from Eq. (1.9) that the optics and detector MTFs multiply, and
hence both terms contribute. Also, it is quite common to forget the sampling
MTF, which misses another multiplicative sinc contribution. As an
illustration of the case where the optics blur-spot size is matched to the
detector size in a contiguous FPA, we assume that each impulse response can
be modeled in a one-dimensional analysis as a rect function of width w_x. Each
of the three MTF terms is a sinc function, and the product is proportional to
sinc³(ξw_x), as seen in Fig. 2.14. The difference between the expected sinc(ξw_x)
and the actual sinc³(ξw_x) at midrange spatial frequencies can be 30% or more
in terms of absolute MTF, and at high frequencies the MTF can go from an
expected value of 20% to essentially zero, with the inclusion of the two
additional sinc-function contributions.
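The sinc-versus-sinc³ comparison can be tabulated directly. A NumPy sketch for the matched, contiguous case described above, with the pixel width, blur width, and sampling interval all set to w (normalized to 1 for illustration):

```python
import numpy as np

w = 1.0                        # pixel width = blur width = sample spacing
xi = np.linspace(0.0, 1.0 / w, 201)
s = np.abs(np.sinc(xi * w))    # np.sinc(x) = sin(pi x)/(pi x)

mtf_expected = s               # detector footprint alone
mtf_actual = s ** 3            # footprint x optics blur x sampling, all rect(w)

# At mid-band (xi = 1/(2w), the Nyquist frequency) the shortfall is large
mid = np.argmin(np.abs(xi - 0.5 / w))
drop = mtf_expected[mid] - mtf_actual[mid]
```

At Nyquist, sinc(0.5) ≈ 0.64 while sinc³(0.5) ≈ 0.26, consistent with the 30%-or-more mid-band shortfall quoted in the text.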
The recent trend in visible and IR FPAs is that pixels are very small (on the
order of a wavelength), with a near-contiguous FPA pitch. This has several
image-quality advantages. Finer spatial sampling pushes the Nyquist frequency
higher, increasing the un-aliased bandwidth. For a given system MTF, increasing
the Nyquist frequency lowers the MTF at the onset of aliasing, decreasing the
visibility of aliasing artifacts. Additionally, small contiguous pixels increase the
overall system MTF by increasing the footprint MTF and the sampling MTF at
a given spatial frequency. However, there is a decreased amount of performance
to be gained in terms of overall system MTF because the upper limit to the
overall MTF is the (diffraction-limited) optics MTF. Nonetheless, given the
reduced cost of large-pixel-count FPAs, it is worth pushing into the range of
“diminishing returns” to get slightly better overall MTF, which directly translates
into increased detection- and recognition-range performance for IR systems.
We see a direct demonstration of the sampling MTF contribution in the
following discussion. This method itself is of decreasing importance nowadays
because FPA pixel densities are so high, but it is included here to illustrate the
contribution of the sampling MTF. The pixel spacing on a FPA is of course
fixed, but we can decrease the spatial sampling interval by microscanning
(also called microdither).7–10 The image falling on the FPA is moved by a
piezoelectric actuator that moves optical elements or by a liquid crystal beam

Figure 2.14 MTF contributions multiply for detector footprint, optics blur spot, and
sampling.

steerer, in either case slightly displacing the line of sight. Successive frames of
displaced image samples are obtained and interlaced with appropriate spatial
offsets. Usually, we collect four frames of data at half-pixel relative shifts, as
shown in Fig. 2.15. In part (a) we see the original image of the object
superimposed on the detector array. In part (b) the image location with
respect to the detector array is shifted by a half-pixel spacing in the horizontal
direction. In part (c) the image location is shifted by a half-pixel spacing in the
vertical direction. In part (d) the image location is shifted by a half-pixel
spacing in both the horizontal and vertical directions.
The four frames are interlaced to produce an output frame with twice the
effective sampling rate in each direction. Finer sampling yields better
sampling MTF along with higher Nyquist frequencies. The fact that
microscanning produces better pictures (Fig. 2.16) is intuitive proof of the
existence of a sampling MTF contribution because the detector size and the
optics MTF are both unchanged. The single MTF component that is
improved by the microscan technique is the sampling MTF. The drawback to
microscanning, from a systems viewpoint, is that the frame takes longer to
acquire for a given integration time. Alternatively, if we keep the frame rate
constant, the integration time decreases, which can have a negative impact on
the signal-to-noise ratio. There is also an additional data-processing
complexity involved in interlacing the frames together.

Figure 2.15 Illustration of the microscan process (adapted from Ref. 9).

MTF in Electro-optical Systems 53

Figure 2.16 Microscan imagery: (left) the original image, (center) the four shifted images,
and (right) the interlaced image (adapted from Ref. 10).
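The interlacing step itself is simple bookkeeping. The following sketch (our own illustration, not from the text; the function name and nested-list frame layout are assumptions) weaves four equal-sized frames, captured at the four half-pixel offsets of Fig. 2.15, into one output frame with twice the sampling rate in each direction:

```python
# Sketch: interlace four microscanned frames, captured at half-pixel offsets,
# into one frame with twice the sampling rate in each direction.
# Frames follow Fig. 2.15: fa = no shift, fb = half-pixel shift in x,
# fc = half-pixel shift in y, fd = half-pixel shift in both x and y.
def interlace_microscan(fa, fb, fc, fd):
    rows, cols = len(fa), len(fa[0])
    out = [[0.0] * (2 * cols) for _ in range(2 * rows)]
    for r in range(rows):
        for c in range(cols):
            out[2 * r][2 * c] = fa[r][c]          # original sample sites
            out[2 * r][2 * c + 1] = fb[r][c]      # sites shifted in x
            out[2 * r + 1][2 * c] = fc[r][c]      # sites shifted in y
            out[2 * r + 1][2 * c + 1] = fd[r][c]  # sites shifted in x and y
    return out
```

Each 2×2 output block holds one sample from each of the four frames; the detector size and the optics are untouched, so the only MTF component that changes is the sampling MTF.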
We now compare several sampled-image situations with FPAs, where we
consider the overall MTF as the product of sampling MTF and footprint
MTF. We consider only the MTF inherent in the averaging-and-sampling
process arising from finite-sized detectors that have a finite center-to-center
spacing. In actual practice, other MTF contributions such as those arising in
the fore-optics or electronics subsystems would multiply these results. For
simplicity, we analyze only the x and y sampling directions. Recall that the
Nyquist frequency is the inverse of twice the sampling interval in each
direction. In each of the cases considered, we keep the sensor dimension
constant at w and investigate the aggregate MTF as we vary the sampling
situation. In each of the following FPA examples, the image quality in the x
and y directions is identical. We do not account for the finite dimension of the
array as a whole in any of the examples.
First, we consider the sparse FPA shown in Fig. 2.17. In this case, the
sampling interval is twice the detector width. Because the Nyquist frequency is
rather low at ξ = 0.25/w, the MTF is high at the aliasing frequency,
which means that aliasing artifacts such as those seen in Figs. 2.8 through
2.10 are visible with high contrast. In addition, the large sampling interval
places the first zero of the sampling MTF at ξ = 0.5/w, narrowing the
MTF product considerably compared to the detector MTF, which has a first
zero at ξ = 1/w.

Figure 2.17 MTF for a sparse focal-plane array.
In Fig. 2.18 the sensor size remains the same as in Fig. 2.17, but now the
detectors are contiguous, with a sampling interval equal to the detector width.
The Nyquist frequency is thus raised to ξ = 0.5/w, which has two related
effects. First, because the aliasing frequency is higher, the MTF is lower at the
aliasing frequency, so aliasing artifacts are not as visible. Also, the usable
bandwidth of the system, from dc to the onset of aliasing, has been increased
by the finer sampling. The sinc-function MTF for the detectors and for the
sampling is identical, with a first zero at ξ = 1/w for each. Their product is a
sinc-squared function, which has considerably higher MTF than did the MTF
of the sparse array seen in Fig. 2.17.
In Fig. 2.19 we consider a situation that would arise in the context of
microdither. The sensors are physically contiguous, and their size remains the
same; but now samples are taken at half-pixel intervals in each direction. The
Nyquist frequency rises to ξ = 1/w, which increases the available un-aliased
bandwidth. Also, in this case the MTF is zero at Nyquist. Thus, as spatial
frequency increases, any aliasing artifacts will come in slowly and initially
with very low visibility. The sinc-function MTF for the detectors has its first
zero at ξ = 1/w, while the sinc-function MTF for the sampling is twice as wide,
having its first zero at ξ = 2/w. Thus, the product of these two MTFs is wider
than it is for the overall MTF seen in Fig. 2.18.
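The three cases of Figs. 2.17 through 2.19 can be compared numerically. In the sketch below (our own illustration, with the detector width w normalized to 1), the overall MTF is the product of the footprint sinc and the sampling sinc, and the Nyquist frequency is the inverse of twice the sampling interval p:

```python
import math

def sinc(x):
    """Normalized sinc, sin(pi x)/(pi x)."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def overall_mtf(xi, w, p):
    """Footprint MTF (detector width w) times sampling MTF (interval p)."""
    return abs(sinc(w * xi)) * abs(sinc(p * xi))

w = 1.0
for label, p in [("sparse (Fig. 2.17)", 2 * w),
                 ("contiguous (Fig. 2.18)", w),
                 ("microdither (Fig. 2.19)", w / 2)]:
    nyq = 1 / (2 * p)
    print(label, "Nyquist =", nyq,
          "overall MTF at Nyquist =", round(overall_mtf(nyq, w, p), 3))
```

The sparse array has both the lowest Nyquist frequency and the highest MTF at that frequency (high-contrast aliasing), while the half-pixel-sampled case yields zero MTF at its Nyquist frequency, as described above.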

Figure 2.18 MTF for a contiguous focal-plane array.



Figure 2.19 MTF for a contiguous focal-plane array with samples at half-pixel intervals in x
and y directions.

Now we explore analogous sampling situations with a linear array of
scanned detectors. The effective scan velocity relates temporal and spatial
variables, and similarly relates temporal and spatial frequencies. Given a
single detector scanned in the horizontal direction, the instantaneous field of
view (IFOV) is that portion of the object scene being looked at by the detector
at one instant in time. The IFOV is the moving footprint of the detector, and it
has (spatial) dimensions of distance. As seen in Fig. 2.20, when the IFOV
scans across a localized (delta function) feature in the scene, there will be a
signal on the detector for a time period called the dwell time td = IFOV/vscan
while the IFOV continuously moves across this feature.
We are free to decide the sampling interval for the time-domain
analog waveform arising from a continuously scanned detector. Given
td = IFOV/vscan, the waveform sample spacings in time are directly related to
spatial samplings in the scene’s spatial variable x by Δt = Δx/vscan. Thus,
temporal frequencies and spatial frequencies are related by

Figure 2.20 Two positions of a detector: at the start and at the end of the overlapping of the
IFOV with a delta function scene feature.

f [Hz] = vscan [mm/sec] × ξ [cy/mm]. (2.14)

Typical practice in scanned sensor systems is to sample the analog signal from
the detector at time intervals equivalent to two samples per detector width,
known as “twice per dwell.” Finer sampling is certainly possible, but we
obtain the maximum increase in image quality by going from one to two
samples per dwell. Beyond that spatial sampling rate, there is a diminishing
return in terms of image quality. Let us see why two samples per dwell has
been such a popular operating point, with reference to Fig. 2.21. With a
sampling interval of w/2, the x-direction Nyquist frequency has been increased
to ξ = 1/w. This higher aliasing frequency is beneficial because the usable
bandwidth has been increased, but the other factor is that now the MTF of the
detector footprint goes through its first zero at the Nyquist frequency. Because
the transfer function is zero at Nyquist, the image artifacts arising from
aliasing are naturally suppressed. A final advantage is that the x-direction
MTF has been increased because of the broader sampling-MTF sinc function,
which has its first zero at ξ = 2/w. Since the detectors are contiguous in the
y direction, the aliasing frequency is η = 0.5/w and the overall MTF as a
function of η is just the sinc-squared function seen in the analysis of the
contiguous FPA in Fig. 2.18.
In Fig. 2.22 we extend this analysis to a pair of staggered linear arrays
offset by half of the detector-to-detector spacing, which is a commonly used
configuration. Once again, we must perform additional data processing to
interlace the information gathered from both sensor arrays into a high-
resolution image. The advantage we gain is that an effective twice-per-dwell
sampling in both the x and y directions is achieved, with wider η-direction
MTF, higher η-direction aliasing frequency, and additional suppression of

Figure 2.21 MTF for a scanned linear array of sensors.



Figure 2.22 MTF for a staggered pair of scanned linear arrays.

aliasing artifacts compared to the situation in Fig. 2.21. Although it is possible
to interlace more than two arrays to increase the signal-to-noise ratio in
the context of time delay-and-integration (TDI), the greatest image-quality
benefit in terms of MTF is gained in going from a single array to two.
As a final example of this type of analysis, we consider a fiber array
(Fig. 2.23), which can transmit an image-irradiance distribution over the
length of the fibers in the bundle.11,12 If we assume that the arrangement of the
fibers is preserved between the input and output faces (a so-called coherent
array), we can describe the MTF as the product of the MTF of the fiber
footprint (because there is no spatial resolution within an individual fiber,
only a spatial averaging of irradiance) and the sampling MTF (which depends
on the details of the arrangement of the fibers). In this example, we consider a
hexagonally packed array with a sampling lattice having different sampling
intervals in the x and y directions: an x-sampling interval of D/2 and a
y-sampling interval of √3 D/2. This yields different Nyquist frequencies and
different sampling MTFs in the x and y directions. Other spatial arrangements
of the fibers are possible. Once the center-to-center spacing of the fibers in
each direction is fixed, the sampling MTF along ξ and η can be found from
Eq. (2.13), under the assumption that any scene irradiance distribution is
randomly positioned with respect to the fibers. Calculation of the footprint
MTF requires a two-dimensional Fourier transform of the fiber footprint
because it is not a separable function in x and y. This gives us the Bessel-
function footprint MTF shown in Fig. 2.23.
So far, we have considered only the nearest-neighbor sampling distances
along the x and y directions. If we extend this sampling to any direction in the
x-y plane, we can extend our analysis to any sampled-image system where

Figure 2.23 MTF for a coherent fiber-bundle array.

directions other than x and y are important for image formation, such as
hexagonal focal-plane arrays, fiber bundles, and laser printers. Once the two-
dimensional sampling MTF is in hand, we multiply it by the two-dimensional
Fourier transform of the pixel footprint to yield the overall sampling-and-
averaging array MTF.13,14 A one-dimensional sinc-function sampling MTF
along the lines of Eq. (2.13) applies to the spacing between the nearest
neighbors in any direction because the distance between samples in any
direction can be modeled as a rect-function impulse response (assuming a
random position of the scene with respect to the sampling sites). The width of
the rect function depends on the particular direction θ in which the next-
nearest neighbor is encountered:

hsampling(θ) = rect[x/xsamp(θ)] (2.15)

and

MTFsampling(ξθ) = |F{rect[x/xsamp(θ)]}| = |sinc[ξθ xsamp(θ)]|, (2.16)

where ξθ is understood to be a one-dimensional spatial frequency parameterized
on the θ direction. Directions with more widely spaced next-nearest
neighbors will have poorer image quality. For a finite-dimension sampling
array, a nearest neighbor does not exist at all in some directions, so the MTF
is necessarily zero in that direction (because the image data of that particular
spatial frequency cannot be reconstructed from the samples). We find that the
MTF of Eq. (2.16) is thus a discontinuous function of angle (Fig. 2.24).
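As a numerical illustration of Eq. (2.16), the sketch below (our own, using the hexagonally packed geometry of Fig. 2.23 with fiber diameter D) evaluates the one-dimensional sampling MTF for the different nearest-neighbor spacings along x and y:

```python
import math

def sampling_mtf(xi, x_samp):
    """Eq. (2.16): |sinc(xi * x_samp)| for nearest-neighbor spacing x_samp
    along the chosen direction."""
    arg = xi * x_samp
    return 1.0 if arg == 0 else abs(math.sin(math.pi * arg) / (math.pi * arg))

D = 1.0  # fiber diameter (arbitrary units)
for label, interval in [("x", D / 2), ("y", math.sqrt(3) * D / 2)]:
    nyq = 1 / (2 * interval)
    print(label, "Nyquist =", round(nyq, 3),
          "sampling MTF at Nyquist =", round(sampling_mtf(nyq, interval), 3))
```

Both directions give sinc(1/2) = 2/π ≈ 0.637 at their own Nyquist frequencies, but those frequencies differ because the spacings differ; a direction with no nearest neighbor at all contributes zero MTF, producing the discontinuity of Fig. 2.24.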

Figure 2.24 The nearest-neighbor distance, and therefore the sampling MTF, is a
discontinuous function of angle θ.

2.3 Crosstalk
Crosstalk arises when the signal of a particular detector on a FPA contributes
to or induces a spurious signal on its neighbor. Origins of crosstalk include
charge-transfer inefficiency, photogenerated carrier diffusion, inter-pixel
capacitance caused by coupling of close capacitors inherent in FPA pixel
structures, and channel-to-channel crosstalk caused by the wiring harness and
readout electronics.
One way to measure inter-pixel crosstalk is by illuminating a single pixel of
the FPA with an image of an x-ray source. The x-ray photons will generate
charge carriers that give a signal from the illuminated pixel (and adjacent pixels).
If we use a short-wavelength source, we can generate a spot that is smaller than
the pixel of the FPA to be measured, which would usually not be possible
because of diffraction if we chose a source that was within the response band of
the FPA. As Fig. 2.25 shows, crosstalk can be approximately modeled with an
impulse response of a Gaussian or negative-exponential form. Typically, there is
not a large number of sample points because only the few closest channels will
have an appreciable crosstalk signal. Thus, we have some flexibility in picking
the fitting function, as long as the samples that are present are appropriately
represented. If we Fourier transform the impulse response, we obtain a crosstalk
MTF component. We then cascade this crosstalk MTF component with other
system MTF contributions such as footprint and sampling MTFs.
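A sketch of that procedure (our own illustration, not a measured kernel): fit a symmetric negative-exponential impulse response across a few neighboring pixels, transform it, and read off a crosstalk MTF that can be cascaded with the other components:

```python
import math

def crosstalk_mtf(xi, a, pitch, n_neighbors=20):
    """MTF of a symmetric negative-exponential crosstalk kernel h(k) = a**|k|
    on pixels k = -n..n at spacing `pitch` (one assumed fitting form for the
    response sketched in Fig. 2.25), normalized to unity at dc."""
    num = 1.0 + 2 * sum(a ** k * math.cos(2 * math.pi * xi * k * pitch)
                        for k in range(1, n_neighbors + 1))
    den = 1.0 + 2 * sum(a ** k for k in range(1, n_neighbors + 1))
    return abs(num) / den

# Illustrative values: 10% coupling per neighbor, unit pixel pitch.
print(crosstalk_mtf(0.0, 0.1, 1.0))   # unity at dc
print(crosstalk_mtf(0.5, 0.1, 1.0))   # rolled off at the Nyquist frequency
```

A Gaussian kernel could be substituted in the same way; with only a few significant sample points, either fitting form can represent the measured neighbors adequately.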

Figure 2.25 Modeling the inter-pixel crosstalk response. Only the central sensor is
illuminated using a short-wavelength source.

Figure 2.26 (left) Image of the charge injection pattern on the FPA. (right) Image of a
magnified view of the signal read out from nearby pixels (adapted from Ref. 15).

In Ref. 15 the channel-to-channel crosstalk was measured for an IR FPA
by directly injecting charge into a single channel and measuring the signal at
all other channels. Figure 2.26 shows examples of the resulting impulse
response function, which are of significant extent in the horizontal direction.
Charge-transfer inefficiency,16 seen in charge-coupled devices (CCDs), is
caused by incomplete transfer of charge packets along the CCD delay line. A
smearing of the image occurs that is spatial-frequency dependent. The image
artifact seen is analogous to a motion blur in the along-transfer direction. For
n charge transfers, ε fractional charge left behind at each transfer, and Δx
pixel spacing, the crosstalk MTF from charge-transfer inefficiency (Fig. 2.27)
is given by
MTF(ξ) = exp{−nε[1 − cos(2πξΔx)]}   for 0 ≤ ξ ≤ 1/(2Δx). (2.17)

Figure 2.27 Crosstalk MTF caused by charge-transfer inefficiency.
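Equation (2.17) is straightforward to evaluate; the short sketch below uses illustrative values (n = 500 transfers, ε = 1e-4) that are assumptions of ours rather than numbers from the text:

```python
import math

def cti_mtf(xi, n, eps, dx):
    """Eq. (2.17): crosstalk MTF from charge-transfer inefficiency.
    n transfers, eps fractional charge left per transfer, dx pixel spacing;
    valid for 0 <= xi <= 1/(2*dx)."""
    return math.exp(-n * eps * (1.0 - math.cos(2.0 * math.pi * xi * dx)))

n, eps, dx = 500, 1e-4, 1.0
print(cti_mtf(0.0, n, eps, dx))        # unity at dc
print(cti_mtf(0.5 / dx, n, eps, dx))   # worst case, at the Nyquist frequency
```

The product nε controls the loss: the MTF at Nyquist is exp(−2nε), so 500 transfers at ε = 1e-4 cost roughly 10% of the modulation there.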



Charge-carrier diffusion also leads to crosstalk effects.17 The absorption
of photons in a semiconductor material is wavelength dependent and is high
for short-wavelength photons and decreases for longer-wavelength photons as
the band-gap energy of the material is approached. With less absorption,
long-wave photons penetrate deeper into the material and their photogener-
ated charges must travel farther to be collected. The longer propagation path
leads to more charge-carrier diffusion and hence more charge-packet
spreading, and ultimately poorer MTF for long-wavelength illumination.
Figure 2.28 shows a family of MTF curves parameterized on wavelength for a
Si-based FPA; the decrease in MTF we see for longer wavelengths is caused
mainly by charge-carrier diffusion.
We see another example of charge-carrier diffusion MTF18 in signal-
processing-in-the-element (SPRITE) detectors, where an optical image is
scanned along a long horizontal semiconductor detector to increase the dwell
time and hence the signal-to-noise ratio of the detected signal. The increased
dwell time allows additional charge-carrier diffusion, which reduces the MTF,
as seen in Fig. 2.29.

Figure 2.28 Variation of carrier-diffusion MTF with illumination wavelength for a Si FPA.

Figure 2.29 MTF vs integration length for a SPRITE detector (adapted from Ref. 18).

2.4 Electronic-Network MTF

Electronic networks are essential to electro-optical imagers. They are present
in data-acquisition (i.e., frame grabbers),19 signal-processing, and display
subsystems, and establish a baseline noise level. To cast the electronics
transfer function as an MTF and to cascade it with the MTFs for other
subsystems, we must convert temporal frequency [Hz] into spatial frequency.
As seen in Fig. 2.20, these frequencies are related by a quantity having units of
an effective scan velocity vscan in either image-plane spatial frequency or
object-space angular spatial frequency:

f [Hz] = vscan,image-plane [mm/s] × ξ [cy/mm] (2.18)

or
f [Hz] = vscan,angular [mrad/s] × ξ [cy/mrad]. (2.19)

We can easily visualize the meaning of scan velocity for a scanned-sensor
system such as that seen in Fig. 2.1 because the IFOV is actually moving
across the object plane. It is not as easy to visualize scan velocity for a staring
system like that seen in Fig. 2.2 because there is no motion of the IFOV.
However, we can calculate a quantity having units of scan velocity if we know
the field of view and the frame rate. In practice, it is not necessary to explicitly
calculate the scan velocity to convert from temporal to spatial frequencies. We
can determine the multiplicative factor experimentally using an electronic
spectrum analyzer as seen in Fig. 2.30. First, we set up a bar target of known
fundamental frequency that will create an image-plane spatial frequency
which we can calculate (knowing the optical magnification) or which we can
measure directly from the output signal (knowing the pixel-to-pixel spacing of
the detector array). Taking the output video signal from the detector array
into the spectrum analyzer will give us a readout of the electrical frequency
corresponding to the fundamental image-plane spatial frequency of the bar
target.
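A minimal sketch of this calibration (all numbers are illustrative assumptions, not values from the text): the ratio of the measured electrical fundamental to the known image-plane spatial fundamental is the effective scan velocity, which then maps any other electrical frequency to a spatial frequency through Eq. (2.18):

```python
# Assumed example values: a bar target whose image-plane fundamental is
# known, and the electrical frequency of that fundamental as read from the
# spectrum analyzer of Fig. 2.30.
xi_fundamental = 2.0        # cy/mm, from target geometry and magnification
f_measured = 10_000.0       # Hz, peak location on the spectrum analyzer

v_scan = f_measured / xi_fundamental   # mm/s, effective scan velocity

def to_spatial(f_hz):
    """Convert an electrical frequency to image-plane spatial frequency."""
    return f_hz / v_scan    # cy/mm

print(v_scan)               # 5000.0
print(to_spatial(25_000.0)) # 5.0
```

Once v_scan is determined, the entire electronics transfer function measured in Hz can be relabeled in cy/mm and cascaded with the optical and detector MTFs.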
The transfer function of an electronic network can be tailored in ways that
the transfer function of an optical system cannot. We can implement a boost
filter that preferentially amplifies a particular band of frequencies, which can
help to compensate for losses in modulation depth incurred in the optical

Figure 2.30 Experimental determination of temporal-to-spatial frequency correspondence.

portion of the system. The usefulness of a boost filter is limited by the effects
of electronics noise. At any given frequency, an ideal boost filter would
amplify signal and noise equally, but in practice, a boost filter increases the
electrical noise-equivalent bandwidth and hence decreases the image signal-to-
noise ratio (SNR).20 Both a high MTF and a high SNR are desirable, so in the
design of a boost filter we need to decide how much gain to use and what
frequencies we want to emphasize.
An image-quality criterion that we can use to quantify this tradeoff is the
MTF area (MTFA), which has been validated by field trials to correlate well
with image detectability.21 MTFA is the area between the MTF curve and the
noise-equivalent modulation (NEM) curve. The NEM characterizes the
electronics noise in terms of modulation depth, being defined as the amount of
modulation depth needed to yield an SNR of unity. The ratio of MTF to
NEM at any spatial frequency can be interpreted as an SNR. Because the
electronics noise is frequency dependent, the NEM is usually a function of
spatial frequency. A convenient representation is to plot the MTF and the
NEM on the same graph, as seen in Fig. 2.31. The limiting resolution is the
spatial frequency where the curves cross.

Figure 2.31 Relationship of MTF, NEM, and MTFA.



The power spectral density (PSD) is a common way to describe the
frequency content of the electronics noise. The PSD is expressed in units of
W/Hz. Because the PSD is in terms of power and the NEM, being in
modulation-depth units, is proportional to the voltage, we can relate NEM
and PSD by
NEM(ξ) = C(ξ) √PSD(ξ), (2.20)

where the frequency-dependent proportionality factor C(ξ) accounts for the
display and observer. Taking MTFA as the quantity to be maximized, we can
write the MTFA before application of a boost filter:

MTFAbefore-boost = ∫_{ξ1}^{ξ2} [MTF(ξ) − NEM(ξ)] dξ. (2.21)

An ideal boost filter with gain function B(ξ) amplifies signal and noise equally
at any frequency, so after the boost, the MTFA becomes

MTFAafter-boost = ∫_{ξ1}^{ξ2} B(ξ) [MTF(ξ) − NEM(ξ)] dξ. (2.22)

From Eq. (2.22) we see that MTFA is increased, enhancing image
detectability, if the range of application of the boost is restricted to those
frequencies for which the system MTF is greater than the NEM. This
confirms that we cannot amplify noisy imagery and obtain a useful result. The
MTFA actually achieved will be somewhat lower than Eq. (2.22) indicates. To
keep the increase in noise-equivalent bandwidth (and the consequent
degradation of SNR) to a minimum, the boost should be implemented at
only those frequencies most important to the imaging task, and the magnitude
of the boost should be limited to avoid oscillation artifacts at abrupt light-to-
dark transitions in the image.22
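The tradeoff can be made concrete with a small numerical sketch of Eqs. (2.21) and (2.22). The MTF and NEM curves below are invented for illustration only; the boost gain is applied only where the MTF exceeds the NEM, which is the condition derived above for the boost to increase MTFA:

```python
def mtfa(xs, mtf, nem, boost=None):
    """Trapezoidal estimate of Eqs. (2.21)/(2.22): the integral of
    B(xi) * [MTF(xi) - NEM(xi)] over the band spanned by xs."""
    b = boost if boost is not None else [1.0] * len(xs)
    ys = [bi * (m - n) for bi, m, n in zip(b, mtf, nem)]
    return sum(0.5 * (ys[i] + ys[i + 1]) * (xs[i + 1] - xs[i])
               for i in range(len(xs) - 1))

# Invented curves: linearly falling MTF, linearly rising NEM.
xs = [i / 100 for i in range(101)]             # spatial frequency, 0..1
mtf = [1.0 - x for x in xs]
nem = [0.05 + 0.3 * x for x in xs]
boost = [1.5 if m > n else 1.0 for m, n in zip(mtf, nem)]
print(mtfa(xs, mtf, nem))         # baseline MTFA
print(mtfa(xs, mtf, nem, boost))  # larger: boost confined to MTF > NEM
```

Applying the same gain beyond the MTF/NEM crossing would subtract area instead of adding it, which is the numerical expression of the statement that noisy imagery cannot usefully be amplified.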

2.5 Conclusion
We can apply a transfer-function analysis that was originally developed for
classical optical systems to electro-optical systems by generalizing the
assumptions of linearity and shift invariance. Linearity is not strictly valid
for systems that have an additive noise level because image waveforms must
be of sufficient irradiance to overcome the noise before they can be considered
to add linearly. The definition of NEM allows us to consider a spatial-
frequency-dependent signal-to-noise ratio rather than simply a transfer
function. Shift invariance is not valid for sampled-data systems; however, to

preserve the convenience of a transfer-function analysis, we consider the
average response of the system to an ensemble of image-irradiance
waveforms, each with a random position with respect to the array of
sampling sites. With the above-mentioned modifications, we can apply a
transfer-function approach to a wider range of situations.

References
1. J. Gaskill, Linear Systems, Fourier Transforms, and Optics, John Wiley &
Sons, New York (1978).
2. G. D. Boreman and A. Plogstedt, “Spatial filtering by a nonrectangular
detector,” Appl. Opt. 28(6), 1165–1168 (1989).
3. K. J. Barnard and G. D. Boreman, “Modulation transfer function of
hexagonal staring focal plane arrays,” Opt. Eng. 30(12), 1915–1919 (1991)
[doi: 10.1117/12.56012].
4. J. E. Greivenkamp, “Color-dependent optical prefilter for suppression of
aliasing artifacts,” Appl. Opt. 29(5), 676–684 (1990).
5. S. K. Park, R. Schowengerdt, and M.-A. Kaczynski, “Modulation-
transfer-function analysis for sampled image systems,” Appl. Opt. 23(15),
2572–2582 (1984).
6. A. Daniels, G. D. Boreman, A. Ducharme, and E. Sapir, “Random
transparency targets for modulation transfer function measurement in the
visible and IR,” Opt. Eng. 34(3), 860–868 (1995) [doi: 10.1117/12.190433].
7. K. J. Barnard, E. A. Watson, and P. F. McManamon, “Nonmechanical
microscanning using optical space-fed phased arrays,” Opt. Eng. 33(9),
3063–3071 (1994) [doi: 10.1117/12.178261].
8. K. J. Barnard and E. A. Watson, “Effects of image noise on
submicroscan interpolation,” Opt. Eng. 34(11), 3165–3173 (1995)
[doi: 10.1117/12.213572].
9. E. A. Watson, R. A. Muse, and F. P. Blommel, “Aliasing and blurring in
microscanned imagery,” Proc. SPIE 1689, 242–250 (1992) [doi: 10.1117/12.137955].
10. J. D. Fanning and J. P. Reynolds, “Target identification performance of
superresolution versus dither,” Proc. SPIE 6941, 69410N (2008) [doi: 10.
1117/12.782274].
11. L. Huang and U. L. Osterberg, “Measurement of cross talk in order-
packed image-fiber bundles,” Proc. SPIE 2536, 480–488 (1995) [doi:
10.1117/12.218456].
12. A. Komiyama and M. Hashimoto, “Crosstalk and mode coupling
between cores of image fibers,” Electron. Lett. 25(16), 1101–1103 (1989).
13. O. Hadar, D. Dogariu, and G. D. Boreman, “Angular dependence of
sampling modulation transfer function,” Appl. Opt. 36(28), 7210–7216
(1997).

14. O. Hadar and G. D. Boreman, “Oversampling requirements for
pixelated-imager systems,” Opt. Eng. 38(5), 782–785 (1999) [doi: 10.1117/1.602044].
15. A. Waczynski, R. Barbier, S. Cagiano, et al., “Performance overview
of the Euclid infrared focal plane detector subsystems,” Proc. SPIE 9915,
991511 (2016) [doi: 10.1117/12.2231641].
16. D. F. Barbe, “Imaging devices using the charge-coupled concept,” Proc.
IEEE 63(1), 38–67 (1975).
17. E. G. Stevens, “A unified model of carrier diffusion and sampling
aperture effects on MTF in solid-state image sensors,” IEEE Trans.
Electron Devices 39(11), 2621–2623 (1992).
18. G. D. Boreman and A. E. Plogstedt, “Modulation transfer function and
number of equivalent elements for SPRITE detectors,” Appl. Opt. 27(20),
4331–4335 (1988).
19. H. A. Beyer, “Determination of radiometric and geometric characteristics
of frame grabbers,” Proc. SPIE 2067, 93–103 (1993) [doi: 10.1117/12.162117].
20. P. Fredin and G. D. Boreman, “Resolution-equivalent D* for SPRITE
detectors,” Appl. Opt. 34(31), 7179–7182 (1995).
21. J. Leachtenauer and R. Driggers, Surveillance and Reconnaissance
Imaging Systems, Artech House, Boston, pp. 191–193 (2001).
22. P. Fredin, “Optimum choice of anamorphic ratio and boost filter
parameters for a SPRITE based infrared sensor,” Proc. SPIE 1488,
432–442 (1991) [doi: 10.1117/12.45824].
Chapter 3
Point-, Line-, and Edge-Spread
Function Measurement of MTF
There are several ways we can measure MTF using targets that have an
impulsive nature, each with positive aspects as well as drawbacks. In this
chapter, we first develop the mathematical relationships between the data and
the MTF for the point-spread function (PSF), line-spread function (LSF),
and edge-spread function (ESF). One item of notation in this section is that
we use * to denote a one-dimensional convolution, and ** to denote a two-
dimensional convolution. We then compare the measurement techniques and
consider options for increasing the signal-to-noise ratio and extending the
spatial-frequency range of the measurements.

3.1 Point-Spread Function (PSF)


In the idealized arrangement of Fig. 3.1, we use a point source, represented
mathematically by a two-dimensional delta function, as the object:

f(x, y) = δ(x, y). (3.1)

We initially assume that the image receiver is continuously sampled; that is,
we do not need to consider the finite size of pixels nor the finite distance
between samples. We will address these aspects of the measurement-
instrument response later in this chapter. Here we assume that we can
measure the image-irradiance distribution g(x,y) to the necessary spatial
precision. If the object is truly a point source, the two-dimensional image-
irradiance distribution g(x,y) equals the impulse response h(x,y). This is also
called the point-spread function (PSF):

g(x, y) = h(x, y) ≡ PSF(x, y). (3.2)

The PSF can be Fourier transformed in two dimensions to yield the
two-dimensional OTF. Taking the magnitude yields the MTF:

Figure 3.1 Point-spread-function measurement configuration.

|FF{PSF(x, y)}| = MTF(ξ, η). (3.3)

We can evaluate this two-dimensional transfer function along any desired
profile, for example, MTF(ξ, 0) or MTF(0, η).

3.2 Line-Spread Function (LSF)


Figure 3.2 is a schematic of the measurement setup for the line-spread
function (LSF), also called the line response. Instead of the point source used
in Fig. 3.1, the LSF test uses a line-source object. A line source acts as a delta
function in the x direction and is constant in the y direction (sufficient to
overfill the measurement field of view of the lens under test):

f(x, y) = δ(x) 1(y). (3.4)

The two-dimensional image irradiance distribution g(x,y) is the LSF, which is
a function of one spatial variable (the same variable as that of the impulsive
behavior of the line source—in this case, the x direction):

g(x, y) ≡ LSF(x). (3.5)

Figure 3.2 Line-spread-function measurement configuration.



Figure 3.3 The LSF is the two-dimensional convolution of the line-source object with the
PSF (adapted from Ref. 1 with permission; © 1978 John Wiley & Sons).

Each point in the line source produces a PSF in the image plane. These
displaced PSFs overlap in the vertical direction, and their sum forms the LSF.
As seen schematically in Fig. 3.3, the LSF is the two-dimensional convolution
(denoted by **) of the line-source object with the impulse response of the
image-forming system:

g(x, y) ≡ LSF(x) = f(x, y) ** h(x, y) = [δ(x) 1(y)] ** PSF(x, y). (3.6)

The y-direction convolution with a constant in Eq. (3.6) is equivalent to an
integration over the y direction:
g(x, y) = LSF(x) = ∫_{−∞}^{∞} h(x, y′) dy′, (3.7)

which verifies that the LSF is a function of x alone. It must be independent of
y because the object used for the measurement is independent of y. The object,
being impulsive in one direction, provides information about only one spatial-
frequency component of the transfer function. We can find one profile of the
MTF from the magnitude of the one-dimensional Fourier transform of the
line response:

|F{LSF(x)}| = MTF(ξ, 0). (3.8)

We can obtain other profiles of the transfer function by reorienting the line
source. For instance, if we turn the line source by an in-plane angle of 90 deg,
we get

f(x, y) = 1(x) δ(y), (3.9)

which yields a y-direction LSF that transforms to MTF(0, η).



Figure 3.4 Comparison of the x-direction functional forms of the PSF and LSF for a
diffraction-limited system. The Airy-disc radius is 1.22 λ(F/#).

It is important to note that because of the summation along the constant
direction in the line-source image, the LSF and PSF have different functional
forms. The LSF(x) is not simply the x profile of the PSF(x,y):

LSF(x) ≠ PSF(x, 0). (3.10)

In Fig. 3.4 we compare the PSF and LSF for a diffraction-limited system. We
see that, while the PSF(x,0) has zeros in the pattern, the LSF(x) does not.
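The difference is easy to verify numerically. The sketch below (our own; it builds J1 from its power series and performs a brute-force y-integration of Eq. (3.7)) evaluates a diffraction-limited PSF and its LSF at the first dark ring, in radial units where the Airy-disc radius is 1.22:

```python
import math

def j1(x):
    """Bessel function J1 via its power series (adequate for |x| < ~20)."""
    term = x / 2.0
    total = term
    for k in range(1, 40):
        term *= -(x * x / 4.0) / (k * (k + 1))
        total += term
    return total

def airy_psf(r):
    """Diffraction-limited PSF; r in units where the first zero is at 1.22."""
    if r == 0:
        return 1.0
    u = math.pi * r
    return (2.0 * j1(u) / u) ** 2

def lsf(x, half_width=6.0, n=1200):
    """Eq. (3.7): integrate the PSF over y by midpoint summation."""
    dy = 2.0 * half_width / n
    return sum(airy_psf(math.hypot(x, -half_width + (i + 0.5) * dy)) * dy
               for i in range(n))

x0 = 1.22                          # first zero of the PSF profile
print(airy_psf(x0))                # essentially zero
print(lsf(x0) / lsf(0.0))          # clearly nonzero: the LSF has no zeros
```

At x = 1.22 the PSF profile passes through its first zero, but the y-integration collects flux from off-axis points of the ring structure, so the LSF stays positive, consistent with Fig. 3.4.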

3.3 Edge-Spread Function (ESF)


In Fig. 3.5 we see the configuration for the measurement of the edge-spread
function. We use an illuminated knife-edge source (a step function) as the
object:

f(x, y) = step(x) 1(y). (3.11)

The ESF is the convolution of the PSF with the unit-step function:

g(x, y) ≡ ESF(x) = PSF(x, y) ** [step(x) 1(y)]. (3.12)

The y convolution of the PSF with a constant produces an LSF, and the x
convolution with the step function produces a cumulative integration, as seen
schematically in Fig. 3.6:

Figure 3.5 Edge-spread-function measurement configuration.

Figure 3.6 The ESF is the two-dimensional convolution of the edge-source object with the
PSF (adapted from Ref. 1 with permission; © 1978 John Wiley & Sons).

ESF(x) = PSF(x, y) ** [step(x) 1(y)] = ∫_{−∞}^{x} LSF(x′) dx′. (3.13)

The ESF is a cumulative, monotonically increasing function. Figure 3.7
illustrates the ESF for a diffraction-limited system.

Figure 3.7 Plot of the ESF for a diffraction-limited system. The Airy-disc radius is 1.22 λ(F/#).

We can understand the ESF in terms of a superposition of LSFs.2,3 Each
vertical strip in the open part of the aperture produces an LSF at its
corresponding location in the image plane. These displaced LSFs overlap in
the horizontal direction and sum to form the ESF. We can write this
superposition as

ESF(x) ≈ Σ_{i=1}^{N} LSF(x − xi). (3.14)

In the limit of small displacements, the summation becomes an integral,
consistent with Eq. (3.13). To convert ESF data to the MTF, we first take the
spatial derivative of the ESF data to invert the integral:

(d/dx){ESF(x)} = (d/dx) ∫_{−∞}^{x} LSF(x′) dx′ = LSF(x), (3.15)

as seen schematically in Fig. 3.8.


With the LSF in hand, the magnitude of the one-dimensional Fourier
transform yields one profile of the MTF by means of

|F{(d/dx) ESF(x)}| = MTF(ξ, 0). (3.16)

We can obtain any one-dimensional profile of the MTF by appropriately
reorienting the knife edge.
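The ESF-to-MTF chain of Eqs. (3.15) and (3.16) can be sketched numerically. The example below (our own) synthesizes an ESF by blurring a step with a Gaussian LSF of standard deviation σ, differentiates it with a central difference, and evaluates the Fourier magnitude at one frequency; for a Gaussian LSF the result should match the analytic exp[−2(πσξ)²]:

```python
import math

def mtf_from_esf(xs, esf, xi):
    """Eqs. (3.15)-(3.16): differentiate the ESF to get the LSF, then take the
    magnitude of its discrete Fourier transform at frequency xi, normalized
    to unity at dc."""
    dx = xs[1] - xs[0]
    lsf = [(esf[i + 1] - esf[i - 1]) / (2.0 * dx) for i in range(1, len(esf) - 1)]
    re = sum(v * math.cos(2.0 * math.pi * xi * x) for v, x in zip(lsf, xs[1:-1]))
    im = sum(v * math.sin(2.0 * math.pi * xi * x) for v, x in zip(lsf, xs[1:-1]))
    return math.hypot(re, im) / sum(lsf)

# Synthetic ESF: a step blurred by a Gaussian LSF of standard deviation sigma.
sigma = 0.1
xs = [-3.0 + 0.01 * i for i in range(601)]
esf = [0.5 * (1.0 + math.erf(x / (sigma * math.sqrt(2.0)))) for x in xs]
print(mtf_from_esf(xs, esf, 1.0))                 # recovered MTF at xi = 1
print(math.exp(-2.0 * (math.pi * sigma) ** 2))    # analytic value
```

Note that the central-difference step is where measurement noise would be accentuated in practice, which is the drawback of the ESF method discussed in the next section.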

3.4 Comparison of PSF, LSF, and ESF


When we compare the advantages and disadvantages of the PSF, LSF, and
ESF tests, we find that the PSF test provides the entire two-dimensional OTF
in one measurement. The major drawback to the PSF test is that point-source
objects often provide too little flux to be conveniently detected. This is
particularly true in the infrared portion of the spectrum, where we typically

Figure 3.8 Spatial derivative of ESF data produces an LSF.



use blackbodies as the flux sources. Having sufficient flux is usually not an
issue in the visible because hotter sources are typically used.
The LSF method provides more image-plane flux than does the PSF test.
The ESF setup provides even more flux and has the added advantage that a
knife edge avoids slit-width issues. However, the ESF method requires a
spatial-derivative operation, which accentuates noise in the data. If we reduce
noise by convolution with a spatial kernel, the data-smoothing operation itself
has an MTF contribution.
In any wavelength region, we can use a laser source to illuminate the
pinhole. The spatial coherence properties of the laser do not complicate the
interpretation of PSF data if the pinhole is small enough to act as a point
source (a point source is, by definition, spatially coherent regardless of the
coherence of the illumination source). An illuminated pinhole acts as a point source if it is
smaller than both the central lobe of the PSF of the system that illuminates the
pinhole and the central lobe of the PSF of the system under test, geometrically
projected (with appropriate magnification) to the source plane. Even with a
laser-illuminated pinhole, the PSF measurement yields an incoherent MTF
because the irradiance of the PSF is measured rather than the electric field.
For sources of extended spatial dimension, such as those for LSF and ESF
tests, we must ensure that the coherence properties of the illumination do not
introduce interference-fringe artifacts into the data.

3.5 Increasing SNR in PSF, LSF, and ESF Tests


We can use a variety of image-averaging techniques to increase the signal-to-
noise ratio (SNR) in PSF, LSF, and ESF tests. These averaging techniques
can be implemented in either the object plane or the image plane, provided we
take steps to ensure that the data are the same in one direction and we average
over that direction. We usually configure the illumination level in the
measurement apparatus such that the shot noise of the signal dominates the
noise in the measurement. This means that the root-mean-square (rms) noise
grows in proportion to the square root of the signal. In this case, the SNR
increases in proportion to the square root of the signal level. In an averaging
procedure, the signal is proportional to the number of independent samples.
Therefore, we can achieve a considerable SNR advantage if we average a
large number of samples (such as the rows in a typical CCD array). In certain
instances (such as operation in the amplifier-noise limit or another signal-
independent noise floor), the SNR can grow as fast as linearly with respect to
the number of samples.
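The shot-noise scaling argument above can be checked numerically: summing N independent frames multiplies the signal by N but the rms shot noise only by the square root of N, so the SNR grows as the square root of N. The photon level and frame counts below are arbitrary assumptions for the demonstration:

```python
import numpy as np

rng = np.random.default_rng(seed=1)
mean_counts, n_pix = 100, 20_000             # photons per pixel per frame (assumed)

def snr_after_summing(n_frames):
    """SNR of the per-pixel sum of n_frames independent shot-noise-limited frames."""
    frames = rng.poisson(mean_counts, size=(n_frames, n_pix))
    summed = frames.sum(axis=0)              # accumulate independent samples
    return summed.mean() / summed.std()

snr_1 = snr_after_summing(1)                 # roughly sqrt(100) = 10
snr_100 = snr_after_summing(100)             # roughly 10x better
```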

3.5.1 Object- and image-plane equivalence


This technique requires that we match the object and receiver
symmetry for optimum flux collection. We equate higher flux collection to

better SNR. Note, however, that larger detectors generally exhibit more noise
than smaller detectors; the rms noise is proportional to the square root of the
sensor area. This dependence on area reduces the SNR gain somewhat if a
larger detector is not fully illuminated. But, if the collected power (and hence
the signal) is proportional to the detector area and the rms noise is
proportional to the square root of the detector area, the more flux we can
collect, the better our measurement SNR, even if that means we must use a
larger detector.
In the configurations illustrated in Figs. 3.1, 3.2, and 3.5, we assumed a
continuously sampled image receiver, which is analogous to a point receiver
that scans continuously. If we want to obtain PSF data, our test setup must
include a point source and this type of point receiver. The only option to
increase SNR if we use a PSF-test setup is to increase the source brightness or
to average over many data sets.
However, for an LSF measurement, we can accomplish the measurement
in a number of ways, some of which yield a better SNR than others. We can
use a linear source and a point receiver, such as the configuration seen in
Fig. 3.2. This system will give us a better SNR than PSF measurement because
we are using a larger-area source. Equivalently, as far as the LSF data set is
concerned, we can use the configuration seen in Fig. 3.9: a point source and a
slit detector (or a slit in front of a large-area detector). The data acquired are
equivalent to the data for an LSF test because of the averaging in the vertical
direction. Similar to the situation using a line source and a point receiver, this
collects more flux than a PSF test and has a better SNR. However, since we are
using a linear detector, we might also want to use a linear source (Fig. 3.10).
This configuration still provides data for an LSF test, but now the source and
the receiver have the same geometry. This arrangement collects the most flux
and will provide the best SNR of any LSF test setup.
A number of different configurations will work for ESF tests, and some
are better than others from an SNR viewpoint. We begin with a knife-edge
source and a scanned point receiver (Fig. 3.5). We can collect more flux with
the configuration of Fig. 3.11, where the ESF measurement is performed with

Figure 3.9 A PSF test performed with a scanning linear detector produces data equivalent
to an LSF test.

Figure 3.10 An LSF test performed with a linear detector produces a better SNR than
when performed using a point receiver.

Figure 3.11 ESF test setup using a point source and a scanning knife edge with a large-
area detector.

a point source and where a knife edge in front of a large detector serves as the
image receiver. We will obtain a better SNR (and the same ESF data) using
the setup illustrated in Fig. 3.12, which involves a line source and a scanning
knife edge in front of a large-area detector. We can also use a knife-edge
source and a scanning linear receiver (Fig. 3.13) or a knife-edge source and a
scanning knife edge in front of a large-area detector (Fig. 3.14) because the
data set is constant in the vertical direction. The measurement configuration

Figure 3.12 ESF test configuration using a slit source and a scanning knife edge with a
large-area detector.

Figure 3.13 ESF test configuration using an edge source and a scanning linear detector.

Figure 3.14 ESF test configuration using an edge source and a scanning knife edge with a
large-area detector.

of Fig. 3.14 should produce the highest SNR, assuming that the detector is of
an appropriate size to accommodate the image-irradiance distribution (that is,
it is not significantly oversized).

3.5.2 Averaging in pixelated detector arrays


The acquisition of PSF, LSF, and ESF data with a detector array has MTF
implications in terms of pixel size and spatial sampling, as we will see later in
this chapter. But the use of a detector array facilitates data processing that can
increase the SNR by averaging the signal over the row or column directions.
Beginning with the PSF-test configuration seen in Fig. 3.15, we can sum PSF
data along the y direction, which yields an LSF measurement in the x
direction:
$$\mathrm{LSF}(x_i) = \sum_{j=1}^{M} \mathrm{PSF}(x_i, y_j). \tag{3.17}$$

Summing the PSF data along the y direction and accumulating them along the
x direction yields an ESF measurement in the x direction:

Figure 3.15 PSF test configuration using a two-dimensional detector array can be used to
produce PSF, LSF, and ESF data.

$$\mathrm{ESF}(x_{i'}) = \sum_{j=1}^{M} \sum_{i=1}^{i'} \mathrm{PSF}(x_i, y_j). \tag{3.18}$$

Because of signal averaging, the LSF and ESF test data will have a better
SNR than the original PSF test.
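The array bookkeeping of Eqs. (3.17) and (3.18) can be sketched as follows: collapse a PSF image recorded on a detector array to an LSF by summing along the y direction, then accumulate along x to form an ESF. The synthetic Gaussian PSF and the 64 × 64 array size are assumptions used only to exercise the indexing:

```python
import numpy as np

y, x = np.mgrid[-32:32, -32:32]
psf = np.exp(-(x**2 + y**2) / (2.0 * 4.0**2))   # PSF(x_i, y_j) samples on the array

lsf = psf.sum(axis=0)        # Eq. (3.17): sum over the y index j
esf = np.cumsum(lsf)         # Eq. (3.18): accumulate the LSF along x
esf /= esf[-1]               # normalize the ESF to rise from 0 toward 1
```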
Similarly, using a line source oriented along the y direction, summing (or
averaging) the LSF data along y yields an LSF measurement with better
SNR. Accumulating the LSF data along x yields an ESF measurement. If we
sum (or average) the ESF data along y, we obtain an ESF measurement with
better SNR.
However, when using the signal-averaging techniques just described, we
must be sure to subtract any background-signal level in the data. Expressions
such as Eqs. (3.17) and (3.18) assume that the detector data are just the image-
plane flux. Residual dark-background signal at each pixel, even if low-level,
can become significant if many pixels are added. Another item we must
consider is whether or not to window the data before a summation is
performed. Often, we only use data from the central region of the image,
where the flux level is highest and the imaging optic has the best performance.
If the lens under test has field-dependent aberrations, it is particularly
important that we use data from the region of the sensor array that contains
the data from the FOV of interest.
Also, if the signal-processing procedure involves a summation of data
over columns, we must ensure that each column has the same data, i.e., there
is no unintended in-plane angular tilt of a slit or edge source with respect to
columns. In taking a summation, spatial broadening of the measured response
will occur if the slit or edge is not precisely parallel to the columnar structure.
If there is a tilt, two (or more) adjacent columns can receive significant
portions of the signal irradiance. If the tilt is accounted for, and the data from
successive rows are interlaced with the appropriate spatial offset, then the data
do not suffer from unintended broadening. We will take up that issue later in
this chapter.

3.6 Correcting for Finite Source Size


Source size is an issue for PSF and LSF measurements. We must have a finite
source size to have a measurable amount of flux. For a one-dimensional
analysis, recall Eq. (1.1), where f, g, and h are the object irradiance
distribution, image irradiance distribution, and impulse-response irradiance
distribution, respectively:

$$g(x) = f(x) * h(x), \tag{3.19}$$

and Eq. (1.7), where F, G, and H are the object spectrum, the image spectrum,
and the transfer function, respectively:

$$G(\xi) = F(\xi)\,H(\xi). \tag{3.20}$$

If the input object f(x) = δ(x), then the image g(x) is directly the PSF h(x):

$$g(x) = \delta(x) * h(x) = h(x). \tag{3.21}$$

For a delta-function object, the object spectrum F(ξ) = 1, a constant in the frequency domain, and the image spectrum G(ξ) is directly the transfer function H(ξ):

$$G(\xi) = F(\xi)\,H(\xi) = H(\xi). \tag{3.22}$$

Use of a non-delta-function source f(x) effectively bandlimits the input object spectrum. A narrow source f(x) implies a wide object spectrum F(ξ), and a wider source implies a narrower spectrum. The object spectrum F(ξ) will fall off at high frequencies rather than remain constant. Usually, a one-dimensional rect function is convenient to describe the source width, and the corresponding sinc function is convenient to describe the object spectrum:

$$f(x) = \mathrm{rect}(x/w) \tag{3.23}$$

and

$$F(\xi) = \frac{\sin(\pi\xi w)}{\pi\xi w}. \tag{3.24}$$

In the case of a non-delta-function source, we need to divide the measured image spectrum by the object spectrum to solve Eq. (3.20) for H(ξ):

$$H(\xi) = \frac{G(\xi)}{F(\xi)}, \tag{3.25}$$

which is equivalent to a deconvolution of the source width from the image-irradiance data. Obtaining the transfer function by the division of Eq. (3.25) would be straightforward if not for the effects of noise. We cannot measure the image spectrum G(ξ) directly; the measured spectrum is the image spectrum added to the noise spectrum:

$$G_{\mathrm{meas}}(\xi) = G(\xi) + N(\xi), \tag{3.26}$$

where the noise spectrum N(ξ) is defined as the square root of the power spectral density (PSD) of the electronics noise:

$$N(\xi) = \sqrt{\mathrm{PSD}(\xi)}. \tag{3.27}$$

The division of Eq. (3.25) becomes

$$H(\xi) = \frac{G_{\mathrm{meas}}(\xi)}{F(\xi)} = \frac{G(\xi) + N(\xi)}{F(\xi)} = H(\xi) + \frac{N(\xi)}{F(\xi)}, \tag{3.28}$$

which yields valid results at spatial frequencies for which

$$F(\xi) \gg N(\xi), \tag{3.29}$$

such that the last term in Eq. (3.28) is negligible.


For frequencies where the input spectrum is near zero, the deconvolution
operation divides the finite noise spectrum by a very small number. In this
frequency range, the calculated MTF will exhibit significant noise artifacts
and will increase with frequency as seen in Fig. 3.16. The MTF data are
obviously not valid for these frequencies. To extend the frequency range of the
test as far as possible, we want a wide source spectrum, which requires us to
use a narrow input source. There is a practical tradeoff here because a smaller
source gives less flux and a poorer signal-to-noise ratio. We want to use a
source that is sufficiently narrow that the source spectrum has appreciable
magnitude at the upper end of the spatial-frequency band of interest. If we
obtain a poor signal-to-noise ratio with a small source, then we must either
use a brighter source or employ signal-averaging techniques to overcome the
electronics noise.
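The division of Eq. (3.25) and the validity condition of Eq. (3.29) can be sketched as follows: divide the measured image spectrum by the sinc spectrum of a rect source of width w, and mark as invalid any frequency where the source spectrum is no longer well above the noise floor. The frequency band, source width, noise floor, and margin factor are all illustrative assumptions:

```python
import numpy as np

def correct_source_size(g_meas, freqs, w, noise_floor, margin=10.0):
    """Deconvolve a rect source of width w from the measured spectrum."""
    f_source = np.sinc(freqs * w)                    # np.sinc(u) = sin(pi u)/(pi u), Eq. (3.24)
    valid = np.abs(f_source) > margin * noise_floor  # enforce Eq. (3.29): F(xi) >> N(xi)
    h = np.full_like(g_meas, np.nan)                 # NaN marks invalid frequencies
    h[valid] = g_meas[valid] / f_source[valid]       # the division of Eq. (3.25)
    return h, valid

# Example with a perfect system, H(xi) = 1, so G(xi) = F(xi) by Eq. (3.20).
freqs = np.linspace(0.0, 40.0, 401)                  # cycles/mm (assumed band)
w = 0.05                                             # source width in mm (assumed)
g_meas = np.sinc(freqs * w)
h_est, valid = correct_source_size(g_meas, freqs, w, noise_floor=1e-3)
```

Frequencies near the first zero of the source spectrum (here 1/w = 20 cycles/mm) are flagged invalid, which is where the division-by-zero artifacts of Fig. 3.16 would otherwise appear.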

Figure 3.16 Once the source spectrum has fallen to the level of the noise spectrum, MTF-
test results are invalid because of division-by-zero artifacts.

3.7 Correcting for Image-Receiver MTF


In any MTF test, we need an image receiver to acquire the image-irradiance
function g(x,y). We would like to perform this image acquisition on a
continuous basis, with infinitely small pixels and infinitely small spatial-
sampling intervals, which were the original conditions for the development of
the PSF, LSF, and ESF (seen in Figs. 3.1, 3.2, and 3.5). However, from a
practical viewpoint, there is always a finite-sized averaging area for an image-
plane sensor and a finite distance between data samples. The finite dimensions
of the image receiver also contribute to the MTF of the test instrument
according to Eq. (1.9), and those contributions must be divided out from the
measured data in a similar manner to Eq. (3.25) for us to obtain the MTF of
the unit under test.

3.7.1 Finite pixel width


The detector used to acquire the image can be staring or scanning, but the
finite dimension of its photosensitive area results in a convolution of the image
irradiance g(x) with the pixel-footprint impulse response hfootprint(x). The MTF
component of finite-footprint pixels is the same as was discussed previously in
Section 2.1. For most measurement situations, we use a one-dimensional rect
function of width w to describe the pixel footprint, leading to the MTF seen in
Eq. (2.2):
   
$$\mathrm{MTF}_{\mathrm{footprint}}(\xi) = |\mathrm{sinc}(\xi w)| = \left|\frac{\sin(\pi\xi w)}{\pi\xi w}\right|. \tag{3.30}$$

This term is one component of the instrumental MTF that should be divided
out from the measured MTF to obtain the MTF of the unit under test.
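A sketch of this correction, dividing the measured MTF by the |sinc| factor of Eq. (3.30); the pixel width, frequency band, and stand-in MTF of the unit under test are illustrative assumptions:

```python
import numpy as np

def remove_footprint(mtf_measured, freqs, w):
    """Divide out the pixel-footprint MTF of a pixel of width w."""
    footprint = np.abs(np.sinc(freqs * w))   # Eq. (3.30); np.sinc(u) = sin(pi u)/(pi u)
    return mtf_measured / footprint          # meaningful only below the first sinc zero

freqs = np.linspace(0.0, 15.0, 151)          # cycles/mm, well below 1/w (assumed)
w = 0.02                                     # pixel width in mm (assumed)
true_mtf = np.exp(-freqs / 20.0)             # stand-in for the unit under test
measured = true_mtf * np.abs(np.sinc(freqs * w))
recovered = remove_footprint(measured, freqs, w)
```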

3.7.2 Finite sampling interval


For a staring-array image receiver, the sampling interval is the center-to-
center pixel spacing of the array. For a scanning sensor, the sampling interval
is the spatial distance at which successive samples are taken. We can
determine this sampling distance if we know the sampling-time interval used
to sample the analog video waveform and convert from time to distance units,
as discussed in Chapter 2. For manually scanned sensors such as a
photomultiplier tube on a micropositioner, the sampling interval is simply
the distance we move the sensor between samples.
We can apply Eq. (2.13) to determine the sampling MTF, which again
represents an instrumental-MTF component that should be divided out from
the final MTF results. However, note that sampling-MTF effects are not
always present in the data, so we do not always need to divide them out.
Measurement procedures for PSF, LSF, and ESF tests typically involve
alignment of the image with respect to sampling sites. We tweak the alignment
to produce the best-looking image, typically observing the data on a TV
monitor or an oscilloscope as we perform the alignment. In this case, a
sampling MTF does not filter the image data because the image irradiance is
not randomly aligned with the sampling sites. Thus, any correction for a
sampling MTF using the approach of Eq. (3.25) will unrealistically bias the
MTF measurement, giving artificially high values.
The targets used for PSF, LSF, and ESF contain very high frequencies;
therefore, even with fine alignment of the target with respect to the sampling
sites, MTF results are typically only accurate to ξ ≤ 1/(4Δxsamp), half the
Nyquist frequency, because of aliasing artifacts. If we want to measure MTF
at higher frequencies, a good option is to use an oversampled knife-edge test.

3.8 Oversampled Knife-Edge Test


This test requires the introduction of a slight tilt to the knife edge. The data
sets from each line are interlaced with an appropriate subpixel shift,
corresponding to the position of the knife edge with respect to the column.
This creates a measurement data set with a very fine sampling interval,
essentially setting the sampling MTF equal to unity, and pushing ξNyquist to
very high spatial frequencies.
There are some subtle points involved in the implementation of this
measurement.4 First, with the data set in hand, we estimate the position of the
knife edge for each horizontal row of data. Next, we use a least-squares fitting
procedure to fit a line through the individual edge-location estimates. We then
use this fitted line as a more accurate estimate of the edge location for each

Figure 3.17 (left) Registering scans with a tilted knife edge. (a) Sampling grid with the knife
edge skewed from perpendicular. (b) Knife-edge shift in successive scans. (c) Combined
scan with reregistered edges. (right) Tilted-knife-edge scan. (Left diagram reprinted from
Ref. 4.)

row, allowing the proper position registration of the individual ESF data sets.
The superposition of the data sets will thus have a very fine sampling
(Fig. 3.17). The method assumes that the ESF of the system under test is
constant over the measurement region, so the measurement FOV in the along-
edge direction should be small enough that this condition is satisfied.
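The registration procedure just described can be sketched numerically. Everything below is an illustrative assumption rather than a reference implementation: the ideal synthetic edge image, the centroid-based per-row edge estimator, and the one-pixel-over-64-rows tilt suggested by Ref. 4:

```python
import numpy as np

rows, cols = 64, 64
tilt = 1.0 / 64                                   # ~1 pixel of shift over 64 rows
j = np.arange(cols)
true_edge = 30.0 + tilt * np.arange(rows)         # edge column on each row
img = np.array([np.clip(j - e + 0.5, 0.0, 1.0) for e in true_edge])

# Per-row edge estimate: centroid of the row derivative (a discrete LSF).
deriv = np.diff(img, axis=1)
est = (deriv * (j[:-1] + 0.5)).sum(axis=1) / deriv.sum(axis=1)

# Least-squares line through the estimates refines each row's edge location.
slope, intercept = np.polyfit(np.arange(rows), est, 1)
fitted = intercept + slope * np.arange(rows)

# Interlace: order every pixel by its signed distance from the fitted edge.
dist = (j[None, :] - fitted[:, None]).ravel()
order = np.argsort(dist)
esf_x, esf = dist[order], img.ravel()[order]      # finely sampled ESF
```

The resulting sample spacing along the edge normal is far finer than one pixel, which is what pushes the usable frequency range past the native Nyquist frequency of the array.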
Because computation of MTF will require a spatial derivative to be
performed on the ESF data (Fig. 3.8), we may want to smooth and resample
the high-resolution ESF data set with a moving-window average to reduce
noise. A nearest-neighbor smoothing is usually sufficient and will have a
minimal MTF impact. Another technique that we can use to reduce the noise
before taking the derivative is to fit the high-resolution ESF data set to a
suitable functional form such as a cumulative Gaussian, a sigmoid function,
or a low-order polynomial. We can then take the derivative on the functional
form rather than on the data itself.
The other fine point in the design of the measurement involves deciding
what tilt to use. To ensure a uniform distribution of positions of the edge with
respect to the columnar structure, we should use at least a one-pixel difference

Figure 3.18 A knife-edge tilt that is too large can produce a small number of redundant
data-registration positions.

between the top and bottom rows of the scan data. Such fine control of the
angular position of the knife edge is not required, and we can use a somewhat
more pronounced tilt. Reference 4 suggests one pixel of horizontal position
difference of the knife edge over 64 scan lines. This criterion should produce a
good oversampling, and the overlap of data-point positions over an entire data
set of more than 64 rows should be helpful in reducing noise. We should avoid
significantly more pronounced tilts because we want a uniform distribution of
positions of the edge with respect to the columnar structure. Consider the
extreme example of a one-pixel horizontal offset over four scan lines (14-deg
tilt), as shown in Fig. 3.18. If the edge were centered on one pixel and on its
adjacent neighbor four rows down, this situation would result in only four edge
positions in the data set, rather than the desired uniform distribution. Thus, we
want to avoid this possibility entirely by using a sufficiently small tilt that any
such repetition of data would be spaced by many rows. We should avoid
pronounced tilts for another reason—because we want to measure MTF in a
particular direction (which we assume to be horizontal in this discussion), and
using a slight tilt provides the closest approximation to that measurement, in
the context of the tilted knife-edge test.

3.9 Conclusion
There are several ways we can measure MTF using targets that have an
impulsive nature, each with positive aspects as well as drawbacks. For point
sources and slit sources, we need to consider the dimensions of the source
object. Edge sources are a versatile method that can measure MTF past the
Nyquist frequency of a sensor array. We can employ a variety of averaging
methods to increase signal-to-noise ratio, with appropriate caveats to ensure
high-fidelity data sets.

References
1. J. Gaskill, Linear Systems, Fourier Transforms, and Optics, John Wiley &
Sons, New York (1978).
2. B. Tatian, “Method for obtaining the transfer function from the edge
response function,” JOSA 55(8), 1014–1019 (1965).
3. R. Barakat, “Determination of the optical transfer function directly from
the edge spread function,” JOSA 55(10), 1217–1221 (1965).
4. S. E. Reichenbach, S. K. Park, and R. Narayanswamy, “Characterizing
digital image acquisition devices,” Opt. Eng. 30(2), 170–177 (1991)
[doi: 10.1117/12.55783].
Chapter 4
Square-Wave and Bar-Target
Measurement of MTF
Although MTF is defined in terms of the response of a system to sinusoids of
irradiance, we commonly use binary targets in practice because sinusoidal
targets require analog gray-level transmittance or reflectance. Fabricating
targets with analog transmittance or reflectance usually involves photographic
or lithographic processes with spatial resolutions much smaller than the
period of the sinusoid so that we can achieve an area-averaged reflectance or
transmittance. Sinusoidal targets used in MTF testing should have minimal
harmonic distortion so that they present a single spatial frequency to the
system under test. This is difficult to achieve in the fabrication processes.
Conversely, binary targets of either 1 or 0 transmittance or reflectance are
relatively easy to fabricate. We can fabricate binary targets for low spatial
frequencies by machining processes. For targets of higher spatial frequency,
we can use optical-lithography processes of modest resolution because the
metallic films required to produce the binary patterns are continuous on a
micro scale. In this chapter, we will first consider square-wave targets and
then three-bar and four-bar targets. Square-wave targets not only consist of
the fundamental spatial frequency, but also contain higher harmonic terms.
Bar targets contain both higher and lower harmonics of the fundamental.
Because of these harmonics, we must correct modulation-depth measurements
made with binary targets to produce MTF data, either with a series approach
or with digital filtering.

4.1 Square-Wave Targets


Targets that have an infinite number of square-wave cycles are simple to analyze
mathematically. Figure 4.1 shows a portion of an infinite square wave (equal
lines and spaces) and its spectrum. The spectrum consists of a series of delta
functions at dc, the fundamental frequency (the inverse of bar spacing), and
third, fifth, and higher odd harmonics. The one-sided amplitude of the


Figure 4.1 Square-wave target and its spectrum.

harmonics is 2/(πn), which decreases with order number n. For this case, the
amplitude at dc is 0.5, which is the average value of the transmittance waveform.
Of course, we never use an infinite square wave in practice. However, it is
quite feasible to fill the field of view of the system under test with square-wave
targets, using, for instance, a set of Ronchi rulings of appropriate dimension
located at the object plane. In this case, the delta-function spectrum of the infinite
square wave will be convolved with a function that represents the Fourier
transform of the field of view. For targets of high spatial frequency, the resultant
spectral broadening is negligible, and a series representation is still accurate. For
targets of lower spatial frequency, the bar spacing may be an appreciable fraction
of the field of view. In those cases, we may find that the frequency-domain width
of the broadened spectral line is significant compared to the harmonic spacing,
and a series representation is consequently less accurate.
For infinite square-wave targets, we can define a contrast transfer function
(CTF) as a function of the square-wave fundamental spatial frequency:
$$\mathrm{CTF}(\xi_f) = \frac{M_{\mathrm{output}}(\xi_f)}{M_{\mathrm{input,\ square\ wave}}(\xi_f)}. \tag{4.1}$$

The CTF is not a transfer function in the true sense because it is not defined in
terms of sine waves. CTF cannot be cascaded with MTF curves without first
converting CTF to MTF. The modulation depth of the input square wave is usually
1 for all targets in the set, and for an infinite-square-wave target, the maxima of
irradiance Amax are all equal and the minima of irradiance Amin are all equal,
allowing an unambiguous calculation of the output modulation depth Moutput as

$$M_{\mathrm{output}} = \frac{A_{\max} - A_{\min}}{A_{\max} + A_{\min}}. \tag{4.2}$$

We can express the modulation depth and, hence, the CTF at any frequency
as a two-sided summation of Fourier-series harmonic components. These
components are weighted by two multiplicative factors in the summation:
their relative strength in the input waveform and the MTF of the system under
test at each harmonic frequency. This process yields an expression1 for CTF in
terms of MTF:
 
$$\mathrm{CTF}(\xi_f) = \frac{4}{\pi}\left[\mathrm{MTF}(\xi_f) - \frac{\mathrm{MTF}(3\xi_f)}{3} + \frac{\mathrm{MTF}(5\xi_f)}{5} - \cdots\right] \tag{4.3}$$
and inversely, for MTF in terms of CTF:
 
$$\mathrm{MTF}(\xi) = \frac{\pi}{4}\left[\mathrm{CTF}(\xi) + \frac{\mathrm{CTF}(3\xi)}{3} - \frac{\mathrm{CTF}(5\xi)}{5} + \cdots\right]. \tag{4.4}$$

As defined in Eqs. (4.3) and (4.4), both CTF and MTF are normalized to
unity at zero spatial frequency. We can see from Eq. (4.4) that, if we want to use
the series conversion, calculating MTF at any particular spatial frequency
requires us to have CTF data at a series of frequencies that are harmonically
related to the frequency of interest. Typically, the procedure to accomplish this is
to measure the CTF for a sufficient number of fundamental frequencies (over a
range from low frequencies up to where the CTF is negligibly small) so that we
can interpolate a continuous curve between the measured values. This allows us
to find the CTFs at the frequencies needed for computing an MTF curve from
the CTF data.
Owing to the higher harmonics present in a square wave, it is not accurate
to directly take the square-wave CTF measurements as MTF measurements.
A built-in bias makes the CTF higher at all frequencies than the
corresponding MTF. We can see this bias from the first term in the series
summation of Eq. (4.3). This first term, Eq. (4.5), is a high-frequency
approximation to the series because, for high fundamental spatial frequencies,
the MTF at the harmonic frequencies is low:

$$\mathrm{CTF}(\xi_f) \approx (4/\pi)\,\mathrm{MTF}(\xi_f). \tag{4.5}$$

For lower fundamental frequencies, we must include more harmonic terms for
an accurate representation because the MTF at the harmonic frequencies is
higher, but the CTF always exceeds the MTF.
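The series of Eq. (4.3) is easy to evaluate numerically. The sketch below computes the CTF of a diffraction-limited circular-aperture system from its MTF; the diffraction-limited MTF formula is the standard one, while the truncation at a fixed number of harmonics is an assumption of this sketch:

```python
import numpy as np

def mtf_diffraction(u):
    """Diffraction-limited circular-aperture MTF; u = xi / xi_cutoff."""
    u = np.clip(np.asarray(u, dtype=float), 0.0, 1.0)
    return (2.0 / np.pi) * (np.arccos(u) - u * np.sqrt(1.0 - u**2))

def ctf_from_mtf(u_f, mtf, n_terms=50):
    """Eq. (4.3): CTF = (4/pi)[MTF(u_f) - MTF(3 u_f)/3 + MTF(5 u_f)/5 - ...]."""
    total = 0.0
    for k in range(n_terms):
        n = 2 * k + 1                      # odd harmonics only
        total += (-1) ** k * mtf(n * u_f) / n
    return (4.0 / np.pi) * total

# Near u = 0.3 the computed CTF exceeds the MTF, consistent with the bias
# described above: the third harmonic is already close to cutoff, so its
# negative contribution to the series is small.
mtf_03 = mtf_diffraction(0.3)
ctf_03 = ctf_from_mtf(0.3, mtf_diffraction)
```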
We compare plots of CTF and MTF in Fig. 4.2 for the case of a
diffraction-limited circular aperture. We calculated the CTF values directly

Figure 4.2 Comparison of MTF, CTF, three-bar IMD, and four-bar IMD for a diffraction-
limited circular-aperture system (reprinted from Ref. 2). [IMD is image modulation depth, as
defined in Eq. (4.6) in the next section.]

from the series in Eq. (4.3). The pronounced hump in the CTF near ξ/ξcutoff ≈ 0.3 arises from the 4/π multiplier and from the fact that the third harmonic, which adds into the series with a negative sign, is nearing the cutoff frequency and therefore is no longer contributing negatively to the CTF. The small oscillations in the CTF at lower frequencies arise from the fifth harmonic (which adds with a positive sign) nearing cutoff and the seventh harmonic (which adds with a negative sign) nearing cutoff.

4.2 Bar Targets


A very common target type for measuring MTF is the three-bar or four-bar
binary transmission pattern, as seen in Fig. 4.3. Any particular bar target is
specified in terms of its fundamental frequency ξf, which is the inverse of the
center-to-center spacing of the bars. Typically, the widths of the lines and the
spaces are equal. Using a target set consisting of a number of different sizes of this
type of target, we measure the modulation depth M of the non-sinusoidal image
waveform as a function of ξf, using maximum and minimum values of image-
plane irradiance for each target to yield the image modulation depth (IMD):

$$\mathrm{IMD}(\xi_f) = \frac{M_{\mathrm{output}}(\xi_f)}{M_{\mathrm{input,\ bar\ target}}(\xi_f)}. \tag{4.6}$$

This IMD does not equal the MTF at ξf because of the extra frequency
components at both higher and lower frequencies3 than ξf. These components
contribute to the IMD, biasing the measurement toward higher modulation
values than would be measured with a sinusoidal input. In practice, we

Figure 4.3 Binary (left) three-bar and (right) four-bar transmission targets.

remove these non-fundamental-frequency components by computation, or we filter them out electronically or digitally to yield MTF data.
We determine the modulation depth of the output image from the
maximum and minimum values of the irradiance waveform of the three- or
four-bar target. Often the effects of aberrations, shading, or aliasing produce
maxima and minima that are not equal for each bar in the output image, as
seen in Fig. 4.4. We calculate the modulation depth using the highest peak
(maximum) of one or more of the bars and the lowest inter-bar minimum to
occur in any particular image.
As a practical note, if we are measuring the modulation depth of a bar
target and cannot adjust the fine positional alignment of the target so that all
four bars have the same height at the output, then we are probably trying to
measure MTF at too high a frequency. The measurement accuracy of the

Figure 4.4 Measurement of three-bar IMD from unequal bar data (adapted from Ref. 4).

modulation depth will be reduced. Bar targets contain a range of frequencies, and even if the fundamental is not aliased, some of the higher frequencies will be. We saw this bar-target aliasing effect in Fig. 2.9.
Because their Fourier transforms are continuous functions of frequency
rather than discrete-harmonic Fourier series, the series representations of Eqs.
(4.3) and (4.4) are not strictly valid for three-bar and four-bar targets. To
explore the validity of the series conversion, we compare2 four different curves
(the MTF, CTF, three-bar IMD, and four-bar IMD) for four optical systems
of interest: a diffraction-limited circular-aperture system, a system with
Gaussian MTF, a system with exponential MTF, and a diffraction-limited
annular-aperture system with a 50% diameter obscuration. If the IMD curves
are close to the CTF curve, then we can use the series to convert bar-target
data to MTF for systems with that type of MTF dependence. To produce the
three-bar and four-bar IMD curves shown in Figs. 4.2, 4.5, 4.6, and 4.7, we
calculated spectra for 120 bar targets of various fundamental frequencies,
which were then filtered by each of the MTF curves. We inverse transformed
the resulting filtered spectra and calculated the IMDs from the resulting image
data according to Eq. (4.6). We plotted these IMDs as a function of the
fundamental frequency of the bar target.
For the diffraction-limited circular-aperture system, we already
considered the behavior of the CTF in Fig. 4.2. The three-bar and four-bar
IMD curves we show there are very close to the CTF curve, and the series
conversions are sufficiently accurate. The small difference would not be
measurable in practice. The small amount of modulation depth past cutoff
seen for the three-bar case is consistent with Fig. 6 of Ref. 3 because spatial
frequencies just below cutoff will contribute to a residual modulation near the

Figure 4.5 Comparison of MTF, CTF, three-bar IMD, and four-bar IMD for a system with a
Gaussian MTF ¼ exp{–2(j/j0)2}. The CTF and the three-bar and four-bar IMD curves are
identical for this case (reprinted from Ref. 2).
Square-Wave and Bar-Target Measurement of MTF 91

Figure 4.6 Comparison of MTF, CTF, three-bar IMD, and four-bar IMD for a system with an
exponential MTF ¼ exp{–2(j/j0)} (reprinted from Ref. 2).

Figure 4.7 Comparison of MTF, CTF, three-bar IMD, and four-bar IMD for a diffraction-
limited, annular-aperture system with a 50% diameter obscuration (reprinted from Ref. 2).

original fundamental spatial frequency of the three-bar target, even though


the MTF is zero at the fundamental frequency.
In Figs. 4.5 and 4.6 we present a comparison of MTF, CTF, three-bar,
and four-bar IMD curves for the cases of a Gaussian and an exponential
MTF, respectively. For the Gaussian MTF case, the CTF and the three-bar
and four-bar curves are identical. For the exponential case, the three-bar and
four-bar IMDs are slightly larger than the CTF, with the three-bar IMD as
the highest, similar to the behavior of the diffraction-limited system seen in
Fig. 4.2. Despite the similar shape of the MTFs in the diffraction-limited and
exponential cases, we do not observe the mid-frequency hump of Fig. 4.2 with
the exponential MTF because there is no cutoff frequency at which a given
harmonic ceases to contribute to the series representing the CTF.
The situation is a bit different in Fig. 4.7, where we consider the case of an
obscured-aperture system. The MTF curve is not as smooth as in the case of
no obscuration, and the discontinuities in its derivative make the weighting of
the different terms of the series a more complicated function of frequency. The
CTF curve still exceeds the MTF curve. At some frequencies, the three-bar
and four-bar IMD curves are nearly identical to the CTF, and at other
frequencies, there is as much as a 10% difference in absolute modulation depth
or a 40% relative difference between the curves. So, in some cases, a bar-
target-to-MTF conversion using a series is not accurate. The examples shown
indicate that, if the MTF curve is a smooth, monotonically decreasing function,
the series conversion will be reasonably accurate. But in a measurement
context, we do not know the MTF curve beforehand, and we want to have a
procedure that is valid, in general, for conversion of bar-target data to MTF.
When digitized image data are available, we can perform a direct bar-
target-to-MTF conversion that makes no prior assumptions about the MTF
curve being measured. For either a three-bar or four-bar target with equal
bars and spaces, we know the magnitude spectrum of the input as a
mathematical function, for any fundamental frequency ξf:

Sinput,3-bar(ξ) = (1/ξf) sinc(ξ/(2ξf)) [cos(2πξ/ξf) + 1/2],   (4.7)

Sinput,4-bar(ξ) = (1/ξf) sinc(ξ/(2ξf)) [cos(3πξ/ξf) + cos(πξ/ξf)].   (4.8)

These spectra are plotted in Fig. 4.8.
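As a numerical sketch of Eqs. (4.7) and (4.8), the two input magnitude spectra can be evaluated directly. The helper names below are illustrative (not from the text), and np.sinc uses the same sin(πx)/(πx) convention as the equations:

```python
import numpy as np

def s_input_3bar(xi, xi_f):
    """Magnitude spectrum of an equal-bar three-bar target, Eq. (4.7)."""
    return np.abs((1.0 / xi_f) * np.sinc(xi / (2.0 * xi_f))
                  * (np.cos(2.0 * np.pi * xi / xi_f) + 0.5))

def s_input_4bar(xi, xi_f):
    """Magnitude spectrum of an equal-bar four-bar target, Eq. (4.8)."""
    return np.abs((1.0 / xi_f) * np.sinc(xi / (2.0 * xi_f))
                  * (np.cos(3.0 * np.pi * xi / xi_f)
                     + np.cos(np.pi * xi / xi_f)))
```

The square-bracketed cosine factors vanish first at ξf/3 and ξf/4, consistent with Eqs. (4.10) and (4.11) below.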


We can find the MTF at the fundamental frequency of any particular
target by taking the ratio of output to input magnitude spectra:

MTF(ξ = ξf) = Soutput(ξ = ξf) / Sinput-bar-target(ξ = ξf).   (4.9)

We take the absolute value of the Fourier transform of the digitized bar-target
image data to produce the output magnitude spectrum Soutput(ξ). Figure 4.9
shows a measured three-bar-target magnitude spectrum and the correspond-
ing input spectrum Sinput-bar-target(ξ), both normalized to unity at ξ = 0. We
perform the bar-target MTF calculation at the fundamental frequency of the
particular target being used. Note that the image-plane fundamental
frequency is not simply the frequency of the (ξ > 0) maximum of the
measured spectrum.

Figure 4.8 Normalized magnitude spectra: three-bar (top), four-bar (bottom).

The measured output spectrum has been filtered by the
system MTF. Because this MTF decreases with frequency, we see that the
peak of the output spectrum occurs at a slightly lower frequency than the
fundamental ξf of the input target. The ratio of Eq. (4.9) is to be calculated at
the fundamental frequency. Without knowing the MTF curve, we cannot say
how much the peak was shifted. So, to determine the fundamental spatial
frequency of the input target, we use the first zero of the spectrum, the
location of which is not shifted by the MTF. In Eq. (4.7) describing the
three-bar target, the term in square brackets first goes to zero at

ξfirst-zero = ξf/3,   (4.10)

and in Eq. (4.8) describing the four-bar target, the term in square brackets
first goes to zero at

ξfirst-zero = ξf/4.   (4.11)

Once we determine the fundamental frequency of the particular target
being used, we can make the calculation in Eq. (4.9), with the output spectrum
evaluated at ξf. We repeat this process for each target used in the measurement
to generate the MTF curve from digitized bar-target data without the need for
a series conversion. Thus, we still make the bar-target MTF measurement one
frequency at a time, but now without concern about the accuracy of a series
conversion because we use a digital-filtering technique to isolate ξf from the
continuous spectrum of the bar target.

Figure 4.9 Measured three-bar-target magnitude spectrum Soutput(ξ) (dotted curve) and
the corresponding calculated input spectrum Sinput-bar-target(ξ) (solid curve) (reprinted from
Ref. 2).
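The first-zero procedure of Eqs. (4.9)–(4.11) can be sketched as follows. `bar_target_mtf` is a hypothetical helper, not from the text; the simple minimum search assumes a clean, low-noise spectrum, and the zero-padding factor is an arbitrary choice:

```python
import numpy as np

def bar_target_mtf(line, dx, n_bars=3, pad=8):
    """Estimate MTF at the fundamental frequency of a bar-target image line.

    Locate the first zero of the measured magnitude spectrum (its position
    is not shifted by the system MTF), infer xi_f from Eq. (4.10)/(4.11),
    then take the output/input spectral ratio there, Eq. (4.9).
    """
    n = pad * len(line)                       # zero-pad for finer frequency sampling
    freqs = np.fft.rfftfreq(n, dx)
    s_out = np.abs(np.fft.rfft(line, n=n))
    s_out = s_out / s_out[0]                  # normalize to unity at xi = 0

    # walk out from DC to the first deep local minimum: the first spectral zero
    i = 1
    while s_out[i + 1] < s_out[i] or s_out[i] > 0.2:
        i += 1
    xi_f = n_bars * freqs[i]                  # Eq. (4.10) or Eq. (4.11)

    # analytic input magnitude spectrum, Eq. (4.7)/(4.8), normalized at DC
    if n_bars == 3:
        s_in = np.abs(np.sinc(freqs / (2 * xi_f))
                      * (np.cos(2 * np.pi * freqs / xi_f) + 0.5)) / 1.5
    else:
        s_in = np.abs(np.sinc(freqs / (2 * xi_f))
                      * (np.cos(3 * np.pi * freqs / xi_f)
                         + np.cos(np.pi * freqs / xi_f))) / 2.0
    j = np.argmin(np.abs(freqs - xi_f))       # nearest bin to the fundamental
    return xi_f, s_out[j] / s_in[j]           # Eq. (4.9)
```

For a synthetic, unblurred three-bar target the recovered fundamental matches the target geometry and the estimated MTF is close to unity, as expected.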
From Fig. 4.9, we can see that there is more information present than just
at the fundamental frequency, and it is tempting to try to extend the range of
the measurement beyond the fundamental. We found experimentally that
accuracy suffered when we tried to extend the frequency range of the
measurement on either side of the fundamental. The spectral information is
strongly peaked, and dividing the output curve by the input curve tends to
emphasize any noise present in the measured data because the input decreases
so rapidly on either side of the peak. However, it may be possible to use the
lower-frequency subsidiary maxima seen in Fig. 4.9 to at least get another
measurement frequency for any given target, assuming a good signal-to-noise
ratio. Higher-frequency data will have generally been attenuated by the MTF
to a degree such that taking the ratio of Eq. (4.9) will not produce results of
good quality.

4.3 Conclusion
We often use bar targets in MTF measurements. It is important to realize that
we must correct modulation depth measurements made with bar targets to
produce MTF data, either with a series approach or with digital filtering. If
the MTF curve of the system under test is relatively smooth, the agreement
between CTF and bar-target data is often quite close. If the MTF curve of the
system under test has significant slope discontinuities, we find more
differences between CTF and bar-target data. In this situation, we prefer
the digital-filtering approach.

References
1. J. W. Coltman, “The specification of imaging properties by response to a
sine wave input,” JOSA 44(6), 468–471 (1954).
2. G. D. Boreman and S. Yang, “Modulation transfer function measurement
using three- and four-bar targets,” Appl. Opt. 34(34), 8050–8052 (1995).
3. D. H. Kelly, “Spatial frequency, bandwidth, and resolution,” Appl. Opt.
4(4), 435–437 (1965).
4. I. de Kernier, A. Ali-Cherif, N. Rongeat, O. Cioni, S. Morales, J. Savatier,
S. Monneret, and P. Blandin, “Large field-of-view phase and fluorescence
mesoscope with microscopic resolution,” J. Biomedical Optics 24(3),
036501 (2019) [doi: 10.1117/1.JBO.24.036501].
Chapter 5
Noise-Target Measurement of MTF
Measurement of a system’s transfer function by means of its response to
random-noise inputs has long been a standard procedure in time-domain
systems.1 If the input is white noise, which contains equal amounts of all
frequencies, the action of the transfer function is to impart a nonuniformity of
frequency content that can be assessed at the output of the system by means of
a Fourier analysis. This concept has not historically been employed in the
measurement of optical systems, with the exception of an initial demonstra-
tion2 using (non-white) film-grain noise.
Noise-like targets of known spatial-frequency content are useful for MTF
testing, particularly for spatially sampled systems such as detector-array
image receivers. Noise targets have a random position of the image data with
respect to sampling sites in the detector array and measure a shift-invariant
MTF that inherently includes the sampling MTF. Noise targets measure the
MTF according to

PSDoutput(ξ, η) = PSDinput(ξ, η) · [MTF(ξ, η)]²,   (5.1)

where PSD denotes power spectral density, defined as the ensemble average of
the square of the Fourier transform of object or image data. The PSD is a
measure of spatial-frequency content for random targets or random images.
We calculate the output PSD from the image data. Generally, we calculate the
finite-length Fourier transform of a row of image data and square the result.
This is an estimate of the PSD, but because the calculation is performed on a
data record of finite length, there is noise in the estimate. When we perform
this operation on other rows of image data, we generate other PSD estimates.
Averaging over these additional estimates gives a more accurate estimation3
of the PSD of the underlying random process. Noise targets usually measure
the MTF averaged over a system’s whole field of view. However, we can
calculate PSDs from various subregions of the image. If we use smaller data


sets in this way, we will likely need to average over additional independent
data sets to obtain PSD estimates of sufficient signal-to-noise ratio.
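The row-averaged PSD estimate and the inversion of Eq. (5.1) might be sketched as follows; the function names are illustrative, not from the text:

```python
import numpy as np

def psd_rows(image, dx):
    """Estimate the 1-D PSD of a random process from image rows by
    averaging the squared magnitude spectrum (periodogram) of each row."""
    n = image.shape[1]
    periodograms = np.abs(np.fft.rfft(image, axis=1)) ** 2
    return np.fft.rfftfreq(n, dx), periodograms.mean(axis=0)

def mtf_from_noise_target(psd_output, psd_input):
    """Invert Eq. (5.1): MTF(xi) = sqrt(PSD_output / PSD_input)."""
    return np.sqrt(psd_output / psd_input)
```

Averaging over more rows (or more independent frames) reduces the fluctuation of each PSD bin, exactly as described above.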
Accuracy of an MTF measurement using noise targets depends critically
on how well we know the input PSD. This is a central consideration in the
design of the specific measurement apparatus to be used. The two main
methods we use for generating noise targets are laser speckle and random
transparencies. We generally use the laser-speckle method to measure the
MTF of a detector array alone because the method relies on diffraction to
deliver the random irradiance pattern to the receiver, without the need for an
intervening optical system. Because a transparency must be imaged onto the
detector array using optical elements, we find the random-transparency
method to be more convenient for MTF measurement of a complete imager
system, including both the detector array and the fore-optics. In both cases,
we can generate a variety of input PSDs, depending on the specific
instrumentation details.

5.1 Laser-Speckle MTF Test


Laser speckle is the spatial variation of irradiance that is produced when
coherent light is reflected from a rough surface. We can also generate speckle
by transmitting laser light through a phase-randomizing component. We see
the general geometry in Fig. 5.1, where the coordinates xs, ys denote the
illuminated rough surface (the source), and xobs, yobs denote the plane in
which we observe the speckle.
If the distance z between the source and observation planes satisfies the
Fresnel conditions,4 the PSD of the speckle irradiance consists of a delta
function at zero frequency, along with an extended-frequency component
proportional to the normalized autocorrelation of the irradiance distribution
at the plane of the source.5 This relation allows us to tailor the frequency
content of the speckle pattern by the design of the illuminated scattering
aperture.

Figure 5.1 General geometry for speckle formation.


There is a Fourier transform relationship between a power spectrum and
an autocorrelation. Inverse transforming the speckle PSD produces the
autocorrelation of the speckle pattern irradiance. This autocorrelation
function is, except for a constant bias term, proportional to the irradiance
that would be present in the diffraction pattern of the clear aperture (i.e., the
square of the Fourier transform). The speckle pattern thus has a frequency
content similar to the diffraction pattern of the clear aperture, since the
autocorrelation width of the speckle equals the first-zero width of the
diffraction pattern. We visualize this relationship in Fig. 5.2 for a single
rectangular aperture of width W.
In generating laser speckle for MTF measurements, we randomize the
phase of the laser light either by passing it through a transmissive ground-
glass diffuser, or by passing it through an integrating sphere. A binary-
transmittance aperture then follows, whose geometry determines the
functional form of the speckle PSD. In either case, it is important to have
the aperture uniformly illuminated so that the input PSD is known. A linear
polarizer follows the aperture to increase the contrast of the speckle pattern by
eliminating the incoherent sum of two speckle patterns of orthogonal
polarization. We see the basic layout in Fig. 5.3.
Figure 5.2 Relationship between the frequency content of a speckle pattern and a
diffraction pattern, given a single aperture of width W.

Figure 5.3 Laser-speckle setup for MTF tests, using an integrating sphere to illuminate the
aperture with phase-randomized laser light.

Although not radiometrically efficient, the integrating-sphere method
produces nearly uniform irradiance at its output port,6 as long as we use
appropriate baffling to preclude direct transmission of the laser beam. Smaller
integrating spheres are more flux efficient, at the expense of irradiance uniformity
at the output port. In our experimental work, we found that a 25-mm-diameter
sphere was a suitable compromise. Using a transmissive ground-glass diffuser
requires that the laser illumination be uniform across the full extent of the source
aperture. We can accomplish this by expanding the laser beam so that the
uniform region near the peak of the Gaussian is used, although at a considerable
loss of power. We may increase the overall flux efficiency of the speckle-
generation process using microlens diffusers7 or holographic techniques.8
For any of the laser-speckle MTF tests, we should confirm the input PSD
by imaging the illuminated aperture to verify the uniformity of illumination
across each aperture, and for multi-aperture configurations, to verify the
uniformity between the apertures. There will be some speckle artifacts in any
image of the laser-illuminated aperture. We can reduce these artifacts by
averaging the image irradiance data over a small moving window, or by
averaging over several aperture images with independent speckle patterns. We
can conveniently generate new speckle patterns by very small position changes
of the ground-glass diffuser or the integrating sphere with respect to the input
laser beam. If the large-scale illumination of the aperture is uniform, the
geometry of the aperture, along with the wavelength and distance, will
determine the functional form of the speckle PSD.
In the design of the measurement instrumentation, we choose the aperture
configuration. The rectangular aperture of width W and its PSD were shown in
Fig. 5.2. This aperture allows assessment of MTF over a range of spatial
frequencies in one measurement.9 The upper frequency limit is W/(λz).
To avoid aliasing, this upper frequency limit must be less than the Nyquist
frequency of the array. This aperture has the disadvantage that the input PSD is
decreasing linearly with frequency. Thus, when we use the division of Eq. (5.1)
to compute MTF, any noise in the measurement will limit the frequency range
of validity because of division-by-zero artifacts, as discussed in Section 3.6.
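As a numerical illustration of this constraint, with assumed values for the wavelength, aperture width, and pixel pitch (none are from the text):

```python
# Illustrative numbers (assumed): HeNe laser, 10-mm aperture, 10-um pixel pitch.
wavelength = 0.6328e-3                    # mm
W = 10.0                                  # mm, width of the scattering aperture
pitch = 0.010                             # mm, detector pitch
xi_nyquist = 1.0 / (2.0 * pitch)          # 50 cy/mm

# The speckle-PSD cutoff W/(lambda z) must not exceed Nyquist,
# so the smallest allowed aperture-to-array distance is:
z_min = W / (wavelength * xi_nyquist)     # mm
```

With these assumed numbers the aperture must sit at least about 316 mm from the array to keep the full speckle spectrum below Nyquist.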
We show a dual-slit aperture of separation L and slit width l1 in Fig. 5.4.
Figure 5.4 Dual-slit aperture and the corresponding speckle PSD.

This aperture will project random fringes of narrowband spatial frequency
onto the detector array.10 In a manner analogous to the formation of a
two-slit diffraction pattern, the distance z from the dual-slit aperture to the
detector array controls the spatial frequency content of the speckle pattern
according to

ξpeak = L/(λz),   (5.2)

where ξpeak is the peak spatial frequency of the narrow high-frequency
component of the speckle PSD seen in Fig. 5.4. The baseband component at
the low spatial frequencies is not useful in the measurement, and its finite
width limits the lowest frequency at which a measurement can be made. The
spectral width of all features in the PSD scales inversely with z. Smaller slit
widths will result in a narrowing of both the baseband and the high-frequency
components, at the expense of less flux reaching the detector array.
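Eq. (5.2) can be rearranged to choose the distance z that places the fringes at a desired frequency; the helper name and the numbers in the test are illustrative assumptions:

```python
def dual_slit_distance(L, wavelength, xi_peak):
    """Rearrange Eq. (5.2), xi_peak = L/(lambda z), for the distance z.
    All lengths in the same unit (e.g., mm); xi_peak in cycles per that unit."""
    return L / (wavelength * xi_peak)
```

For example, an assumed 5-mm slit separation at the HeNe wavelength places 40-cy/mm fringes at z of roughly 198 mm.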
We can use the dual-slit-aperture arrangement to measure MTF past the
Nyquist frequency of the detector array because the input PSD is largely
narrowband. If we know the distance z, we know its peak frequency from
Eq. (5.2). Thus, we can interpret the speckle pattern even if ξpeak is aliased
because of the narrowband nature of the speckle produced by the dual-slit
aperture. If ξpeak = ξNyquist + Δξ, there is no spatial-frequency content at
ξNyquist − Δξ that would complicate the computation of the strength of the
peak. The solid curve in Fig. 5.5 is the speckle PSD for a distance z such that
the spatial frequency ξpeak was below the aliasing frequency of the sensor
array. The dotted line is the speckle PSD for a shorter z, for which the spatial
frequency of the fringes was above the aliasing frequency. We see the falloff in
MTF between these two frequencies as a decrease in the peak height of the
PSD between those two values of ξpeak. We plot MTF as the peak strength of
the narrowband component as a function of spatial frequency, with
representative results shown in Fig. 5.6. We can reduce the noise seen in
those MTF results by implementing additional PSD averaging over
independent speckle patterns.
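Because the dual-slit speckle is narrowband, a peak placed above Nyquist can be unambiguously unfolded. A minimal sketch of first-order spectral folding (the function name is an assumption):

```python
def folded_frequency(xi, xi_nyquist):
    """Apparent frequency of a component at xi >= 0 after sampling:
    the spectrum replicates every 2*Nyquist and folds about Nyquist."""
    xi = xi % (2.0 * xi_nyquist)              # remove whole spectral replicas
    return xi if xi <= xi_nyquist else 2.0 * xi_nyquist - xi
```

A fringe frequency of ξNyquist + Δξ thus appears at ξNyquist − Δξ, as described above, and knowing z (hence ξpeak) resolves the ambiguity.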

Figure 5.5 (left) Typical narrowband laser-speckle pattern and (right) the corresponding
image PSD plot in which both aliased and non-aliased PSDs are shown (reprinted from
Ref. 10).

Figure 5.6 MTF results from a dual-slit speckle measurement of a detector array, showing
measured data beyond the Nyquist frequency (reprinted from Ref. 10).

Implementation of the dual-slit aperture measurement requires either
dual-slit apertures of different spacings L or changing the
source-to-detector-array distance z to generate the range of spatial
frequencies needed. We can
implement a mirror-based optical delay line to vary z without motion of the
source or detector. The mirror apertures of the delay line must be sufficiently
large that they do not limit the spatial extent of the source as viewed from the
detector over the range of required distances.
Figure 5.7 shows a two-dimensional slanted-dual-slit aperture10 that
generates a constant input PSD over an extended frequency range along the ξ
direction. An autocorrelation of the aperture in the x direction produces the
PSD form as shown. Figure 5.8 shows a plot of the two-dimensional PSD and
a speckle pattern resulting from this aperture.
Figure 5.7 (left) Slanted-dual-slit aperture and (right) its ξ-direction PSD (reprinted from
Ref. 10).

Figure 5.8 (left) Two-dimensional PSD plot and (right) a speckle pattern from the slanted-
dual-slit aperture of Fig. 5.7 (reprinted from Ref. 10).

Figure 5.9 shows another two-dimensional aperture11 that generates a
constant input PSD over an extended frequency range along the ξ direction.
An autocorrelation of the aperture in the x direction produces the PSD form
as shown. Figure 5.10 shows a plot of the two-dimensional PSD and a speckle
pattern resulting from this aperture. This configuration has been used for
measurement of detector array MTF beyond the Nyquist frequency in the
following manner. A measurement of MTF out to Nyquist was first
performed using the flat region of the PSD. The aperture-to-detector distance
was then decreased so that the flat region of the PSD ranged from Nyquist to
twice Nyquist. A new output PSD was acquired, which aliased the high-
frequency information onto the original un-aliased PSD. A PSD subtraction
was performed, which yielded only the high-frequency PSD. The un-aliased
PSD and the high-frequency PSD were then concatenated, and Eq. (5.1) was
used to yield the MTF. Figure 5.11 shows the results of this measurement. The
Nyquist frequency of the tested array was 107 cy/mm. The discontinuity in the
MTF plot near Nyquist resulted from the absence of data in that vicinity
because of the finite width of the baseband feature in the PSD.

Figure 5.9 (left) 45-deg cross aperture and (right) its ξ-direction PSD (reprinted from Ref. 11).

Figure 5.10 (left) Two-dimensional PSD and (right) speckle pattern from the 45-deg cross
aperture of Fig. 5.9 (reprinted from Ref. 11).
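The subtract-and-concatenate step might be sketched as follows. This is a hedged illustration only: the function name is invented, and the bin bookkeeping at the exact Nyquist edge (and the baseband feature) is glossed over:

```python
import numpy as np

def extend_psd_beyond_nyquist(psd_low, psd_low_plus_alias):
    """Sketch of the two-acquisition procedure: the second (shorter-z)
    acquisition folds the Nyquist-to-2x-Nyquist band onto the first, so
    subtracting the un-aliased PSD isolates the folded high band, which is
    then unfolded (reversed) and appended to the un-aliased PSD."""
    folded_high = psd_low_plus_alias - psd_low
    return np.concatenate([psd_low, folded_high[::-1]])
```

Eq. (5.1) would then be applied to the concatenated PSD, as in the measurement described above.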

5.2 Random-Transparency MTF Test


We can create transparency targets with random patterns of known PSD.
Unlike the laser-speckle patterns discussed in the last section, these targets
must be imaged onto the detector array using optical elements and, therefore,
are most naturally used for characterizing a complete camera system—optics
and focal-plane array together. We can use these with broadband flux such as
from a blackbody or lamp source. Uniformity of the backlighting source is
important for the accuracy of the measurement, with integrating spheres or
large-area blackbodies having suitable characteristics. We can fabricate the
random patterns on different types of substrates.12 For the visible and the
3- to 5-μm MWIR band, we use either a laser-printing or photographic
process on plastic film. For the MWIR and the 8- to 12-μm LWIR band, we
use optical or electron-beam lithography involving a metal film on an
IR-transparent substrate. Because it is difficult to obtain gray-level
transmittance in the IR, our approach is to use an array of nominally square
apertures of various sizes in a metallic film to achieve the desired
transmittances. To avoid diffraction-induced nonlinearity in transmittance as
a function of aperture area, the minimum aperture dimension should be
approximately five times larger than the wavelength to be used.13

Figure 5.11 Speckle MTF measurement to twice the Nyquist frequency of a detector array,
using the aperture of Fig. 5.9. ξNyquist ≈ 107 cy/mm (adapted from Ref. 11).
Figure 5.12 presents the usual setup for projecting a random-transparency
object scene into a system under test, which consists of both the imaging optics
and a sensor array. We position a transparency at the focus of a collimator
and backlight it using a uniform source. The MTF of the collimator should be
such that it does not limit the frequency range of the measurement;
alternatively, this MTF should be known and corrected for in the data
processing.
We begin our discussion by considering the generation of a random
pattern with a white-noise PSD up to a predetermined cutoff frequency. This
cutoff frequency on the transparency should correspond to the Nyquist
frequency of the system under test, considering the geometrical magnification
(m = fsensor/fcollimator) of the two-lens relay in Fig. 5.12. Given that the pixel
spacings in the object and image planes are related by Δximg = Δxobj × m, we
find that

ξcutoff = ξNyquist (fsensor/fcollimator).   (5.3)

Figure 5.12 Setup for the random-transparency MTF test (adapted from Ref. 12).

We can generate data for the transparency as follows. We create a set of N
random numbers taken from a range of 256 gray-level values, where N is the
number of pixels in a row of the detector array. We repeat this for the M rows
of the detector array so that we have an N × M matrix of uncorrelated
random numbers. If we render these data values as square contiguous pixels of
transmittance values on the substrate at the desired spacing Δxobj, we will
have spatial white noise bandlimited to the desired spatial frequency. We see
an example of this type of pattern in Fig. 5.13, along with a PSD computed
from the N random numbers for one line of data. Because the image data is
random, there are fluctuations in the PSD. This noise would average out if we
used more lines of data to compute the PSD, but the single-line PSD estimate
shown demonstrates the white-noise characteristic of the transparency.
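The generation recipe and the Eq. (5.3) scaling can be sketched numerically; the matrix size, focal lengths, and Nyquist frequency below are assumed for illustration, not taken from the text:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters (assumptions)
N, M = 320, 240                          # pixels per detector row / number of rows
f_sensor, f_collimator = 100.0, 500.0    # mm, relay focal lengths
xi_nyquist = 25.0                        # cy/mm at the sensor

# Eq. (5.3): cutoff frequency required on the transparency
xi_cutoff = xi_nyquist * f_sensor / f_collimator   # cy/mm on the transparency

# N x M matrix of uncorrelated 8-bit gray levels; rendered as contiguous
# square pixels at spacing dx_obj, this is bandlimited spatial white noise
gray = rng.integers(0, 256, size=(M, N))

# single-line PSD estimate (fluctuates; averaging more rows would smooth it)
line_psd = np.abs(np.fft.rfft(gray[0] - gray[0].mean())) ** 2
```

With the assumed 5:1 relay, a 25-cy/mm sensor Nyquist corresponds to a 5-cy/mm cutoff on the transparency.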
Figure 5.13 (left) Bandlimited white-noise pattern and (right) a sample of the PSD
computed from the pattern (reprinted from Ref. 12).

In situations where the required spatial resolution of the pattern is well
within the capabilities of the system that generates the transparency, it may be
acceptable to confirm the input PSD in this manner directly from the pattern
data. For other situations, we recommend experimental confirmation of the
input PSD by acquiring a high-resolution image of a portion of the pattern
and computing the PSD from that image. We illustrate this procedure using
the example of an infrared transparency fabricated by electron-beam
lithography, where the transmittance of each pixel was determined by the
fractional open area not covered by a metal film. Figure 5.14 shows a small
region of the original design data, a small region of the as-fabricated
transparency on ZnSe, and the input PSD calculated from the image of the as-
fabricated transparency. We see a falloff in the PSD arising from
imperfections in the fabrication process. These include misplaced metal spots
and some apertures that should be clear, being covered with metal. We used
the fourth-order polynomial fit to the PSD as the input PSD in Eq. (5.1). With
that correction, we obtained MTF results that agreed well with LSF-derived
measurements.12

Figure 5.14 (Upper left) A small region of design data for the IR transparency; (upper right)
microscopic image of a small region of the as-fabricated transparency (pixel size 46 μm);
(lower) the input PSD calculated from a microscopic image of the as-fabricated transparency,
where the smooth curve is a fourth-order polynomial fit (reprinted from Ref. 12).

Figure 5.15 MTF measurement results for bandlimited white-noise pattern (reprinted from
Ref. 12).

Figure 5.15 shows MTF measurement results for the pattern in Fig. 5.13.
In the left figure, the dots are data points from the random-transparency MTF
technique, and the solid line is a fourth-order polynomial fit to the data. The
dashed lines are measured data using a line-response method. The upper
dashed curve corresponds to the situation where the image of the line source is
centered on a column of the detector array; the lower MTF curve corresponds
to the situation where the line image is centered between the columns.
Individual data points are the result of a particular spatial-frequency
component in the image, which has a random position with respect to the
photosite locations. The data points thus fall between the maximum and
minimum LSF-derived MTF curves, and the fitted curve falls midway
between the two line-response MTF curves. In the right figure of Fig. 5.15, we
compare the fourth-order data fit seen in the left figure to an average line-
response MTF, which was measured as follows. The line source was moved
randomly, and 15 LSF-derived MTFs were calculated and averaged. The
comparison between the random-transparency MTF (solid curve) and the
average LSF-derived MTF (dotted curve) shows excellent agreement,
consistent with the analysis of Park et al.,14 where a shift-invariant MTF is
calculated as the average over all possible test-target positions. We confirmed
the repeatability of the random-transparency method by making in-plane
translations of the transparency and comparing the resulting MTFs. We
found a variation in MTF of less than 2%. Thus, we demonstrated the
random-transparency method to be shift invariant. This shift invariance,
which applies to all noise-target methods, relaxes alignment requirements in
the test procedure, as compared to methods not employing noise targets,
where the test procedure generally requires fine positional adjustment of
components to achieve the best response.

Other PSD dependences are possible. For example, using the frequency-
domain filtering process seen in Fig. 5.16, we can modify white-noise data to
yield a pattern with several discrete spatial frequencies of equal PSD
magnitude. Figure 5.17 shows a resulting random pattern, along with the
frequency filtering function we used. The PSD is no longer white, resulting in
a discernable inter-pixel correlation. We show the MTF measurements in
Fig. 5.18 by comparing the MTF results from the random discrete-frequency
pattern and the white-noise pattern. Also we show the amplitude spectrum of

Figure 5.16 Generation process of a discrete-frequency pattern (adapted from Ref. 12).

Figure 5.17 (left) Random discrete-frequency pattern and (right) its corresponding filter
function (reprinted from Ref. 12).
110 Chapter 5

Figure 5.18 Comparison of MTF measurements with the random discrete-frequency


pattern (solid line) and the white-noise pattern (dotted line). Also shown is the amplitude
spectrum of the system noise (dot-dashed line) (reprinted from Ref. 12).

the system noise, which we measured by taking the magnitude of the FFT of
the array data, averaged over rows, for a uniform irradiance input equal to the
average value of the discrete-frequency target image. The discrete-frequency
method allows a single-frame measurement of both MTF and spatial noise at
several discrete spatial frequencies.
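The frequency-domain filtering of Fig. 5.16 might be sketched as follows; the retained bin indices are arbitrary assumptions, and rendering the result as a transmittance pattern would additionally require a DC bias to make it nonnegative:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 256
white = rng.standard_normal(n)            # one line of white-noise data

# keep only a few discrete frequency bins, all with equal (unit) PSD magnitude;
# the bin choices here are illustrative assumptions
keep = np.zeros(n // 2 + 1)
keep[[8, 16, 32, 64]] = 1.0

# retain the random phase of the white noise, force equal magnitudes, invert
phase = np.angle(np.fft.rfft(white))
pattern = np.fft.irfft(keep * np.exp(1j * phase), n=n)

# the pattern's PSD is now nonzero only at the kept bins
psd = np.abs(np.fft.rfft(pattern)) ** 2
```

The random phases give the pattern its random appearance, while the forced magnitudes give equal PSD at each retained spatial frequency.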

5.3 Conclusion
Random-noise targets are useful for measuring a shift-invariant MTF. The two
primary methods for generating these targets are laser speckle and transparen-
cies. We generally use laser speckle to test detector arrays because no
intervening optical elements are required. If the input noise PSD is narrowband,
we can use laser speckle to measure MTF past the Nyquist frequency of the
detector array. We generally use transparencies to test camera systems
consisting of fore-optics and a detector array. In both cases, we can generate
a variety of input PSDs, depending on the specifics of the instrument design.

References
1. A. Papoulis, Probability, Random Variables, and Stochastic Processes,
McGraw-Hill, New York, pp. 346–350 (1965).
2. H. Kubota and H. Ohzu, “Method of measurement of response function
by means of random chart,” JOSA 47(7), 666–667 (1957).
Noise-Target Measurement of MTF 111

3. J. S. Bendat and A. G. Piersol, Random Data: Analysis and Measurement


Procedures, Wiley-Interscience, New York, pp. 189–193 (1971).
4. M. Born and E. Wolf, Principles of Optics, Fifth edition, Pergamon Press,
Oxford, p. 383 (1975).
5. J. W. Goodman, “Statistical Properties of Laser Speckle,” in Laser
Speckle and Related Phenomena, J. C. Dainty, Ed., Springer-Verlag,
Berlin, Heidelberg, pp. 38–40 (1984).
6. G. D. Boreman, Y. Sun, and A. B. James, “Generation of laser speckle
with an integrating sphere,” Opt. Eng. 29(4), 339–342 (1990) [doi: 10.
1117/12.55601].
7. A. D. Ducharme, “Microlens diffusers for efficient laser speckle
generation,” Opt. Exp. 15(22), 14573–14579 (2007).
8. A. D. Ducharme and G. D. Boreman, “Holographic elements for
modulation transfer function testing of detector arrays,” Opt. Eng. 34(8),
2455–2458 (1995) [doi: 10.1117/12.207144].
9. G. D. Boreman and E. L. Dereniak, “Method for measuring modulation
transfer function of charge-coupled devices using laser speckle,” Opt. Eng.
25(1), 148–150 (1986) [doi: 10.1117/12.7973792].
10. M. Sensiper, G. D. Boreman, A. D. Ducharme, and D. R. Snyder,
“Modulation transfer function testing of detector arrays using narrow-
band laser speckle,” Opt. Eng. 32(2), 395–400 (1993) [doi: 10.1117/12.
60851].
11. A. D. Ducharme and S. P. Temple, “Improved aperture for modulation
transfer function measurement of detector arrays beyond the Nyquist
frequency,” Opt. Eng. 47(9), 093601 (2008) [doi: 10.1117/1.2976798].
12. A. Daniels, G. D. Boreman, A. D. Ducharme, and E. Sapir, “Random
transparency targets for modulation transfer function measurement in the
visible and IR,” Opt. Eng. 34(3), 860–868 (1995) [doi: 10.1117/12.190433].
13. A. Daniels, G. D. Boreman, and E. Sapir, “Diffraction effects in IR
halftone transparencies,” Infrared Phys. Technol. 36(2), 623–637 (1995).
14. S. K. Park, R. Schowengerdt, and M.-A. Kaczynski, “Modulation-
transfer-function analysis for sampled image systems,” Appl. Opt. 23(15),
2572–2582 (1984).
Chapter 6
Practical Measurement Issues
In this chapter, we will consider a variety of practical issues related to MTF
measurement, including the cascade property of MTF multiplication, the
quality of the auxiliary optics such as collimators and relay lenses, source
coherence, and normalization at low frequency. We will conclude with some
comments about the repeatability and accuracy of MTF measurements and
the use of computers for data acquisition and processing. At the end of the
chapter, we will consider four different instrument approaches that are
representative of commercial MTF equipment and identify the design
tradeoffs.

6.1 Measurement of PSF


We begin with a useful approximate formula. For a uniformly distributed blur
spot of full width w, the resulting MTF, shown in Fig. 6.1, is

MTF(ξ) = sin(πξw) / (πξw).    (6.1)

We can use this simple approach as a handy reality check, comparing a
measured spot size to computer-calculated MTF values. Often when
computers are part of the data-acquisition and data-processing procedures,
we cannot check and verify each step of the MTF calculation. In these
situations, it is a good idea to manually verify the results of the computation
by using a measuring microscope to assess the size of the impulse response
formed by the system.
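The reality check of Eq. (6.1) is easy to script. The sketch below (Python with NumPy, our own choice rather than anything prescribed in the text; the 25-µm spot width and the frequency values are arbitrary examples) evaluates the uniform-blur-spot MTF from a manually measured width w:

```python
import numpy as np

def blur_spot_mtf(xi, w):
    """Approximate MTF of a uniform blur spot of full width w, Eq. (6.1).

    xi : spatial frequency [cycles/mm]; w : blur-spot width [mm].
    np.sinc(x) computes sin(pi*x)/(pi*x), so this is sin(pi*xi*w)/(pi*xi*w).
    """
    return np.abs(np.sinc(xi * w))

# Illustrative values: a 25-um (0.025-mm) spot measured under the microscope.
w = 0.025
xi = np.array([0.0, 10.0, 20.0, 40.0])   # cycles/mm
mtf = blur_spot_mtf(xi, w)
# The first zero of this MTF falls at xi = 1/w = 40 cycles/mm.
```

Comparing such sinc values against the computer-calculated MTF at a few frequencies is usually enough to catch a gross scale-factor error.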
Figure 6.2 shows a convenient configuration [1] for a visible-band PSF
measurement that is easy to assemble. A microscope objective is paired with a
Si charge-coupled-device (CCD) camera head by means of threaded adapter
rings. Even if the distance between the objective and the CCD image plane is
not the standard 160 mm for a microscope, an in-focus image will be formed
when the impulse response being viewed is at some small distance in front of
the objective. It is convenient to use a ground-glass screen on which to project


Figure 6.1 MTF for a uniform blur spot of width w.

Figure 6.2 Measuring-microscope apparatus for measuring PSF.

the impulse response of the system under test. This allows us to measure the PSF
irradiance distribution at the plane of the ground glass without regard for whether
incident rays at all angles would be captured by the finite numerical aperture of
the microscope objective. The objective and CCD combination should be first
focused on the ground glass, and the whole assembly should allow precise axial
motion to find the best focus of the impulse response being measured. The
objective and FPA combination should have a three-axis micropositioner,
with motion perpendicular to the optic axis, to allow for centering the PSF in
the field of the camera and measuring the impulse-response width.
We can obtain a quick estimate of the width of the blur spot by
positioning the center of the image at one side of the blur spot and noting the
amount of cross-axial micropositioner motion required to place the center of
the image on the other side of the blur spot. Of course, the blur spot is generally not a
uniform irradiance distribution, and there is some arbitrariness in the
assessment of the spot width in this manner. Nevertheless, we can obtain a
back-of-the-envelope estimate for the MTF using that estimate of w. When we
compare Fig. 6.1 [using the manual measurement of w in Eq. (6.1)] to the
computer-calculated MTF, the results should be reasonably close in
magnitude (if not in actual functional form). If we do not find a suitable
correspondence, we should re-examine the assumptions made in the computer
calculation before certifying the final measurement results. Common errors

include missed scale factors of 2π in the spatial-frequency scale, incorrect
assumptions about the wavelength or F/# in the computer calculation of the
diffraction limit, incorrect assumed magnification in a re-imaging lens, or an
inaccurate value of detector-to-detector spacing in the image receiver array.
Of course, a more complete MTF measurement can be made if the
irradiance image of the impulse response falling on the ground glass is
captured. A frame grabber digitizes the video signal coming from the detector
array (some cameras will have the option for direct digital output). We can
then Fourier transform the PSF data, being careful to first account for the
magnification of the PSF irradiance distribution by the microscope-objective/
FPA system. The microscope objective is being used nearly on-axis, so its own
MTF should usually not affect the PSF measurement.
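The full PSF-to-MTF computation described above can be sketched numerically (Python/NumPy; the function name, the Gaussian test PSF, and the 10-µm pixel pitch are our own assumptions, not values from the text). The key step is rescaling the pixel pitch by the microscope magnification before assigning spatial-frequency axes:

```python
import numpy as np

def mtf_from_psf(psf, pixel_pitch, magnification):
    """Estimate MTF from a sampled PSF image.

    psf           : 2D array of irradiance samples (frame-grabber data)
    pixel_pitch   : detector-to-detector spacing at the FPA [mm]
    magnification : microscope-objective magnification of the PSF
    """
    # Sample spacing referred back to the plane of the ground glass.
    dx = pixel_pitch / magnification
    otf = np.fft.fftshift(np.fft.fft2(psf))
    mtf = np.abs(otf) / np.abs(otf).max()     # normalize to 1 at xi = 0
    # Spatial-frequency axes in cycles/mm at the ground-glass plane.
    fx = np.fft.fftshift(np.fft.fftfreq(psf.shape[1], d=dx))
    fy = np.fft.fftshift(np.fft.fftfreq(psf.shape[0], d=dx))
    return fx, fy, mtf

# Synthetic check: a Gaussian PSF gives a Gaussian MTF.
x = np.arange(-32, 32) * 0.010                # assumed 10-um pixels
xx, yy = np.meshgrid(x, x)
psf = np.exp(-(xx**2 + yy**2) / (2 * 0.05**2))
fx, fy, mtf = mtf_from_psf(psf, pixel_pitch=0.010, magnification=1.0)
```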

6.2 Cascade Properties of MTF


We often account for the combination of several subsystems by simply
multiplying MTFs. That is, we calculate the overall system MTF as a point-
by-point product of the individual subsystem MTFs, as in Eq. (1.9) and Fig. 1.7.
This is a very convenient calculation but is sometimes not correct. We want to
investigate the conditions under which we can multiply MTFs. In consideration
of incoherent imaging systems, the MTF multiplication rule can be simply stated
as follows: We can multiply MTFs if each subsystem operates independently on
an incoherent irradiance image. We will illustrate this rule using examples.
First, we consider the example seen in Fig. 6.3, the combination of a lens
system (curve a) and a detector (curve b) in a camera application. The MTF of
the combination (curve c) is a point-by-point multiplication of individual
MTFs because the detector simply responds to the spatial distribution of
irradiance (W/cm²) in the image plane without any partial-coherence effects.

Figure 6.3 Cascade of MTFs of a lens system (curve a) and a detector (curve b) produces
the product MTF (curve c).
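A minimal numerical version of Fig. 6.3 (Python/NumPy; the linear lens MTF and the 25-µm detector footprint are stand-in models of our own, not data from the figure):

```python
import numpy as np

xi = np.linspace(0.0, 30.0, 301)                 # cycles/mm

# Illustrative stand-ins for curves (a) and (b) of Fig. 6.3:
mtf_lens = np.clip(1.0 - xi / 30.0, 0.0, 1.0)    # lens, cutoff at 30 cy/mm
mtf_det = np.abs(np.sinc(xi * 0.025))            # 25-um detector footprint

# Curve (c): point-by-point product, valid because the detector
# responds to incoherent image-plane irradiance.
mtf_system = mtf_lens * mtf_det
```

Because the product is taken point by point, curve (c) can never exceed either factor.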

A relay lens pair with a diffusing screen at the intermediate image plane is
another case where we can cascade MTFs by means of a point-by-point
multiplication (Fig. 6.4). The exitance [W/cm²] on the output side of the
diffuser is proportional to the irradiance [W/cm²] at the input face. Any point-
to-point phase correlation in the intermediate image is lost in this process. The
diffuser forces the two systems to interact independently, regardless of their
individual state of correction. Neither relay stage can compensate for the
aberrations of the other because of the phase-randomizing properties of the
diffuser. The MTFs of each stage multiply, and the product MTF is
correspondingly lower. This relay-lens example is a bit contrived because we
typically do not have a diffuser at an intermediate image plane (from a
radiometric point of view, as well as for image-quality reasons), but it
illustrates the MTF multiplication rule by presenting the second stage with an
incoherent irradiance image formed by the first stage.
Figure 6.5 illustrates a case in which cascading MTFs will not work. This
is a two-lens combination where the second lens balances the spherical
aberration of the first lens. Neither lens is well-corrected by itself, as seen by
the poor individual MTFs. The system MTF is higher than either of the

Figure 6.4 Relay-lens pair with a diffuser at the intermediate image plane. The MTF of
each stage is simply multiplied.

Figure 6.5 Pair of relay lenses for which the MTFs do not cascade.

individual-component MTFs, which cannot happen if the MTFs simply
multiply. The intermediate image is partially coherent, so these lenses do not
interact on an independent irradiance basis [2]. Lens 1 does not simply present
an irradiance image to Lens 2. Specification of an incoherent MTF for each
element separately is not an accurate way to analyze the system because that
separate specification does not represent the way that the lenses interact with
each other.
However, if two systems have been independently designed and
independently corrected for aberrations, then the cascade of geometrical
MTFs is a good approximation. We consider the example seen in Fig. 6.6 of
an afocal telescope combined with a final imaging lens. This is a typical
optics configuration for an infrared imager. Each subsystem can perform in
a well-corrected manner by itself. As noted in Chapter 1, the limiting
aperture (aperture stop) of the entire optics train determines the diffraction
MTF. This component does not cascade on an element-by-element basis.
The diffraction MTF is included only once in the system MTF calculation.
This distinction is important when measured MTFs are used. For example,
suppose that separate measured MTF data are available for both
subsystems. Simply multiplying the measured MTFs would count
diffraction twice because diffraction effects are always present in any
measurement.
Because the front end of the afocal telescope in Fig. 6.6 is the limiting
aperture of the system, we can use the measured MTF data for subsystem #1:

MTF1,meas(ξ) = MTF1,geom(ξ) · MTF1,diff(ξ).    (6.2)

To find the geometrical MTF of subsystem #2, we must separate the
geometrical and diffraction contributions [3] in the measured MTF of the
subsystem:

Figure 6.6 Combination of an afocal telescope and a detector lens.



MTF2,meas(ξ) = MTF2,geom(ξ) · MTF2,diff(ξ)    (6.3)

and

MTF2,geom(ξ) = MTF2,meas(ξ) / MTF2,diff(ξ).    (6.4)

The only diffraction MTF that contributes to the calculation of the total MTF
is that of subsystem #1, so for the total system MTF, we have

MTFtotal(ξ) = MTF1,diff(ξ) · MTF1,geom(ξ) · MTF2,geom(ξ).    (6.5)
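The bookkeeping in Eqs. (6.2) through (6.5) can be sketched as follows (Python/NumPy; the exponential and linear MTF models are invented for illustration). The point is that subsystem 2's measured MTF is divided by its own diffraction MTF before the cascade, so diffraction enters the product only once, through subsystem 1:

```python
import numpy as np

def combine_measured_mtfs(mtf1_meas, mtf2_meas, mtf2_diff, floor=1e-3):
    """Combine measured subsystem MTFs per Eqs. (6.2)-(6.5).

    Subsystem 1 holds the system aperture stop, so its measured MTF
    already contains the single diffraction contribution.  The
    diffraction content of subsystem 2's measurement is divided out
    before the cascade; otherwise diffraction is counted twice.
    """
    safe = np.maximum(mtf2_diff, floor)               # avoid divide-by-zero
    mtf2_geom = np.clip(mtf2_meas / safe, 0.0, 1.0)   # Eq. (6.4)
    return mtf1_meas * mtf2_geom                      # Eq. (6.5)

# Invented curves for the check:
xi = np.linspace(0.0, 20.0, 201)
mtf1_meas = np.exp(-xi / 10.0)
mtf2_diff = np.clip(1.0 - xi / 25.0, 0.0, 1.0)
mtf2_meas = mtf2_diff * np.exp(-xi / 40.0)
mtf_total = combine_measured_mtfs(mtf1_meas, mtf2_meas, mtf2_diff)
```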

6.3 Quality of Auxiliary Optics


Figure 6.7 is a schematic of the typical MTF setup, where the unit under test
(UUT) images a point-source object. However, in practice, we often use
auxiliary optical elements in the test setup, as seen in Fig. 6.8. For instance, we
use additional optics to simulate a target-at-infinity condition, to magnify a
small image-irradiance distribution before acquiring it with a detector array,
or to test an afocal element.
To prevent the auxiliary optics from impacting the MTF results, the
following conditions must be met. First, the aperture of the collimator must
overfill that of the UUT so that all aperture-dependent aberrations of the
UUT are included in the measurement. Second, in the case of the re-imager,
the relay-lens aperture must be sufficiently large that it does not limit the ray
bundle focused to the final image plane. In both instances, the aperture stop of
the end-to-end measurement system must be at the UUT.
We want the auxiliary optics to be diffraction-limited; the geometrical-
aberration blur of the auxiliary optics should be small compared to its
diffraction blur. Because the previous condition requires that the auxiliary
optics overfill the aperture of the UUT—putting the aperture stop at the
UUT—the diffraction limit will also occur in the UUT. If the auxiliary-optics
angular aberrations are small compared to the diffraction blur angle of the
auxiliary optics, then the aberrations of the auxiliary optics will be even
smaller compared to the larger diffraction blur angle of the UUT. Under these
conditions, we can directly measure the image blur of the UUT because both
the diffraction and aberration blur angles of the auxiliary optics are small

Figure 6.7 MTF test where the source is directly imaged by the unit under test.

Figure 6.8 Auxiliary optics used in MTF-test setups.

compared to the UUT blur. If the auxiliary optics is not diffraction-limited,
then its aberration blur must be characterized and accounted for in the MTF
product that describes the measured data. Then, we divide out this
instrumental aberration contribution to MTF in the frequency domain,
provided that the aberrations of the auxiliary optics are small enough not to
seriously limit the spatial-frequency range of the measurement.
In a typical test scenario, we have a number of impulse responses that
convolve together to give the measured PSF: the diffraction of the UUT, the
aberration of the unit under test, the aberration of the collimator, the detector
footprint, the source size, and the sampling. We want to isolate the PSF of the
UUT, which is

PSFUUT = PSFUUT,diff ∗ PSFUUT,aberr.    (6.6)

The other PSF terms should be known and should be much narrower than the
PSF of the UUT so that we can divide them out in the frequency domain
using Eq. (1.9) without limiting the frequency range of the MTF test.
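One way to implement this frequency-domain division (Python/NumPy; the function name, the 0.2 validity floor, and the synthetic collimator and detector curves are our own assumptions) is to flag, rather than divide, the frequencies where the combined instrument MTF is too small:

```python
import numpy as np

def isolate_uut_mtf(mtf_meas, instrument_terms, min_instrument=0.2):
    """Divide known instrumental MTF terms out of a measured MTF.

    instrument_terms : list of arrays (collimator, detector footprint,
    finite source size, ...) on the same frequency grid as mtf_meas.
    Where the combined instrument MTF falls below min_instrument, the
    result is flagged invalid instead of divided, because the division
    would amplify noise there.
    """
    instrument = np.prod(np.array(instrument_terms), axis=0)
    valid = instrument >= min_instrument
    mtf_uut = np.full_like(mtf_meas, np.nan)
    mtf_uut[valid] = mtf_meas[valid] / instrument[valid]
    return mtf_uut, valid

# Synthetic example: recover a known UUT MTF from a cascaded measurement.
xi = np.linspace(0.0, 50.0, 501)               # cycles/mm
mtf_collimator = np.exp(-xi / 100.0)
mtf_detector = np.abs(np.sinc(xi * 0.020))     # assumed 20-um footprint
mtf_true = np.exp(-xi / 20.0)
mtf_meas = mtf_true * mtf_collimator * mtf_detector
mtf_uut, valid = isolate_uut_mtf(mtf_meas, [mtf_collimator, mtf_detector])
```

On the flagged frequencies the measurement carries no usable information about the UUT, which is why the auxiliary terms must stay much wider in frequency than the UUT MTF.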

6.4 Source Coherence


In MTF tests that involve the imaging of a back-illuminated extended target
(for instance, LSF, ESF, or bar-target tests), the illumination must be spatially
incoherent to avoid interference artifacts between separate locations in the
image that can corrupt MTF data. If the extended source is itself incandescent
(as in a hot-wire LSF test for the infrared), the source is already incoherent.
It is when an extended aperture is back-illuminated that coherence effects
may become important. When incandescent-bulb sources are used with a
low-F/# condenser-lens setup to back-illuminate the source aperture, partial-
coherence interference effects can be present in the image. The usual way we
reduce the source coherence for an incandescent source is to place a ground-
glass diffuser on the source side, adjacent to the aperture being illuminated, as
seen in Fig. 6.9. The phase differences between various correlation cells of the
diffuser are generally sufficient for reduction of the source coherence.
If we illuminate the slit aperture of Fig. 6.9 with a collimated laser beam,
the spatial coherence across the slit is nearly unity, and we will need a more
elaborate setup to reduce the coherence. We again use a ground-glass diffuser
at the illuminated aperture, but we must mechanically move the diffuser on a
time scale that is short compared to the integration time of the sensor to
average out the speckles in the image data. Piezoelectric transducers are a
common choice to implement such motion.
Concerns about the spatial coherence of the input object generally do not
apply to pinholes because, if a source is small enough to behave as a point
source with respect to the system under test, it is essentially only a single-
point emitter, which is self-coherent by definition. We consider spatial
coherence only when adjacent point sources might interfere with each other.
A pinhole aperture can be illuminated by a laser source without appreciable

Figure 6.9 A ground-glass diffuser is used to reduce the partial coherence of an incandescent source.

spatial-coherence artifacts. The incoherent MTF will be measured because
the irradiance of the PSF is used for analysis.

6.5 Low-Frequency Normalization


A practical issue is how to normalize a set of measurements so that MTF = 1
at ξ = 0. If we are calculating MTF from PSF data, the normalization issue is
straightforward because the Fourier transform of the PSF is set to 1 at ξ = 0.
However, if we are making measurements with sinewaves, bar targets, or
noise targets, we have a set of modulation data at various frequencies. We do
not have access to a target of arbitrarily low frequency, as seen in the data of
Fig. 6.10. We should try to acquire data at as low a frequency as possible in
any test scenario. Sometimes it is feasible to measure the modulation depth of
“flat-field” data, where fields of maximum and minimum irradiance each fill
the FOV of the system under test. Thus, the lowest frequency that can
possibly be measured is ξ = 1/(FOV). The resulting modulation depth would
provide the normalization value for which the MTF can be set to unity, and
negligible error would result from this approximation to zero frequency. If
flat-field modulation data are not available (for instance, in some speckle
measurements), the issue of low-frequency normalization becomes more
subtle. The normalization is important because it sets the MTF value for all
data points taken. Obviously, we cannot simply set the lowest frequency MTF
value in Fig. 6.10 to 1 because that will unduly inflate all of the MTF values in
the data set. An a priori analytical function to which the MTF should be
similar can provide guidance regarding the slope of the MTF curve as ξ = 0 is
approached. Otherwise, a straight line joining the lowest data point and ξ = 0
is a reasonable assumption.
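One simple realization of this normalization (Python/NumPy; the two-point linear extrapolation and the sample data are our own illustration of the straight-line rule, not a prescription from the text):

```python
import numpy as np

def normalize_low_frequency(xi, modulation):
    """Normalize modulation data lacking a zero-frequency point.

    A straight line through the two lowest-frequency data points is
    extrapolated to xi = 0, and all data are divided by that intercept
    so that MTF -> 1 as xi -> 0.  (Replace the linear rule with an a
    priori model of the MTF shape when one is available.)
    """
    order = np.argsort(xi)
    x0, x1 = xi[order[0]], xi[order[1]]
    m0, m1 = modulation[order[0]], modulation[order[1]]
    slope = (m1 - m0) / (x1 - x0)
    intercept = m0 - slope * x0       # extrapolated modulation at xi = 0
    return modulation / intercept

# Invented data resembling Fig. 6.10: no point below 5 cycles/mm.
xi = np.array([5.0, 10.0, 20.0, 30.0])        # cycles/mm
mod = np.array([0.76, 0.64, 0.41, 0.22])      # measured modulation depths
mtf = normalize_low_frequency(xi, mod)
```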

Figure 6.10 MTF data points without low-frequency data.



6.6 MTF Testing Observations


When we measure the MTF by two different methods, for instance, a
bar-target test and an impulse-response test, the results should be identical.
When the measured MTFs do not accurately agree, some portion of the
system (typically, the detector array) is not responding in a linear fashion. The
accuracy of commercial MTF measurement systems ranges from 5% to 10%
in absolute MTF with 1% accuracy possible. Radiometric measurements are
difficult to perform to the 1% level, and MTF testing combines both
radiometric- and position-accuracy requirements. The position-accuracy and
repeatability requirements are often quite demanding—less than 1 μm in
many cases. Good micropositioners and solid mounts are required if we are to
achieve high-accuracy MTF measurements. Even with the best components,
lead-screw backlash errors can occur, and high-accuracy motion must be
made in only one direction.

6.7 Use of Computers in MTF Measurements


Most commercial systems come with computer systems for data acquisition,
processing, and experimental control. Precise movement control of scanning
apertures and other components is very convenient, especially for through-
focus measurements, which are tedious if done manually. The graphics-
display interface allows us to immediately visualize the data and lets us adjust
parameters in real time. Digitized data sets facilitate further signal processing,
storage, and comparison of measurement results.
There is, however, a drawback to the use of computers: software
operations typically divide out instrumental MTF terms, using assumed
values for the slit width, collimator MTF, wavelength range, F/#, and other
parameters. In practice, we must ensure that these settings are correct. We can
check the final results for realism as discussed in Section 6.1. Also, we must
remember that the electronics of the data-acquisition system can trigger from
a variety of spurious signals. It is possible to take erroneous MTF
measurements on a clock-feed-through waveform from the detector array
electronics, rather than from the PSF of the system. As a test, we can cover the
detector array and see if an MTF result is displayed on the monitor.
Occasionally this does happen, and it is worth checking for if the
measurement results are in question.

6.8 Representative Instrument Designs


We now consider four different instrument approaches taken in commercial
MTF-measurement systems. We examine the design tradeoffs and compare
the systems for their viability in visible and infrared applications.

Figure 6.11 Example system #1: visible ESF.

6.8.1 Example system #1: visible edge response


In the visible ESF design (Fig. 6.11), intended for MTF measurement of
visible-wavelength lenses, a pinhole is illuminated by either a laser or filtered
arc lamp. A relatively high flux level and good SNR allow for a simple system.
A tilt control for the lens under test allows for off-axis performance
measurement. A diffraction-limited refractive collimator limits the aperture
size of lenses that can be tested. Larger apertures can be obtained using
reflective collimators. A scanning “fishtail” blocking device is used as a knife
edge in the image plane. The obtained data are therefore ESF, even with a
pinhole source. The configuration of the knife edge allows for a measurement
in either the x or y direction without changing the apparatus, requiring a two-
dimensional (2D) scan motion. A photomultiplier tube detector is used,
consistent with the visible-wavelength response desired. This sensor typically
gives a high SNR. Because of the high SNR, the derivative operation required
to go from ESF to LSF is not unduly affected by electronics noise. A fast-
Fourier transform (FFT) of the LSF data produces one profile of the MTF.
Scanning in the orthogonal direction produces the other profile.
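The ESF-to-LSF-to-MTF chain of this instrument can be sketched numerically (Python/NumPy; the tanh edge is a synthetic stand-in for real knife-edge data, and the 2-µm step size is an assumed value):

```python
import numpy as np

def mtf_from_esf(esf, dx):
    """One MTF profile from a knife-edge scan (ESF), as in Fig. 6.11.

    esf : edge-spread samples at uniform spacing dx [mm].
    Differentiation (ESF -> LSF) amplifies noise, so this step relies
    on the high SNR that a photomultiplier-based system provides.
    """
    lsf = np.gradient(esf, dx)                # ESF -> LSF
    otf = np.fft.rfft(lsf)
    mtf = np.abs(otf) / np.abs(otf[0])        # normalize at xi = 0
    xi = np.fft.rfftfreq(lsf.size, d=dx)      # cycles/mm
    return xi, mtf

# Synthetic check: a smooth edge gives a smoothly decreasing MTF.
x = np.arange(-128, 128) * 0.002              # assumed 2-um steps
esf = 0.5 * (1 + np.tanh(x / 0.01))
xi, mtf = mtf_from_esf(esf, dx=0.002)
```

The np.gradient step is where a low-SNR detector would cause trouble; the high SNR of the photomultiplier is what makes this route practical.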

6.8.2 Example system #2: infrared line response


The infrared line response system (Fig. 6.12) is designed for LSF measurement
of lenses in the LWIR portion of the spectrum. A number of techniques are
employed to enhance the SNR, which is always a concern at IR wavelengths.
A heated ceramic glowbar oriented perpendicular to the plane of the figure
illuminates a slit, which acts as an IR line source. This configuration gives
more image-plane flux than a pinhole source, allowing for the use of the
relatively low-temperature glowbar source consistent with the LWIR band of
interest. A narrow linear detector oriented parallel to the source direction is
scanned to acquire the LSF image. This HgCdTe sensor is cryogenically
cooled with liquid nitrogen (LN2). The detector serves the function of a
scanning slit as the detector moves. This avoids the difficulty of implementing
a movable cooled slit inside the liquid-nitrogen dewar. A diffraction-limited
reflective collimator allows for larger UUT apertures and broadband IR

Figure 6.12 Example system #2: infrared line response.

operation without chromatic aberrations. A mechanical chopper is used for


modulating the source, allowing for narrowband synchronous amplification.
This reduces the noise bandwidth of the measurement and increases the SNR.
The LSF is measured directly. This approach avoids the derivative operation
needed for an ESF scan (which is problematic at low SNR) and also requires
less dynamic range from the sensor.

6.8.3 Example system #3: visible square-wave response


The visible square-wave-response system [4] is rather special in that it approximates a
square-wave test rather than a bar-target test. The system is configured for
testing the MTF of lenses in the visible portion of the spectrum. The object
generator seen in Fig. 6.13 serves as the source of variable-frequency square
waves. The object generator consists of two parts, a radial grating that rotates
at a constant angular velocity and a slit aperture. The center of rotation of the
grating is set at an angle θ (0 < θ < 90 deg) with respect to the slit aperture. This
allows control of the spatial frequency of the (moving) square waves. The
response of the system is measured one frequency at a time.

Figure 6.13 The object generator for the visible square-wave test.

Figure 6.14 Example system #3: visible square-wave response.

The object generator is backlit and, as seen in Fig. 6.14, is placed at the
focus of a collimator and re-imaged by the lens under test. A long, narrow
detector is placed perpendicular to the orientation of the slit image, and the
moving square waves pass across the detector in the direction of its narrow
dimension. The analog voltage waveform output from the sensor is essentially
the time-domain square-wave response of the system to whatever spatial
frequency the object generator produced. To avoid the necessity of using the
series conversion from CTF to MTF, a tunable-narrowband analog filter is
used, eliminating all harmonics and allowing only the fundamental frequency
of the square wave to pass through. The center frequency of the filter is
changed when, with different θ settings, the object frequency is changed. The
waveform after filtering is sinusoidal at the test frequency, and the modulation
depth is measured directly from this waveform. Filtering the waveform
electronically requires the electronic frequency of the fundamental to stay well
above the 1/f-noise region (1 kHz and below), which implies a fast rotation of
the object-plane radial grating. This necessitates a relatively high-power
optical source to back-illuminate the object generator because the detector has
to operate with a short integration time to acquire the quickly moving square
waves. This design works well in the visible, but was not suitable in the
infrared, where the SNR would be lower because of the lower source
temperatures used in that band. Compensating for the low SNR would

Figure 6.15 Example system #4: infrared bar-target response.

require a longer integration time. Lowering the speed of the rotating grating
would put the electronic signal into the 1/f-noise region.
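The fundamental-extraction idea can be simulated (Python/NumPy; the sample rate, the 2-kHz fundamental, and the FFT-bin filter are our own stand-ins for the analog hardware). A 0-to-1 square wave has a fundamental component of amplitude (4/π) × 0.5, which is what survives the narrowband filter:

```python
import numpy as np

fs = 100_000.0                      # assumed sample rate [Hz]
f0 = 2_000.0                        # fundamental, kept well above 1/f noise
n = np.arange(5000)                 # 0.05 s of data = 100 full periods
t = (n + 0.5) / fs                  # half-sample offset avoids sampling the
                                    # zero crossings of the square wave
square = 0.5 + 0.5 * np.sign(np.sin(2 * np.pi * f0 * t))

# FFT-based stand-in for the tunable narrowband analog filter:
spectrum = np.fft.rfft(square) / square.size
freqs = np.fft.rfftfreq(square.size, d=1 / fs)
k0 = np.argmin(np.abs(freqs - f0))  # bin of the fundamental
fundamental_amp = 2 * np.abs(spectrum[k0])
# For this 0-to-1 square wave the fundamental amplitude is (4/pi) * 0.5.
```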

6.8.4 Example system #4: bar-target response


Figure 6.15 shows an IR bar-target system that is intended to measure the
bar-target response of lenses at LWIR wavelengths. The four-bar patterns are
made of metal and are backlit with an extended blackbody source. A separate
bar target is used for each fundamental spatial frequency of interest.
The resulting image is scanned mechanically with a cryogenically cooled
single-element HgCdTe detector. Because the image stays stationary in time, a
slow sensor-scan speed allows for signal integration, improving the SNR. The
bar-target data are processed using the series correction of Eq. (4.4). Analog
filtering as seen in example system #3 is not used because the slow scan speed
of the sensor would put the resulting analog waveforms into the 1/f-noise
region at low frequencies.
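The book's Eq. (4.4) is the Coltman series converting square-wave response (CTF) to sine-wave MTF; a sketch keeping only the leading terms (Python/NumPy; the 0.8 CTF level and the 30-cycles/mm cutoff are invented for the check) looks like:

```python
import numpy as np

def coltman_mtf(ctf_func, xi):
    """Convert square-wave response (CTF) to sine-wave MTF using the
    leading terms of the Coltman series; only the first four terms of
    the series are kept in this sketch.
    """
    return (np.pi / 4) * (ctf_func(xi)
                          + ctf_func(3 * xi) / 3
                          - ctf_func(5 * xi) / 5
                          + ctf_func(7 * xi) / 7)

# Check: for frequencies whose harmonics lie beyond the system cutoff,
# only the fundamental survives and MTF = (pi/4) * CTF.
cutoff = 30.0
ctf = lambda f: np.where(np.abs(f) < cutoff, 0.8, 0.0)
xi = np.array([12.0, 20.0])        # 3*xi already exceeds the cutoff
mtf = coltman_mtf(ctf, xi)
```

When every harmonic of the test frequency lies beyond the system cutoff, only the fundamental survives and MTF = (π/4)·CTF, which the check above exercises.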

6.9 Conclusion
Measurement of MTF requires attention to various issues that can affect the
integrity of the data set. Most importantly, the quality of any auxiliary optics
must be known and accounted for. The optics other than the unit under test
should not significantly limit the spatial frequency range of the test. The
positional repeatability of the micropositioners used is critical to obtaining
high-quality data. We should take care to ensure that the coherence of the
source does not introduce interference artifacts into the image data that could
bias the MTF computation. It is also important that we carefully consider the
issue of low-frequency normalization because this affects the MTF value at all
frequencies. Computers are ubiquitous in today’s measurement apparatus. We
should make sure that all default settings in the associated software are
consistent with the actual measurement conditions.

References
1. Prof. Michael Nofziger, University of Arizona, personal communication.
2. J. B. DeVelis and G. B. Parrent, “Transfer function for cascaded optical
systems,” JOSA 57(12), 1486–1490 (1967).
3. T. L. Alexander, G. D. Boreman, A. D. Ducharme, and R. J. Rapp,
“Point-spread function and MTF characterization of the kinetic-kill-vehicle
hardware-in-the-loop simulation (KHILS) infrared-laser scene projector,”
Proc. SPIE 1969, pp. 270–284 (1993) [doi: 10.1117/12.154720].
4. L. Baker, “Automatic recording instrument for measuring optical transfer
function,” Japanese J. Appl. Physics 4(suppl. 1), 146–152 (1965).

Further Reading
Baker, L., Selected Papers on Optical Transfer Function: Measurement, SPIE
Milestone Series, Vol. MS 59, SPIE Press, Bellingham, Washington
(1992).
Chapter 7
Other MTF Contributions
We now consider the MTF contributions arising from image motion, image
vibration, atmospheric turbulence, and aerosol scattering. We present a
first-order analysis of these additional contributions to the system MTF. Our
heuristic approach provides a back-of-the-envelope estimate for the image-
quality impact of these effects, and a starting point for more advanced
analyses.

7.1 Motion MTF


Image-quality degradation arises from movement of the object, image
receiver, or optical-system line of sight during an exposure time te. We
consider uniform linear motion of the object at velocity vobj and a
corresponding linear motion of the image at velocity vimg, as shown in
Fig. 7.1. Over an exposure time te, the image has moved a distance vimg · te.
This one-dimensional motion blur can be modeled as a rect function:

Figure 7.1 Linear motion blur is the product of image velocity and exposure time.


h(x) = rect[x/(vimg te)],    (7.1)

leading to an MTF along the direction of motion,

MTFalong-motion(ξ) = sin(πξvimg te) / (πξvimg te).    (7.2)

7.2 Vibration MTF


Sometimes the platform on which the optical system is mounted vibrates.
Typically, we analyze the effect of the vibration one frequency at a time, thus
assuming sinusoidal motion. The most important distinction is between
high-frequency and low-frequency motion. We compare the temporal period
of the motion waveform to the exposure time of the sensors te. The case of
high-frequency sinusoidal motion, where many oscillations occur during
exposure time te, is the easiest for us to analyze. As seen in Fig. 7.2, we model
the vibration by assuming that a nominally axial object point undergoes
sinusoidal motion of amplitude D perpendicular to the optic axis. The
corresponding image-point motion will build up a histogram impulse response
in the image plane. This impulse response will have a minimum at the center
of the motion and maxima at the image-plane position corresponding to the
edges of the object motion because the object is statistically most likely to be
found near the peaks of the sinusoid where the object motion essentially stops
and turns around. The process of stopping and turning leads to more

Figure 7.2 High-frequency sinusoidal motion builds up a histogram impulse response in the image plane.

residence time of the object near those locations and thus a higher probability
of finding the object near the peaks of the sinusoidal motion.
If the sinusoidal object motion has amplitude D, the total width of h(x) is
2D, assuming unit magnification of the optics. There is zero probability of the
geometrical image point being found outside of this range, which leads to the
impulse response depicted in Fig. 7.3. If we take the Fourier transform of this
h(x), we obtain the corresponding vibration MTF seen in Fig. 7.4 [1].
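This buildup can be reproduced numerically (Python/NumPy; the 5-µm amplitude, 50-Hz rate, and bin count are arbitrary choices). Time samples of the sinusoid histogram into the edge-peaked h(x) of Fig. 7.3, and the magnitude of the time-averaged Fourier phasor gives the vibration MTF, which analytically is |J0(2πξD)|:

```python
import numpy as np

D = 0.005                                  # assumed vibration amplitude [mm]
t = np.linspace(0.0, 1.0, 100_001)         # many periods, finely sampled
x = D * np.sin(2 * np.pi * 50.0 * t)       # 50 oscillations during exposure

# Histogram impulse response: peaks at +/-D, where the motion reverses.
hist, edges = np.histogram(x, bins=50, range=(-D, D), density=True)

# Long-exposure MTF as the averaged Fourier phasor over the motion.
xi = np.array([0.0, 20.0, 50.0])           # cycles/mm
mtf = np.abs(np.exp(-2j * np.pi * np.outer(xi, x)).mean(axis=1))
```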
For low-frequency sinusoidal vibrations, the image quality depends on
whether the image-data acquisition occurs near the origin or near the extreme
points of the object movement. As stated earlier, the velocity slows near
the extreme points and is at its maximum near the center of the motion. In the
case of low-frequency sinusoidal vibrations, we must perform a more detailed

Figure 7.3 Impulse response for sinusoidal motion of amplitude D.

Figure 7.4 MTF for sinusoidal motion of amplitude D.



analysis to predict the number of exposures required to get a single lucky shot
where there is no more than a prescribed degree of motion blur [2].

7.3 Turbulence MTF


Atmospheric turbulence results in image degradation. We consider a random
phase screen (Fig. 7.5) with an autocorrelation width w that is the size of the
refractive-index eddy and a phase variance σ². The simplest model is one of
cloud-like motion, where the phase screen moves with time but does not
change form—the frozen-turbulence assumption. We model an average
image-quality degradation over exposure times that are long compared with
the motion of the phase screen. Image quality estimates for short exposure
times require more complicated models.
We consider diffraction from the eddys to be the cause of MTF
degradation (blurring). We assume that w ≫ l, which is consistent with the
typical eddy sizes (1 cm , w , 1 m) encountered in practice. Refractive
ray-deviation errors can be determined from a separate angle-of-arrival
analysis, resulting in image-plane motion (distortion). As seen in Fig. 7.6, the
impulse response h(x) consists of a narrow central core from the unscattered
radiation and a wide diffuse region from the scattered radiation. Larger phase
variation in the screen leads to more of the impulse-response power being
contained in the scattered component.
The narrow central core of the impulse response contributes to a broad
flat MTF at high frequencies. The broad diffuse scattered component of the
impulse response will contribute to an MTF rolloff at low frequencies. We can
write³ the turbulence MTF as

MTF(ξ) = exp{−σ²[1 − exp(−(λξ/w)²)]},  (7.3)

where ξ is the angular spatial frequency in cycles/radian. This MTF is plotted
in Fig. 7.7, with the phase variance as a parameter. For a phase variance near
zero, turbulence contributes minimal image-quality degradation because most
of the light incident on the phase screen passes through unscattered. As the
phase variance increases, more light will be spread into the diffuse halo of the
impulse response seen in Fig. 7.6, and the MTF will roll off at low frequencies.

Figure 7.5 Frozen-turbulence model for a phase screen.

Figure 7.6 Impulse-response form for atmospheric turbulence.

Figure 7.7 Turbulence MTF parameterized on phase variance.
For all of the curves in Fig. 7.7, the MTF is flat at high frequencies. The
atmospheric turbulence MTF is only one component of the MTF product,
and the high-frequency rolloff typically seen for overall system MTF will be
caused by some other MTF component.
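Equation (7.3) is straightforward to evaluate; the sketch below (our own function and variable names, with purely illustrative parameter values) confirms the two behaviors just described: MTF(0) = 1, and a flat high-frequency plateau at exp(−σ²).

```python
import math

def turbulence_mtf(xi, sigma2, w, lam):
    """Long-exposure turbulence MTF of Eq. (7.3).

    xi:     angular spatial frequency [cycles/rad]
    sigma2: phase variance of the screen
    w:      eddy autocorrelation width (same units as lam)
    lam:    wavelength
    """
    return math.exp(-sigma2 * (1.0 - math.exp(-((lam * xi / w) ** 2))))

sigma2, w, lam = 4.0, 0.10, 1.0e-6    # e.g., 10-cm eddies, 1-um light
print(turbulence_mtf(0.0, sigma2, w, lam))    # 1.0: normalization at zero frequency
print(turbulence_mtf(1.0e6, sigma2, w, lam))  # plateau, approximately exp(-sigma2)
```

For σ² near zero the curve stays near unity at all frequencies, matching the near-lossless case of Fig. 7.7.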
For a large phase variance, the turbulence MTF of Eq. (7.3) reduces to a
Gaussian form:

MTF(ξ) = exp{−σ²(λξ/w)²},  (7.4)

with a 1/e rolloff frequency of ξ_1/e = w/(λσ). In this limit it is straightforward
to identify the issues affecting image quality. The transfer function has a
higher rolloff frequency (better image quality) for larger eddy size w
(less diffraction), shorter λ (less diffraction), and smaller σ (less phase
variation). The effect of turbulence is less for shorter propagation paths.

Figure 7.8 Comparison of field imagery and turbulence simulations (adapted from Ref. 4).
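In this Gaussian limit the rolloff frequency can be checked directly; the sketch below (our own names, illustrative values) verifies that Eq. (7.4) falls to 1/e at ξ_1/e = w/(λσ), and that larger eddies push the rolloff to higher frequencies.

```python
import math

def gaussian_turbulence_mtf(xi, sigma2, w, lam):
    """Large-phase-variance limit of the turbulence MTF, Eq. (7.4)."""
    return math.exp(-sigma2 * (lam * xi / w) ** 2)

def rolloff_1e(sigma2, w, lam):
    """1/e rolloff frequency xi_1/e = w / (lam * sigma), in cycles/rad."""
    return w / (lam * math.sqrt(sigma2))

sigma2, lam = 25.0, 1.0e-6            # phase variance of 25; 1-um wavelength
for w in (0.05, 0.50):                # 5-cm vs. 50-cm eddies
    xi = rolloff_1e(sigma2, w, lam)
    print(w, xi, gaussian_turbulence_mtf(xi, sigma2, w, lam))  # MTF there is 1/e
```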
Recent advances in modeling image degradation caused by atmospheric
turbulence have been made at the U.S. Army Night Vision Lab⁴ by
researchers who compared model predictions and field-measurement videos.
They found good agreement for low and medium turbulence strength (values
of the refractive-index structure constant C_n² between 5 × 10⁻¹⁴ and
10 × 10⁻¹⁴ m⁻²/³) and for short and medium ranges (300 and 650 m), as seen
in Fig. 7.8. The main points of the model are the use of a short-exposure
atmospheric MTF expression to account for blurring and an angle-of-arrival
expression to account for the motion of image features.

7.4 Aerosol-Scattering MTF


Forward scattering from airborne particles (aerosols) also causes image
degradation. We consider a volume medium with particles of radius a, as seen
in Fig. 7.9, and assume that the particle concentration is sufficiently low that
multiple-scattering processes are negligible. We consider diffraction from the
particles to be the primary beam-spreading mechanism.⁵,⁶ According to
Eq. (7.5), attenuation of the transmitted beam power φ is caused by both
absorption (exponential decay coefficient = A) and scattering (exponential
decay coefficient = S). Thus, the 1/e distance for absorption is 1/A, and for
scattering it is 1/S:
φ(z) = φ₀ e^(−(A+S)z).  (7.5)

Figure 7.9 Forward-scattering model for aerosols.

Absorption is not a diffraction process and does not depend on spatial
frequency. Because the absorption effects are normalized out when we set
MTF(ξ = 0) = 1, only the scattering process is important for the development
of an aerosol MTF.
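A short sketch of Eq. (7.5) (coefficient values purely illustrative) confirms the 1/e distances quoted above; because absorption multiplies every spatial frequency by the same factor e^(−Az), it divides out when the MTF is normalized to unity at ξ = 0.

```python
import math

def transmitted_power(phi0, A, S, z):
    """Beer-Lambert attenuation of Eq. (7.5): phi(z) = phi0 * exp(-(A + S) z)."""
    return phi0 * math.exp(-(A + S) * z)

A, S = 0.2, 0.5   # absorption and scattering coefficients [1/km], illustrative
print(transmitted_power(1.0, A, 0.0, 1.0 / A))  # pure absorption: 1/e at z = 1/A
print(transmitted_power(1.0, 0.0, S, 1.0 / S))  # pure scattering: 1/e at z = 1/S
```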
We typically assume that the particle size a > λ, consistent with the usual
range of particle sizes of interest for aerosols, 100 μm < a < 1 mm. Thus, the
size of the diffracting object a for scattering is much smaller than was w for the
case of atmospheric turbulence, with consequently larger forward-scatter
diffraction angles. If a ≈ λ, which is important only for very small particles, or
for imaging in the infrared portion of the spectrum, the scattering is nearly
isotropic in angle. In this case, the image quality is significantly degraded.
Within the limitation of the forward-scatter assumptions, the impulse
response for aerosol scatter has a narrow central core from the unscattered
radiation, along with a diffuse scattered component. This is similar in
functional form to the plot seen in Fig. 7.6, but different in scale. The spatial
extent of the halo is wider for aerosol scattering than for turbulence because of
the smaller size of the scattering particles. The aerosol-scattering MTF has
two functional forms: one corresponding to the low-frequency rolloff region
resulting from the Fourier transform of the broad halo and one corresponding
to the flat high-frequency region resulting from the transform of the central
narrow portion of the impulse response. The transition spatial frequency ξ_t
marks the boundary between these two functional forms. For the aerosol
MTF we have

MTF(ξ) = exp{−Sz(ξ/ξ_t)²},  for ξ < ξ_t, (7.6)

and

MTF(ξ) = exp{−Sz},  for ξ > ξ_t, (7.7)

where z is the propagation distance, and the transition frequency is


ξ_t = a/λ (7.8)

in angular spatial frequency [cycles/radian] units. Shorter wavelengths and
larger particles yield less diffraction and result in better image quality. For
longer paths (larger z), the MTF decreases at all frequencies, as shown in
Fig. 7.10. For more scattering (larger S), the MTF decreases at all frequencies,
as shown in Fig. 7.11.

Figure 7.10 Increasing the propagation path decreases the aerosol-scattering MTF at all frequencies.

Figure 7.11 Increasing the scattering coefficient decreases the aerosol-scattering MTF at all frequencies.
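Equations (7.6)–(7.8) combine into a single piecewise function. The sketch below (our own variable names, parameter values purely illustrative) checks that the two branches meet continuously at ξ_t and that the high-frequency plateau is flat at exp(−Sz).

```python
import math

def aerosol_mtf(xi, S, z, a, lam):
    """Piecewise aerosol-scattering MTF of Eqs. (7.6)-(7.8).

    xi in cycles/rad; S = scattering coefficient, z = path length,
    a = particle radius, lam = wavelength (a and lam in the same units).
    """
    xi_t = a / lam                                   # transition frequency, Eq. (7.8)
    if xi < xi_t:
        return math.exp(-S * z * (xi / xi_t) ** 2)   # low-frequency rolloff, Eq. (7.6)
    return math.exp(-S * z)                          # high-frequency plateau, Eq. (7.7)

# Illustrative values: 50-um particles imaged at 1-um wavelength.
S, z, a, lam = 0.3, 2.0, 5.0e-5, 1.0e-6
xi_t = a / lam
print(aerosol_mtf(0.0, S, z, a, lam))       # 1.0
print(aerosol_mtf(xi_t, S, z, a, lam))      # plateau value exp(-S*z)
print(aerosol_mtf(2 * xi_t, S, z, a, lam))  # same plateau: flat above xi_t
```

Increasing S or z scales the exponent in both branches, lowering the whole curve, as in Figs. 7.10 and 7.11.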

7.5 Conclusion
This chapter provided a brief consideration of MTF contributions from
motion, vibration, turbulence, and aerosols. The treatment is intended as a
first-order approach to a complex topic. Additional information can be found
in the references and further-reading list.

References
1. O. Hadar, I. Dror, and N. S. Kopeika, “Image resolution limits resulting
from mechanical vibrations. Part IV: real-time numerical calculation of
optical transfer functions and experimental verification,” Opt. Eng. 33(2),
566–578 (1994) [doi: 10.1117/12.153186].
2. D. Wulich and N. S. Kopeika, “Image resolution limits resulting from
mechanical vibrations,” Opt. Eng. 26(6), 529–533 (1987) [doi: 10.1117/12.
7974110].
3. J. W. Goodman, Statistical Optics, John Wiley and Sons, New York
(1985).
4. K. J. Miller, B. Preece, T. W. Du Bosq, and K. R. Leonard, “A data-
constrained algorithm for the emulation of long-range turbulence-degraded
video,” Proc. SPIE 11001, 110010J (2019) [doi: 10.1117/12.2519069].
5. Y. Kuga and A. Ishimaru, “Modulation transfer function and image
transmission through randomly distributed spherical particles,” JOSA A
2(12), 2330–2336 (1985).
6. D. Sadot and N. S. Kopeika, “Imaging through the atmosphere: practical
instrumentation-based theory and verification of aerosol MTF,” JOSA A
10(1), 172–179 (1993).

Further Reading
Andrews, L., and R. Phillips, Laser Beam Propagation through Random
Media, Second edition, SPIE Press, Bellingham, Washington (2005) [doi:
10.1117/3.626196].
Bohren, C. F., and D. R. Huffman, Absorption and Scattering of Light by
Small Particles, Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim
(1998).
Ishimaru, A., Wave Propagation and Scattering in Random Media, Wiley-
IEEE Press (1997).
Kopeika, N., A System Engineering Approach to Imaging, SPIE Press,
Bellingham, Washington, Chapters 14, 16, and 17 (1998) [doi: 10.1117/3.
2265069].
Zege, E. P., A. P. Ivanov, and I. L. Katsev, Image Transfer Through a
Scattering Medium, Springer-Verlag, Berlin (1991).
Index

A
aberrations, 24, 27, 29, 33
aerosol scattering, 135
aliasing, 44, 90
angular spatial frequency, 6, 24
astigmatism, 30
atmospheric turbulence, 132
autocorrelation, 21, 25, 99
auxiliary optics, 118

B
bar target, 126
bar-target-to-MTF conversion, 92
birefringent filters, 48
boost filter, 62

C
charge-carrier diffusion, 61
charge-transfer inefficiency, 60
coherence effects, 120
coma, 30
contrast transfer function (CTF), 86
convolution theorem, 7, 41
critical spatial frequencies, 18
crosstalk MTF, 59
cutoff frequency, 24, 47, 105

D
defocus, 12, 29
detection, recognition, and identification, 18
detector arrays, 39
detector footprint, 41
diffraction, 18
diffraction-limited MTF, 22
diffraction MTF, 25, 117
division by zero, 79

E
edge-spread function (ESF), 70, 74, 123
electro-optical systems, 39
electronic networks, 61
electronics noise, 64, 79

F
fiber bundles, 58
finite source size, 78
flat field, 8, 121
focal-plane array (FPA), 39, 51
four-bar target, 89
frame grabbers, 61

G
geometrical MTF, 28, 117
ground glass, 113–114
ground-glass diffuser, 100, 116, 120

I
image modulation depth (IMD), 88
impulse response, 1
instantaneous field of view (IFOV), 55
integrating spheres, 99–100, 104
interlacing, 53

J
Johnson criteria, 18

L
laser speckle, 98
line-spread function (LSF), 68, 74, 123
linearity, 4, 39

M
microdither, 51
microscanning, 51
minimum modulation curve (MMC), 31
modulation depth, 9, 42, 89
modulation transfer (MT), 11
modulation transfer function, 11
motion blur, 129
MTF area (MTFA), 17, 63
multiplication of transfer function, 8, 115

N
noise-equivalent modulation (NEM), 16, 63
noise targets, 97
normalization, 8, 10, 87, 121
Nyquist frequency, 44, 51, 100–101, 105

O
obscured-aperture systems, 25
optical transfer function (OTF), 9
oversampled knife-edge test, 81

P
phase reversal, 12, 29
phase transfer function (PTF), 9, 12
point-spread function (PSF), 1, 67, 113
power spectral density (PSD), 64, 79, 97

R
random-transparency target, 104
resolution, 16

S
sampling, 43
sampling MTF, 50
scan velocity, 55, 62
separability, 43
separable function, 6
shift invariance, 4, 39, 49, 97, 108
signal-averaging techniques, 77
signal-processing-in-the-element (SPRITE) detectors, 61
spatial frequency, 5
spherical aberration, 30
square wave, 124
square-wave targets, 86
Strehl ratio, 28

T
through-focus MTF, 34
time-delay-and-integration (TDI), 57

V
vibration, 129

W
wavefront error, 30
white noise, 106
Glenn D. Boreman is Professor and Chairman of the
Department of Physics & Optical Science and Director of
the Center for Optoelectronics & Optical Communications at
the University of North Carolina at Charlotte. He is co-
founder and Board Chairman of Plasmonics, Inc. (Orlando).
From 1984 to 2011 he was on the faculty of the University of
Central Florida, where he is now Professor Emeritus. He has
supervised to completion 35 MS and 27 PhD students. He
has held visiting research positions at IT&T (Roanoke), Texas Instruments
(Dallas), US Army Night Vision Lab (Ft. Belvoir), McDonnell Douglas
Astronautics (Titusville), US Army Redstone Arsenal (Huntsville), Imperial
College (London), Universidad Complutense (Madrid), Swiss Federal
Institute of Technology (Zürich), Swedish Defense Research Agency
(Linköping), and University of New Mexico (Albuquerque). He received
the BS in Optics from the University of Rochester, and the PhD in Optics
from the University of Arizona. Prof. Boreman served as Editor-in-Chief of
Applied Optics from 2000 to 2005, and Deputy Editor of Optics Express from
2014 to 2019. He is coauthor of the graduate textbooks Infrared Detectors and
Systems and Infrared Antennas and Resonant Structures (SPIE Press), and
author of Modulation Transfer Function in Optical & Electro-Optical Systems
(SPIE Press) and Basic Electro-Optics for Electrical Engineers (SPIE Press).
He has published more than 200 refereed journal articles in the areas of
infrared sensors and materials, optics of random media, and image-quality
assessment. He is a fellow of SPIE, IEEE, OSA, and the Military Sensing
Symposium. He is a Professional Engineer registered in Florida. Prof.
Boreman served as the 2017 President of SPIE.
Modulation Transfer Function in Optical and Electro-Optical Systems
Second Edition
Glenn D. Boreman

This second edition, which has been significantly expanded since the 2001 edition, introduces the theory and applications of the modulation transfer function (MTF) used for specifying the image quality achieved by an imaging system. The book begins by explaining the relationship between impulse response and transfer function, and the implications of a convolutional representation of the imaging process. Optical systems are considered first, including the effects of diffraction and aberrations on the image, with attention to aperture and field dependences. Then electro-optical systems with focal-plane arrays are considered, with an expanded discussion of image-quality aspects unique to these systems, including finite sensor size, shift invariance, sampling MTF, aliasing artifacts, crosstalk, and electronics noise. Various test configurations are then compared in detail, considering the advantages and disadvantages of point-response, line-response, and edge-response measurements. The impact of finite source size on the measurement data and its correction are discussed, and an extended discussion of the practical aspects of the tilted-knife-edge test is presented. New chapters are included on speckle-based and transparency-based noise targets, and square-wave and bar-target measurements. A range of practical measurement issues are then considered, including mitigation of source coherence, combining MTF measurements of separate subsystems, quality requirements of auxiliary optics, and low-frequency normalization. Some generic measurement-instrument designs are compared, and the book closes with a brief consideration of the MTF impacts of motion, vibration, turbulence, and aerosol scattering.

SPIE Press
P.O. Box 10
Bellingham, WA 98227-0010
ISBN: 9781510639379
SPIE Vol. No.: TT121