Download as pdf or txt
Download as pdf or txt
You are on page 1of 36

Sensitivity and Image Quality of Digital Cameras

27.10.04 Version 1.1 − work in progress

Sensitivity and Image Quality of Digital Cameras


Dr. Friedrich Dierks, Basler AG*)

Abstract - Image quality and sensitivity are important criteria when selecting a camera for machine vision. This
paper describes how these quantities are defined and how they are measured. It is intended as a practical guide
and the measurements can be done quite easily. Nevertheless, a thorough theoretical background is presented
for the different tasks in keeping with the German engineer’s saying “Es gibt nichts Praktischeres als eine gute
Theorie” (there is nothing more practical than a good theory).
making camera comparisons yourself. Since data
Different machine vision tasks require different
sheets tend to be unclear and to give measures
cameras. In a dim light situation, a camera with
which look good but often do not help with the task
high absolute sensitivity is required. If bright light
of choosing the best camera for your situation,
is present, a good signal-to-noise ratio will be pre-
these skills are quite important.
ferred. If the scene includes both very bright and
very dark areas, you will want a camera with a Although many additional criteria such as price,
large dynamic range. If there are strong reflections size, interface type, support, etc. must be consid-
in the image, a camera with low blooming and ered when making a camera purchase decision, this
smear is essential. And so on. There is no single article focuses only on image quality and sensitiv-
camera that fits all tasks perfectly, instead, sensors ity. It also only looks at machine vision tasks and
and cameras are specialized for specific tasks and it does not discuss other applications like scientific
is the machine vision engineer’s job to choose the image processing.
best camera for a given task.

Mathematical Model

Parameters

Matching the
model to the
data

Measurement
data

Fig. 1 Basler A600f camera


Camera
If you need to choose a camera for your machine
vision task and you don’t know how to evaluate the
camera’s image quality and sensitivity in a system- Fig 2 Characterizing a camera by matching a
atic way, this paper will help. The theoretical back- mathematical model to measurement data
ground presented in this paper will let you better
understand the data sheets for the cameras available For new machine vision designs, use of digital
on the market and the description of the practical cameras is in today's mainstream. Thus all measures
measurement tasks will give you a good basis for given in this paper assume digital camera output. If

*)
Web: http://www.baslerweb.com/
Email: friedrich.dierks@baslerweb.com

Image Quality of Digital Cameras.doc Page 1 of 36  Copyright Basler, 2004


Sensitivity and Image Quality of Digital Cameras

27.10.04 Version 1.1 − work in progress

you want to analyze an analog camera, you can text. For more information, you might want to look
treat the camera and a suitable frame grabber to- at [1], [11], [13], [5], or [14].
gether as though they were a digital camera.
The paper starts with an explanation of the basic 1 Modeling the Camera’s Behavior
processes inside a camera. A simple mathematical This section describes the fundamental mechanisms
model is derived for the camera and this is the basis governing the conversion of the light shining on a
of most of the measures described in the rest of the single pixel in a camera’s sensor into the corre-
article. Knowing the parameters of the model is key sponding gray value in the resulting digital image.
to comparing different cameras with respect to a
given machine vision task (see Fig 2). The parame-
A number of photons ...
ters are determined by matching the mathematical
model to measurement data such as the irradiance ... hitting a pixel during exposure time ...
of light, temporal and spatial noise in the image,
etc. The final section describes how to identify and ... creates a number of electrons ...
describe image artifacts that are beyond the scope
of the mathematical model, for example, image
... forming a charge that is converted
interference caused by EMI, defective pixels, by a capacitor to a voltage ...
blooming and smear.
... and then amplified ...

... and digitized ...

... resulting in a digital gray value.


42

Fig. 4 From photons to digital gray values

The signal flow from light to digital numbers can


be summarized as follows (Fig. 4). During exposure
time, a certain number of photons hits the pixel
under consideration, with each photon having a
certain probability of creating a free electron. The
electrons are collected in the pixel and form a
charge which is, after the end of exposure, trans-
formed into a voltage by means of a capacitor. This
voltage is amplified and digitized, yielding a digital
gray value for the corresponding pixel in the image
delivered by the camera.
Fig. 3 Basler A102f camera
In the next few paragraphs, the separate steps of
To illustrate the measurements, examples are given this process are analyzed in more detail. The goal is
for two Basler 1394 cameras − the A600f CMOS to derive a simple mathematical description that is
camera which delivers 105 fps @ VGA (see Fig. 1) detailed enough to allow a basic understanding of a
and the A102f CCD camera which delivers 15 fps camera’s sensitivity and image quality.
@ 1.45 MPixel. All measurements are performed
with a tool called DeepViewer. DeepViewer was 1.1 Step One: From Photons to Electrons
developed at Basler and is used to investigate and A photon hitting a pixel may create a free electron.
optimize the image quality of the cameras we build. It does so with a certain probability. CCD and
Some of the methods described here have been CMOS sensors treat the freed electrons differently.
incorporated in the EMVA 1288 “Standard for Nevertheless, both types of sensors can be de-
Measurement and Presentation of Specifications for scribed by the same mathematical model.
Machine Vision Sensors and Cameras” (see [15]).
The standard provides a very concise description of 1.1.1 CCD Sensors
how to characterize a camera. This paper focuses
more on explanations and gives background infor- In a CCD sensor, the free electrons are caught in a
mation for the methods described in the standard's "potential well" in each pixel. At exposure start, the
wells are emptied. After the end of exposure, the
electrons collected in the pixel wells are shifted out

Image Quality of Digital Cameras.doc Page 2 of 36  Copyright Basler, 2004


Sensitivity and Image Quality of Digital Cameras

27.10.04 Version 1.1 − work in progress

of the sensor sequentially. When each charge While a CCD pixel outputs a charge packet, an
packet arrives at the sensor array’s border, it is active CMOS pixel outputs a voltage; the conver-
loaded into a capacitor yielding a voltage propor- sion from charge to voltage is done by a capacitor
tional to the total number of the electrons in the contained inside of the CMOS pixel. The more
package. photons that hit the pixel during exposure time, the
lower the resulting voltage.
For example, a capacitor of size 32 fF (= femto
Farad) yields a conversion factor of: CCD sensors have only one conversion capacitor
and it is used for all charge packets sequentially. In
qe 1.6⋅ 10−19 C contrast, each CMOS pixel contains its own con-
Kc = = = 5µV/e- (1)
C 32 ⋅ 10−15 C/V version capacitor. On one hand, this allows faster
sensors to be built because massive parallelization
which represents the capacitor’s voltage increase is possible. But on the other hand, small variations
when a single electron is added. in the size of these capacitors cause variations in
The size of the conversion capacitor is typically the conversion factor and this yields higher fixed
chosen so that the maximum expected number of pattern noise (see section 1.6.2).
electrons will create the maximum voltage swing
In the mathematical model of the active CMOS
∆u suitable for the utilized semiconductor technol- pixel, the number of electrons initially charged into
ogy. Assuming ∆u = 1V , the maximum number of the capacitor has the same role as the full well ca-
electrons a capacitor of 32 fF can hold becomes: pacity in a CCD pixel. Saturation occurs if the ca-
∆u 1V pacitor is completely discharged.
N e. sat = = = 200ke − (2)
K c 5µV/e- Because it is easier to get rid of superfluous photo-
current in an active CMOS pixel containing several
The conversion capacitor is not the only part in the transistors than it is in a relatively simple CCD
sensor that has a limited ability to hold electrons. pixel, CMOS sensors are much more robust against
The pixel well itself, the transfer registers and the blooming and smear. This is one of the advantages
subsequent electronics can saturate as well. of CMOS sensors over CCD sensors.
The pixel well in particular can hold only a certain To simplify matters, the rest of this article refers to
number of electrons and this is called the full well CCD sensors only. Nevertheless, the results are also
capacity of the pixel. If more free electrons are valid for CMOS sensors.
generated, the pixel is over-exposed and “spills
over”. This can cause two types of artifacts in the 1.1.3 Mathematical Model for Step One
image:
Photons do not arrive at regular time intervals like
 If the electrons spilling over are caught by
..p….p….p….p.. but rather irregularly like
adjacent pixels, these pixels also start to be-
come bright, even if they are not illuminated. ..p…p..….p.p…p. The total number of photons nq
This effect is called blooming. arriving during exposure time is thus a stochastic
variable that can be described by a Poisson distribu-
 If the electrons spilling over are caught by tion. The Poisson distribution is determined by a
transfer registers that are shifting the charge
single parameter, the mean µ p (for details see [1]):
from other pixels toward the conversion ca-
pacitor, the whole column where the over- nq ~ P ( µ p ) (3)
exposed pixel is located will brighten. This ef-
fect is called smearing. The mean can be measured using a radiometer as
The robustness against blooming and smear is an will be described in more detail in section 2.2. For
important quality measure for a CCD camera (see µ p > 100 , the Poisson distribution looks pretty
section 3.6). much like the Normal distribution. The Poisson
distribution has the important characteristic that its
1.1.2 CMOS Sensors variance equals its mean:
The active pixel of a CMOS sensor can be modeled σ p2 = µ p (4)
as a capacitor that is pre-charged at exposure start
and discharged during exposure time by the current So by knowing the mean number of photons arriv-
from a photodiode. This so-called photocurrent is ing at the pixel, you also know the noise level of the
formed by the free electrons created by the incom- incoming light signal which otherwise would be
ing photons. hard to measure.

Image Quality of Digital Cameras.doc Page 3 of 36  Copyright Basler, 2004


Sensitivity and Image Quality of Digital Cameras

27.10.04 Version 1.1 − work in progress

Note that light has an inherent signal-to-noise ratio tron. The ratio of light sensitive area to the to-
which increases with the square root of the number tal (geometrical) area of the pixel is called the
of photons hitting a pixel during exposure time: geometrical fill factor. For line scan sensors,
the geometrical fill factor is nearly 100% be-
µp cause there is enough room on the chip on both
SNR p = = µp (5)
σp sides of the pixel line to hold all of the neces-
sary electronics. The fill factor of area scan
This equation explains why using more light typi- cameras depends on the number of transistors
cally results in better image quality. required for a single pixel.
Each arriving photon might − with a certain prob-  To guide as many photons as possible to the
ability η − create a free electron. Thus the number light sensitive parts of the pixel, some sensors
of created electrons ne is Poisson also distributed: are coated with micro lenses which focus the
incoming light on the photo-sensitive part of
ne ~ P ( µ e ) (6) the pixel (Fig. 5). This results in a so-called op-
tical fill factor. One drawback of micro lenses
The mean number of created electrons can be com- is that the sensitivity becomes more dependant
puted from the mean number of arriving photons as: on the angle at which the light rays hit the sen-
sor (see section 3.8). Fixed pattern noise is also
µ e = ηµ p (7)
increased because the transmittance of the
and since the signal is Poisson distributed, the vari- lenses varies slightly due to non-homogeneities
ance of the electron signal equals this mean and can in the lens production process.
also be computed from the mean number of pho-  If a photon actually hits the light sensitive part
tons: of the pixel, it creates a free electron with a
probability called the quantum efficiency of the
σ e2 = µ e = ηµ p = ησ 2p (8)
pixel. Quantum efficiency is dependent on the
The probability η is called the total quantum effi- wavelength of the incoming light, the sensor
material and the sensor’s layer structure.
ciency and is the result of several combined effects:
 Because they hit non light-sensitive parts of the
pixel, such as the shift registers in a CCD sen- micro lense
sor, some photons hitting the (geometrical)
pixel never have a chance to create a free elec-

light sensitive area


The following naming convention is used for the
mathematical model:
geometrical pixel
 n x denotes a number of things of type x. n x is
a stochastic quantity. Fig. 5 : Enhancing the fill factor using micro lenses
 µ x denotes the mean of the quantity x. For comparing cameras, it's sufficient to know the
total quantum efficiency, i.e., the product of the
 σ x denotes the standard deviation and σ x2 the optical fill factor and the quantum efficiency. For
variance of the quantity x. most cameras (and even sensors) the quantum effi-
ciency and the fill factor are not usually provided
 The subscript p denotes quantities related to the by the manufacturer and are hard to measure sepa-
number of photons hitting the geometrical pixel rately without special apparatus.
during exposure time.
The linear correspondence between the number of
 The subscript e denotes quantities related to the photons and the number of electrons in equation (7)
number of electrons collected in the pixel. only holds true until the pixel saturates. In this case,
the mean stays at saturation level even if more
 The subscript d denotes quantities related to the
photons arrive:
number of (fictive) dark noise electrons col-
lected in the pixel. µe = µ e. sat (9)
 The subscript y denotes quantities related to the
digital gray values.

Image Quality of Digital Cameras.doc Page 4 of 36  Copyright Basler, 2004


Sensitivity and Image Quality of Digital Cameras

27.10.04 Version 1.1 − work in progress

The variance becomes zero:


The mathematical model for a single pixel con-
σ e2. sat → 0 (10) sists of the following equations:
because additionally generated free electrons will µ y = K (µ e + µ d ) = K (ηµ p + µ d )
no longer be caught by the well and the number of
electrons in the well stays stable. ( 2
σ y2 = K 2 ηµ p + σ d )
1.2 Step Two: From Electrons to Digital and has the structure shown below.
Numbers
Step one from photons to electrons yields the num- dark noise
ber ne of electrons inside the capacitor. Step two total
quantum nd conversion
includes the conversion to a voltage, the amplifica- efficiency
tion and the digitization of the signal yielding a gain
digital gray value y . ne
np η K y
The signal processing in the camera adds noise to
the light signal at all stages. For a first go at camera number of number of digital
photons electrons grey value
comparison, however, it is sufficient to assume that
the whole process simply adds constant white noise
to the number of electrons in the conversion capaci-
tor. This noise is called dark noise (or read noise or µ y = K (µ e + µ d ) = K (ηµ p + µ d ) (13)
the noise floor) in contrast to the photon noise
which is the noise coming from the light itself. and since photon noise and dark noise are stochasti-
cally independent, the variance becomes:
1.2.1 Mathematical Model Continued
σ y2 = K 2 (σ e2 + σ d2 ) = K 2 (ηµ p + σ d2 ) (14)
The whole conversion from electrons to digital gray
values covers several stages and can be modeled by Note that in equation 14, the variance of the noise is
a single linear scaling factor K . This is called the multiplied by K 2 while in equation 8, the variance
total conversion gain (also called system gain) and is multiplied by η only (no square!). This shows
has the unit of digital numbers per electron [DN/e-]. the fundamental difference between step one and
A bit more imaginable and thus easier to remember step two. Step one from photons to electrons deals
is the inverse 1 K , which is the number of elec- with thinning out stochastic events by applying a
trons that will cause the digital output number to probability η while step two from electrons to
increase by one. digital values deals with scaling a continuous signal
linearly by a factor K . This difference in how the
Digital cameras typically come with a Gain and a
signal transfer works makes it possible to measure
Brightness control. It is especially important to
K and η separately even if you can't measure the
understand the role of Gain G when comparing the
sensitivity of cameras. The Gain feature can be number of free electrons directly.
used to adjust the total conversion factor:
1.3 Image Quality and Sensitivity
K = K 0G (11)
Image quality can be described by the signal-to-
The dark noise and any offsets added to the signal noise ratio (SNR) of the digital output signal. The
inside the camera are modeled via Gaussian white SNR is defined as:
noise:
µ y − µ y .dark ηµ p
SNR y = =
(
nd ~ N µ d ,σ d2 ) (12) σy (ηµ p + σ d2 )
(15)

with a variance σ d2 and a mean µ d , which can be


with:
adjusted by the camera’s Brightness control.
µ y.dark = Kµ d (16)
The digital output signal y is the sum of the pho-
ton generated electrons and the dark noise ampli- being the signal the camera shows while capped.
fied by the total conversion gain. Thus the mean of Only the photon induced part of the signal is of
the output signal becomes: interest here and the constant offset can be sub-
tracted from the signal. The SNR varies with the

Image Quality of Digital Cameras.doc Page 5 of 36  Copyright Basler, 2004


Sensitivity and Image Quality of Digital Cameras

27.10.04 Version 1.1 − work in progress

pixel exposure µ p and depends on only two camera This quantity is the smallest detectable amount of
parameters, the quantum efficiency η and the dark light and is called the absolute sensitivity threshold
of a camera. By convention, a signal buried under
noise’s variance σ d2 . Gaussian white noise can be detected only if it’s
amplitude is greater than or equal to the standard
Note that the image does not depend on the conver- deviation of the noise, a condition which can be
sion gain K and thus can't be influenced by the
expressed as SNRy ≥ 1 or ld SNRy ≥ 0 yielding the
camera’s Gain control G . This is true because by
amplifying the signal, you also amplify the noise above equation.
and thus the image quality is not improved (see, In the presence of a great amount of light
however, section 1.4).
( ηµ p >> σ d2 ), the photon noise dominates equa-
ld SNRy tion 15 and the signal-to-noise ratio becomes:
ld DYN
1
2 ld µe.sat
SNR y ≈ ηµ p = η SNR p (21)
/2
=1
pe
slo Thus the SNR of the output signal follows the SNR
5 ld = log2
of the light signal and increases with the square of
1
=
pe

the amount of pixel exposure. Under these operat-


slo

1 5 10 15
ing conditions, only an increase in the quantum
1
ld µp efficiency will increase the image quality.
-ld η ld σ d-ld η ld µp.max
The corresponding part of the graph in Fig. 6 is a
= ld µp.min
line with a slope of 1 2 crossing the abscissa at
Fig. 6 : Signal to noise ratio vs. number of photons
1η .
η =50%, σ d =64e- , µ e. sat =65ke-
The maximum achievable signal-to-noise ratio is
Fig. 6 shows the output signal-to-noise ratio reached shortly before saturation and is given by
SNR y versus the number of collected photons µ p the ordinate of the rightmost point of the graph in
Fig. 6:
in a double-logarithmic diagram.
For analog cameras, a base ten logarithm and the SNR y. max = ηµ p. max = µ e.sat (22)
ratio of powers is traditionally used yielding the
unit [dB] for the SNR. For digital cameras, it is This quantity describes the maximum achievable
more suitable to use a base two logarithm (ld = image quality of a camera and depends only on the
logarithmus dualis) yielding the unit of [bit] which saturation capacity µ e. sat of the camera.
is related to [dB] by a factor 6.02 [dB]/[bit]:
SNRdB = 20 log SNR (17) Quality measures derived from the mathematical
model:
SNRbit = ld SNR
log SNR 20 log SNR SNRdB (18) Signal-to-Noise ratio
= = =
log 2 20 log 2 6.02 µ y − µ y .dark ηµ p
SNR y = =
The characteristic curve in the diagram can be ap- σy (ηµ p + σ d2 )
proximated by two straight lines. In the case of low
light ( ηµ p << σ d2 ), the dark noise dominates equa- Dynamic Range
tion 15 and the signal to noise ratio becomes: µ p. max µ y. max
DYN = =
ηµ p µ p. min σ y .temp.dark
SNR y ≈ (19)
σd Absolute Sensitivity Threshold
The corresponding part of the graph in Fig. 6 is a σd
line with a slope of 1 crossing the abscissa at: µ p. min =
η
σd Maximum Achievable Image Quality
µ p. min = (20)
η
SNR y. max = ηµ p. max = µ e.sat

Image Quality of Digital Cameras.doc Page 6 of 36  Copyright Basler, 2004


Sensitivity and Image Quality of Digital Cameras

27.10.04 Version 1.1 − work in progress

A single image can contain bright areas as well as  Make sure the camera delivers enough dy-
dark areas. An important quality measure for a namic range to reveal both the bright as well as
camera is the quotient of the largest to the smallest the dark parts of your image.
detectable signal level. This is called the dynamic
There are some limitations to the diagram in Fig. 7:
range and is defined as:
 It is valid only for monochromatic green light.
µ p. max µ y. max In section 1.7 we’ll see how to deal with white
DYN = = (23)
µ p. min σ y .temp.dark light.

This quantity can be read as the horizontal exten-  It deals with temporal noise only. In section
sion of the graph in Fig. 6. Note that the above 1.6.2 we’ll see how to deal with spatial noise.
equation is valid only for cameras with linear re-  It does not take limited digital resolution into
sponse to light. High-dynamic-range (HDR) cam- account. We’ll discuss that problem in section
eras behave different. 1.6.3.
Fig. 7 shows real measurement data taken from the  It assumes a constant output signal with added
Basler A600f and A102f cameras with monochro- white noise. In reality, the noise might not be
matic green light at λ = 550nm . white and there might even be artifacts in the
image such as stripes, etc. Section 3 is devoted
to that problem.
8 A102f
7 A600f 1.4 How Gain Fits in
Excellent
6 Acceptable Most digital cameras have a gain feature which lets
you adjust the total conversion gain K. This quan-
5
tity, however, is not part of the equations governing
4 the image quality. So you may wonder why a Gain
3
feature exists at all.

2 If the analog-to-digital converter (ADC) has enough


resolution to deliver the full dynamic range and all
1 bits are delivered to the user, there is indeed no
0 need for a gain feature, at least not for machine
vision purposes. The A600f, for example, does not
4 6 8 10 12 14 16 18 provide Gain in its Mono16 mode where the full
ld MeanP
10 bit ADC output is delivered.
Fig. 7 Signal-to-noise ratio of the Basler A600f There are however several use cases were a Gain
and A102f cameras. For details see text. feature is useful:

This diagram is a good starting point when check- To reduce the bandwidth required to transfer the
ing to see if a camera is suitable for a given ma- image from the camera to the PC, a Mono8 format
chine vision task: is often used even with cameras whose ADC can
deliver more than 8 bits per pixel.
 Determine the available pixel exposure in the
darkest and the brightest parts of your image Due to quantization noise (see section 1.6.3), you
(how to do this and how to create the diagram cannot deliver more than 9 bit dynamic range with
will be explained later). 8 bit data. By using Gain, however, you can map a
suitable section from the 10 bit curves in Fig. 7 to
 Check the diagram to see if the camera delivers the 8 bit output data. With high Gain, the lower bits
enough SNR to get acceptable image quality of the data are forwarded to the PC and with low
over the whole exposure level range. Using the Gain, the upper bits.
ISO 12232 definition (see [16]), an “excellent”
SNR value is 40 (= 5.3 bit) and an “acceptable” Gain also makes sense in the context of displaying
SNR value is 10 (= 3.3 bit). If more exposure is video on a monitor. Although amplifying the signal
needed, try increasing the shutter time, lower- does not increase image quality because the noise is
ing the f-number of the lens or adding more amplified at the same rate as the useful signal, using
light. gain might make the image look better on screen.
Despite the arguments given above, apparently
there are cameras were increasing the Gain does

Image Quality of Digital Cameras.doc Page 7 of 36  Copyright Basler, 2004


Sensitivity and Image Quality of Digital Cameras

27.10.04 Version 1.1 − work in progress

increase image quality, that is, SNR. This happens pixel image pixel area
lens diameter d
if noise is added to the signal after(!) the gain is area yo x yo yi x yi
applied (see Fig 8).

nd1 nd2

ne ao ai
np η K1 K2 y
object in lens with image on
scene focal length f sensor
variable gain

Fig. 9 : Using sensors with different pixel sizes


Fig 8 Adding noise after the Gain
Each pixel on the sensor with area Ai = yi2 corre-
Assuming µ d = 0 the model equations become: sponds to a small patch in the scene pixel with area
µ y = K1K 2ηµ p (24) Ao = y o2 . This patch emits light with a certain radi-
ance, which is radiated power [W] per area [m2]
(
σ y2 = (K1K 2 )2 ηµ p + σ d21 + K 22 σ d22) (25) and solid angle [sr]. Only a small amount P of the
emitted power is caught by the lens:
if the noise sources d1 and d2 are stochastically
independent. Referring the noise source n2 back to πd 2
P = Ro Aoτ in [W] (29)
the first sum node yields: 4ao2

(
σ y2 = K 2 ηµ p + σ d2 ) (26) P is computed as the product of the object’s radi-
ance Ro , the pixel patch’s area Ao it the scene, the
with the total conversion gain: transmission coefficient τ = 70%...90% , and the
solid angel the lens covers with respect to the pixel
K = K1K 2 (27)
patch. The solid angel is computed as the area of
and the standard deviation of the noise: the lens π D 2 4 divided by the square of the dis-
2 tance ao between the lens and the light emitting
σ 
σ d2 = σ d21 +  d 2  (28) patch.
 K1 
Provided the lens diameter d is kept constant, the
Note that σ d describes a fictive electron noise light emitted from the patch in the scene is caught
source. This fictive source is composed of the noise by the lens and mapped to the pixel on the sensor
actually added to the electrons in the sensor and no matter how large the pixel. A pixel with a dif-
other noise added later in the signal chain which is ferent size requires a lens with a different focal
referred back to electrons. The last equation shows length, but as long as the same patch in the scene is
that increasing the first amplification factor K1 mapped to the pixel, the amount of collected and
delivered light is always the same. This condition
decreases the fictive noise measure σ d if noise is must always be met for a fair comparison of cam-
added after the amplification stage. So with cam- eras. If you’re using a camera with higher resolu-
eras containing these kinds of noise sources, in- tion, the total amount of light collected by the lens
creasing the Gain will increase image quality. must be distributed to more pixels and thus each
pixel will end up with a smaller portion of light.
1.5 Pixel Size and Sensor Resolution This is the price you pay for the higher resolution.
How to deal with different pixel sizes and sensor
resolutions poses a common problem with compar- 1.5.1 Binning
ing cameras. Fig. 9 shows a scene mapped onto a A good example of the price you pay for increased
sensor by means of a lens with a focal length f and a resolution is the binning feature on some cameras.
circular aperture with diameter d. With binning, the signal from two of more pixels is
added inside of the camera in order to increase
image quality. Let's assume we are using an 8k line
scan camera that can be made into a 4k line scan
camera by adding the signals from pairs of adjacent

Image Quality of Digital Cameras.doc Page 8 of 36  Copyright Basler, 2004


Sensitivity and Image Quality of Digital Cameras

27.10.04 Version 1.1 − work in progress

pixels. Each pixel has a signal µ y and a dark which includes the focal length f, the distance ao
. Assuming µ d = 0 , the binned signal ~
noise σ d2 y between the object and the lens, and the distance ai
has a mean and a variance of: between the lens and the image on the sensor. Also
of importance is the magnification:
µ &y& = µ y1 + µ y 2 ≈ 2 µ y = 2Kηµ p (30)
size of the image ai y
m= = = i (34)
( (
σ ~y2 = K 2 (ηµ p1 + ηµ p 2 ) + σ d21 + σ d2 2 )) (31)
size of the object ao yo

(
≈ 2K 2 ηµ p + σ d2 ) which is typically m << 1 . Combining the two
equations above yields:
and the signal-to-noise ratio yields:
ai mao
µ ~y ηµ p f = = (35)
2 1+ m m + 1
SNR~y = =
σ ~y 2 ηµ p + σ d2 (32) The machine vision task at hand typically deter-
mines the magnification m and the object distance
= 2 SNR y ≈ 1.4 SNR y
ao . The above equation then yields the required
This shows that a camera with two pixel binning focal length f.
does indeed deliver better image quality than a non-
binned camera – at the price of halving the resolu- When the terms Ao = Ai m2 and
tion. The image quality is better simply because the ma o = (m + 1) f ≈ f , which follow from the equa-
binned pixels in the 4k line scan camera receive tions above, are inserted into eq. 29, it yields:
twice the light as the pixels in the 8k camera.
πd 2 Ai
In general, binning N pixels will increase the sig- P = Roτ (36)
4 f2
nal-to-noise ratio by a factor of N .
With constant diameter d, the radiant power col-
1.5.2 How Much Light Do I Have? lected at the pixel is constant for different pixel
areas Ai as long as the term Ai f 2 can be kept
To compare cameras, you must know how much
light will strike the sensor with your given optical constant. This is exactly the condition the focal
setup. Let's compute the radiant power P in [W] length f must fulfill in order to map the scene patch
collected by a pixel from the object’s radiance Ro to the pixel for different pixel sizes.
in [W/m2sr] for a given lens. The process of map- Note that because vignetting effects are neglected,
ping the light emitted from the patch in the scene to the equation above is valid only for small magnifi-
the pixel on the sensor is governed by the lens cation factors and near the optical axis. For more
equation: details, see [5] for example.
1 1 1 Of course, changing the pixel size has some addi-
= + (33) tional implications not covered by the equation
f ao ai
above:
 Using pixels smaller than 5 µm will get you
into trouble with the optical resolving power of
Rules for fair camera comparisons: today’s typical lens and they will no longer de-
liver a sharp image.
 The scene must be captured at the same resolu-
tion. When comparing cameras with different  Smaller pixels typically have a smaller full
resolutions, crop the image of the camera with well capacity. This limits the maximum signal-
the higher resolution to the same image width to-noise ratio and the dynamic range the sensor
and height as that provided by the camera with can deliver.
the lower resolution.  Due to the geometric fluctuations inherent in
 The cameras must look at the same scene. the semiconductor production process, smaller
When comparing cameras with different pixel pixels tend to have higher spatial noise (see
sizes, use lenses with different focal lengths to section 1.6.2). This is especially true for
make sure the cameras see the same scene. For CMOS sensors.
a fair resolution, the lenses must have the same
aperture diameter.

Image Quality of Digital Cameras.doc Page 9 of 36  Copyright Basler, 2004


Sensitivity and Image Quality of Digital Cameras

27.10.04 Version 1.1 − work in progress

Eq. 36 contains the lens diameter. This is normally eras and adjust the gain settings so that the output
not known, but can be expressed using the lens’ levels µ y are the same for both cameras.
f-number which is defined as:
Finally, measure the standard deviation of the noise
f
f# = (37) σ y in both images and compute the signal-to-noise
d
( )
ratio SNR y = µ y − µ y.dark σ y . The camera with
where d is the diameter of the lens aperture and f is
the focal length. Inserting this relation in eq. 36 the higher SNR value delivers better image quality
yields: and is thus more sensitive. Note that this compari-
son is only valid for the given optical setup in terms
π Ai of light intensity and spectral distribution.
P = Ro τ (38)
4 f #2
1.6 More Details on Noise
The radiance Ro in the scene can be measured with To this point, only a very rough model of the tem-
a radiometer. The transmission of the lens is around poral noise has been used. This section describes
τ = 70%...90% . The area of the pixel Ai comes the different sources of temporal noise in more
from the camera’s data sheet and the f-number f # detail and introduces spatial noise as well as quan-
tization noise to the model.
is your chosen lens setting. The result is the radiant
power collected by a single pixel in [W].
1.6.1 Temporal Noise
The easiest way to determine the amount of light in
a given optical setup is, of course, to use a camera Temporal noise describes the variations in the value
with known total quantum efficiency η and total of a single pixel when observed over time. The
conversion gain K as a measurement device. From most important components are described in this
the mean grey value, the number of photons col- section (for more details see [1] for example).
lected in a pixel during exposure time is computed Photon noise is inherent in light itself and has al-
according to: ready been discussed in previous sections.
µy Pixels deliver not only a photo current, but also a
µp = (39) dark current formed by thermally generated free
ηK
electrons. The fluctuation of this dark current adds
Note that the total conversion gain K is dependent dark current noise (don’t confuse this with the dark
on the camera’s Gain setting. More details on light noise which is the total noise when no light is pre-
measurement are given in section 2.2. sent and contains more components than the dark
current noise). At around room temperature, the
1.5.3 Direct Camera Comparisons dark current approximately doubles each time the
temperature increases by ~8°C. It is therefore im-
If you don’t want to measure the absolute quality of portant to make sure a camera is in thermal equilib-
a camera, but only want to compare two cameras rium before starting any measurement and to keep it
you have at hand in a given optical setup, things as cool as possible during operation in the field.
can be simplified. The dark current of a pixel is described by the num-
First, make sure each pixel receives the same ber of electrons thermally generated per ms expo-
amount of light. If you do the measurement without sure time. Since the number of thermally generated
a lens, adjust the shutter values of the two cameras electrons during exposure time follows a Poisson
so that AiTexp = const were Ai is the area of the distribution, the variance equals the mean:
2
pixel and T is the exposure time. If you do the test µ dark current = σ dark current = N d Texp (40)
with a lens and the cameras have the same pixel
size, simply use the same lens with the same f- Note that many cameras have dark current compen-
number settings and the same shutter for both cam- sation, so the mean might possibly not behave as
eras. If the pixel sizes are different, adjust the focal expected. The noise, however, will be there and
length and the f-number according to the equations when compensation is used will even be increased.
given in the previous section. This happens because the compensation value itself
– since it is estimated from the stochastic signal –
Run a live image for both cameras, cap them and will be noisy and this noise adds to the dark current
adjust the brightness settings so that the dark signal noise.
is the same for both cameras. Then uncap the cam-

Image Quality of Digital Cameras.doc Page 10 of 36  Copyright Basler, 2004


Sensitivity and Image Quality of Digital Cameras

27.10.04 Version 1.1 − work in progress

The strong dependence of the dark current on the To take spatial noise into account, the mathematical
temperature can be modeled as: model for a single pixel must be extended.
ϑ −ϑ0
kd σ y2.total = σ y2.temp + σ y2.spatial
Nd = Nd0 2 (41)
  (45)
Another more imaginable measure is the time re- = K 2 ηµ p + σ d + σ o + S gη 2 µ p 
2 2 2 2
quired to completely fill the sensor’s well with  1424 3 14 4244 3
thermally generated electrons:  temporal spatial 

µ e. sat The spatial noise consists of a constant part σ o2 ,


Tdark current .sat = (42)
Nd which models the inter-pixel variation of the offset
2 2
To render the dark current and its noise negligible, and a part S gη 2 µ p , which increases with the
your exposure time must be much smaller than that. amount of light and models the variation of the
After the capacitor used to convert the electrons gain. Fig. 10 shows the resulting signal flow. n s
into a voltage is reset, the number of electrons re- represents the spatial noise component.
maining in the capacitor varies due to temporal
fluctuations. This variability results in reset noise. n
d
The variance of the reset noise is given by:
n
kTC e
2 np η K y
σ reset = (43)
qe2
n
where k = 1.38 10 −23 J K and is Boltzmann’s s

constant, T the absolute temperature, C the capac-


ity, and qe = 1.6 10 −19 C the charge of the electron. Fig. 10 : Signal flow in the camera
Reset noise can be suppressed by correlated double
Taking spatial noise into account extends the for-
sampling (for details see [1]).
mula for the signal-to-noise ratio of the output sig-
Amplifier and 1/f noise are added to the signal by nal as follows:
the analog amplifier circuitry.
ηµ p
See section 3 for more on EMI noise. SNR y = (46)
(ηµ + (σ
p
2 2
d +σo )+ S η µ )
2 2
g
2
p
1.6.2 Spatial Noise
The offset noise simply adds to the dark noise. The
Spatial “noise” describes the variations that occur gain noise, however, is a new quality of noise. This
from pixel to pixel even when all pixels are illumi- can be seen when it becomes so large that it domi-
nated with the same light intensity and the temporal nates equation 46 yielding:
noise is averaged out. Spatial noise includes two
1
main components, offset noise and gain noise. Be- SNRy = (47)
low saturation, each pixel maps the incoming light Sg
to the output signal using a linear function:
and putting an upper limit to the signal-to-noise
y = Offset + Gain ⋅ n p (44) ratio of the camera. Mentioning this is very impor-
tant and Fig. 11 shows the effect on the graph of the
Neither the gain nor the offset are truly the same for signal-to-noise ratio versus the number of photons.
each pixel, but vary slightly causing fluctuations in
the output signal y . CMOS sensors in particular
suffer from spatial noise and they often contain a
shading correction function to compensate for off-
set and/or gain fluctuations. Offset noise is also
known as dark signal non-uniformity (DSNU) or
fixed pattern noise (FPN) and gain noise is known
as photo-response non-uniformity (PRNU). How-
ever since these terms are defined inconsistently in
the literature, we’ll stick to offset noise and gain
noise in this paper.

Image Quality of Digital Cameras.doc Page 11 of 36  Copyright Basler, 2004


Sensitivity and Image Quality of Digital Cameras

27.10.04 Version 1.1 − work in progress

ld SNRy
Mathematical model for monochromatic light:
− ld Sg
µ y = K (µ e + µ d ) = K (ηµ p + µd )
gain noise

photon noise
= Kηµ p + µ y .dark
123
light induced
dark noise + offset noise

ld µp
σ y2 = σ y2.total = σ y2.temp + σ y2.spat
ld µp.min ld µp.max  
= K 2 ηµ p + σ d + S gη 2 µ p + σ o 
2 2 2 2
 1424 3 14 4244 3
Fig. 11 : Signal- to-noise ratio vs. number of pho-
 temporal spatial 
tons taking spatial noise into account
σ y2.temp = K 2ηµ p + σ y2.temp.dark
Fig. 12 shows the computed SNR for the A600f 123
light induced
CMOS camera based on the temporal and the total
noise respectively. As predicted by the mathemati- 2 2
cal model, the SNR for the total noise starts to satu- σ y2.spat = K 2 S gη 2 µ p + σ y2. spat .dark
14243
rate for high pixel exposures. Due to the very low light induced
spatial noise of the camera’s CCD sensor, the same
data taken for the A102f CCD camera shows very µy → µ y.sat
saturation
little difference between the two curves.
σ y2 →0
saturation
8
µ y. sat
7 µ e. sat = ηµ p.sat =
K
6
using the following quantities:
5

4
µp mean number of photons

3
µy mean gray value

2 Temporal Noise µe mean number of photon generated electrons

1
Total Noise µd mean number of (fictive) temporal dark
noise electrons
0 K overall system gain
8 10 12 14 16 18 20 η total quantum efficiency
ld MeanP
σ y2 variance of the gray value’s total noise
Fig. 12 Signal-to-noise ratio for temporal and total σ y2.total variance of the gray value’s total noise
noise respectively for the A600f camera
σ y2.temp variance of the gray value’s temporal noise
There are different sources of spatial noise.
σ y2.spat variance of the gray value’s spatial noise
Variations in the pixel geometry affect the light
sensitive area and with CMOS sensors especially, σ d2 variance of the (fictive) temporal dark noise
the value of the conversion capacitor inside the electrons
pixel and thus the conversion gain. This kind of σ o2 variance of the spatial offset noise
spatial noise decreases when larger pixels and finer
structures are used in sensor fabrication. S g2 variance coefficient of the spatial gain noise

Variations in the donation and crystal defects also µ y. sat mean gray value if the camera is saturated
result in offset and gain noise and can cause par- µ e. sat mean equivalent electrons if the camera is
ticularly strong differences in the dark current of saturated
the pixels. This leads to starry images with sparsely µ p. sat mean number of photons collected if the
scattered pixels either shining out brightly (hot
pixels) or staying too dark (cool pixels). This effect camera is saturated
becomes stronger with exposure time and tempera-

Image Quality of Digital Cameras.doc Page 12 of 36  Copyright Basler, 2004


Sensitivity and Image Quality of Digital Cameras

27.10.04 Version 1.1 − work in progress

ture and is hard to compensate for by using shading y


correction (see section 1.6.4). If pixels have a very
dim reaction or no reaction at all, they are called
dead pixels or defective pixels and require special
µy
treatment.
σy
Fast sensors typically parallelize the read-out of the
sensor and the analog-to-digital conversion. This
leads to different data paths for different groups of ∆y
pixels and since the parameters of the paths never t
match exactly, the groupings often can be seen in p(y)=N( µy ,σ y )
the image. A notorious example is the even-odd
mismatch for sensors with two outputs where the
Fig. 13 : Sampling the amplitude
pixels in the even lines are read out in parallel with
the pixels in the odd lines. Camera manufacturers
calibrate the difference to zero during production, Assumption (b) is very restrictive and not fulfilled,
but this calibration is truly valid only for the operat- for example, for a constant signal with added nor-
ing point where it was made and tends to become mally distributed white noise (Fig. 13). For this
imperfect under other operating conditions. important class of signals, Fig. 14 shows the maxi-
mum bias you get if you estimate the mean of the
Last but not least, systematic temporal signal dis- signal or the standard deviation (see [4]).
turbances can cause patterns such as stripes in the
image that do not move because the noise is syn-
chronized with the sensor read-out. Even if this 0,6
kind of noise is in fact of temporal origin, it be- 0,5 (M-m).max
haves like spatial noise. Often, this kind of noise 0,4 (S-s).max
varies with temperature, frame rate, etc. 0,3 (Lit-s)
Most kinds of spatial “noise” are very badly mod- 0,2
eled by white noise. Section 3 describes more suit- 0,1
able tools to deal with spatial noise. 0 s
0 0,1 0,2 0,3 0,4 0,5 0,6 0,7 0,8 0,9 1
1.6.3 Quantization Noise

This type of noise is added because the camera's Fig. 14 : Maximum error due to quantization noise
digital output never exactly matches the analog (for details see text)
value of the internal output voltage. If the digital
output has q bits resolution, its value ranges from 0 The x-axis is the standard deviation of the analog
signal in digital units. The difference between the
to y max = 2q − 1 and the smallest step is ∆y . The estimated mean M and the true mean m of the sig-
quantization introduces additional noise. The noise nal is negligible only for σ > 0.5 . For less noise,
is non-linearly dependent on the shape of the signal the maximum error rises quickly to ~0.5, which
y and is thus difficult to model. It can be done, corresponds to the well known A/D converter error
however, if (a) the analog signal can be described of ±0.5 LSB. Thus equation 48 is only valid when
as a stochastic variable with mean µ y and variance enough noise is present.
σ y2 and (b) the probability distribution of the sig- The difference between the estimated standard
nal’s amplitude is bandwidth limited with respect to deviation S and the true standard deviation s fol-
the amplitude’s sampling grid. In this case, it can be lows equation 49, but also only if σ > 0.5 or better
shown (see [3]) that the mean and the variance of yet if σ > 1.0 . For less noise, the bias increases
the digitized signal are: faster up to ~0.5.

µ~ = µ (48) From Fig. 14, it follows as a general rule that the


y y
quantization step should be chosen as:
and: ∆y ≤ 2σ y (50)
∆y 2
σ~y2 = σ y2 + (49) where σ y is the variance of the analog signal. If
12
this condition is fulfilled, there will be almost no
The second term in the last equation is called quan-
tization noise.

Image Quality of Digital Cameras.doc Page 13 of 36  Copyright Basler, 2004


Sensitivity and Image Quality of Digital Cameras

27.10.04 Version 1.1 − work in progress

nately, this is hard to achieve. You can, however,


Rules for quantization noise: use this technique to compensate not only for gain
 For low-bias estimation of the gray value noise, but also for lens shading and non-
mean, make sure that the standard deviation is homogeneous lighting. This requires the camera
user to expend some effort capturing a shading
σ y > 0.5 DN .
image. In addition, all non-homogeneities resulting
 For low-bias estimation of the gray value vari- from the background during acquisition of the shad-
ance, make sure that the standard deviation is ing image are imposed on all following images as a
negative image and must be filtered out beforehand.
σ y > 1.0 DN .
Another drawback is the digital nature of the multi-
 With a digital number format containing N bits, plication, which causes the so-called hedgehog
you can transfer a signal with a dynamic range effect. This effect causes some gray values to be-
of N+1 bits. come more frequent than others resulting in spikes
in the histogram and thus adding another kind of
quantization noise (see section 3.3.2).
bias in the mean estimation. In addition the stan-
Shading correction is a very powerful tool. But
dard deviation will follow the formula: because it is only valid for the operating point
1 where it was taken, it can't heal all wounds. Chang-
σ q2 = σ y2 + (51) ing the exposure time, the gain, or the temperature,
12
and with gain shading sometimes even changing the
If the analog signal has a standard deviation of light level, will cause the shading correction to
σ y = 1 2 DN the standard deviation of the quan- become imperfect.
tized signal becomes σ q = 0.58 DN which is ~16% Fig. 15 shows the spatial dark noise of the A600f
of σ y . For estimation of the standard variation for different shutter times. The A600f comes with
an integrated offset shading feature. The shading
σ y = 1 DN is more appropriate which yields values are computed for each camera during pro-
σ q = 1.04 DN and or ~4% of σ y . duction using a shutter time of 10 ms, the maximum
exposure time suitable for the camera’s 100 frames
For larger quantization steps, there is a strong error per second rate.
in both the mean and the standard deviation. The
error heavily depends on the mean of the analog 1,6
signal and is not predictable with a linear model.
1,4
An interesting corollary is that an analog signal 1,2
with p bits dynamic range can be transferred using
p-1 digital bits. 1,0
0,8
1.6.4 Shading Correction 0,6

Since spatial noise is fairly static, you can compen- 0,4


sate for it up to a certain point by using a technique 0,2
called shading correction.
0,0
For offset shading (or dark shading), a compensa- 0 10.000 20.000 30.000 40.000
tion value is added to each pixel so that in effect, all Shutter [us]
pixels have the same gray value in darkness. The
compensation value is determined from an averaged Fig. 15 Spatial dark noise of the A600f measured
image taken in complete darkness. Note that offset at different exposure times
compensation consumes a certain amount of the
gray value range available and thus slightly reduces Of course, a camera with offset shading correction
the dynamic range of the camera. This technique is does not follow the standard camera model that
quite often used with CMOS sensors to compensate would predict a constant spatial dark noise level. In
for their strong spatial noise. reality, the correction is best for the operating point
For gain shading (or white shading), each pixel is used to determine the correction values and its
multiplied by its own correction factor. The correc- utility falls on both sides of this operating point.
tion factors are determined from an averaged image Note that the spatial noise does not drop to zero as
taken with completely uniform lighting. Unfortu- one could assume. This is a result of the quantiza-

Image Quality of Digital Cameras.doc Page 14 of 36  Copyright Basler, 2004


Sensitivity and Image Quality of Digital Cameras

27.10.04 Version 1.1 − work in progress

tion noise. Fig. 14 predicts a maximum error for the 1.7.2 Spectral Distribution of the Quantum
estimated standard deviation of 0.5 in the case Efficiency
where the true standard deviation is zero. This is
close to the minimum value of 0.6 in Fig. 15. As already mentioned, the quantum efficiency η
depends heavily on the wavelength λ of the light.
1.7 Dealing with White Light Each sensor has a different spectral characteristic
as given in the sensor’s data sheet. Fig. 16 shows an
1.7.1 Energy and Number of Photons example from the sensor used with the Basler
A600f camera (see [6]).
When dealing with white light and during light
measurement, it is more convenient to think of the
light in terms of radiant energy instead of a number
of photons. This section describes how those two
quantities are interrelated.
The average number µ p of photons hitting a pixel
during exposure time can be computed from the
irradiance J (in [W/m2]) at the surface of the sensor
and this is measured by replacing the camera with a
radiometer. The total energy of all photons hitting
the pixel during the exposure time Texp can be com-
puted as:
E p = JATexp (52)

were J is the irradiance measured with the radiome- Fig. 16 Quantum efficiency versus wavelength for
ter and A is the (geometrical) area of the pixel. The the sensor in the Basler A600f camera
energy of a single photon with the wavelength λ is (from [6]).
computed as:
Instead of the absolute numbers, the relative spec-
hc 19.9 ⋅ 10−17 J tral distribution η rel (λ ) of the quantum efficiency
E photon = = (53)
λ λ nm is given in many data sheets. In this case, the peak
quantum efficiency – that is the maximum of the
using Planck’s constant h = 6.63 10−34 Ws 2 and the curve in Fig. 16 – is scaled to 1. To measure this
speed of light c = 3.0 108 m s . quantity, we start with the mathematical model of
the camera:
The total number of photons hitting the pixel is then
computed as: µ y − µ y.dark = Kη (λ )µ p (λ ) (56)

Ep where:
µp = = Φ pTexp (54)
E photon µ y .dark = Kµ d (57)
with: is the gray value of the capped camera. Section 2.1
JAλ describes how to measure the quantum efficiency
Φp = (55) η0 for monochromatic light with a certain wave-
hc
length λ0 . Taking this result for granted, the rela-
being the photon flux into the pixel. To get a better
feeling of the numbers here, an example from a tive spectral distribution of the quantum efficiency
typical measurement session is helpful. In this ex- can be measured by varying the wavelength while
ample, exposing a pixel with an area of keeping the number of photons µ p collected in
A = (6.7 µm)2 using green light with a wavelength each pixel constant:
of λ = 550nm and an irradiance of J = 0.4 W/m2
yields Φ p = 49 p ~ µs . Exposing for Texp=100µs η (λ ) µ y (λ ) − µ y.dark
ηrel (λ ) = = (58)
η0 µ y (λ0 ) − µ y .dark
yields an average number of photons per pixel of µ p = const
µ p = 4900 p ~ , which results in a signal-to-noise
Sometimes, instead of the relative spectral distribu-
ratio of the light of SNR p = µ p = 70 =ˆ 6.1 bit . tion of the quantum efficiency, the relative spectral

Image Quality of Digital Cameras.doc Page 15 of 36  Copyright Basler, 2004


Sensitivity and Image Quality of Digital Cameras

27.10.04 Version 1.1 − work in progress

distribution of the response R is given. Fig. 17


shows an example for the sensor used in the Basler Total QE
60%
A102f camera (see [7]). A600f
50% A102f
40%

30%

20%

10%

0%
350 450 550 650 750 850 950 1050
Fig. 17 Relative response versus wavelength for Wavelength [nm]
the sensor in the Basler A102f camera
(from [7]). Fig 18 Total quantum efficiency vs. wavelength
for the A600f and the A102f cameras
The relative spectral response is measured quite like the relative spectral quantum efficiency. But instead of keeping the number of photons collected in the pixel constant while varying the wavelength, the energy of the collected photons is kept constant. As already discussed in section 1.7.1, the number of photons collected in the pixel equals the total energy E_p of those photons divided by the energy of one photon:

    µ_p = E_p/E_photon = E_p·λ/(h·c)    (59)

where h is Planck's constant and c the speed of light. Inserting this formula in eq. 56 and assuming E_p = const yields:

    R_rel(λ) = (µ_y(λ) − µ_y.dark)/(µ_y(λ_0) − µ_y.dark),  E_p = const
             = η_rel(λ)·λ/λ_0    (60)

So the relative spectral distribution of the response equals the relative spectral distribution of the quantum efficiency scaled with the term λ/λ_0.

You can use the equation above and the values in Fig. 17 to compute the spectral distribution of the total quantum efficiency for the A102f camera. Fig 18 shows the quantum efficiency for the A102f and the A600f drawn in the same diagram. The respective values at 550 nm have been measured using the photon transfer technique described in section 2.1. The rest of the graphs have been taken from the spectral distributions given in the data sheets for the cameras' sensors.

Note that this diagram in fact compares the sensitivity of the cameras for the case where enough light is present. In this case, the signal-to-noise ratio of the image SNR_y follows the signal-to-noise ratio SNR_p of the light itself multiplied by the square root of the quantum efficiency η, as explained in section 1.3:

    SNR_y ≈ √η(λ)·SNR_p    (61)

In the visible range of the spectrum, the CCD-based A102f camera has more than twice the total quantum efficiency of the CMOS-based A600f camera. One reason is that the A600f's CMOS sensor has a global shutter that results in a very low geometrical fill factor. On the other hand, the A600f has a higher total quantum efficiency than the A102f in the near infrared. This is because the CMOS process used to produce the A600f's sensor has a doping profile ranging deeper into the silicon and is thus capable of catching photons with a longer wavelength.

Another very interesting graph derived from the spectral distribution of the total quantum efficiency is the spectral distribution of the absolute sensitivity threshold. This is the minimum number of photons required to get a signal equaling the noise floor of the camera. In section 1.3, it was shown that this quantity is computed according to:

    µ_p.min = σ_d/η(λ)    (62)

which is the dark noise divided by the quantum efficiency. Fig 19 shows the results for the A600f and the A102f cameras.
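The conversion from a relative response curve to the total quantum efficiency and the sensitivity threshold is easy to automate. A minimal sketch in Python with numpy; the response samples and the dark noise value are made-up placeholders, only η_0 = 26% is the A600f value measured in section 2.4:

    import numpy as np

    # hypothetical sample points standing in for a data sheet response curve
    lam   = np.array([450.0, 550.0, 650.0, 850.0])   # wavelength [nm]
    R_rel = np.array([0.80, 1.00, 0.95, 0.40])       # relative response, 1.0 at 550 nm

    lam0    = 550.0   # reference wavelength [nm]
    eta0    = 0.26    # QE at lam0 from the photon transfer measurement
    sigma_d = 40.0    # dark noise [e-], assumed for the example

    eta_rel  = R_rel * lam0 / lam   # eq. 60 solved for eta_rel
    eta      = eta0 * eta_rel       # total quantum efficiency per wavelength
    mu_p_min = sigma_d / eta        # eq. 62: absolute sensitivity threshold [photons]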


[Fig 19: absolute sensitivity threshold µ_p.min (1 to 10000 photons, logarithmic scale) versus wavelength (350-1050 nm) for the A600f and the A102f]
Fig 19 Absolute sensitivity threshold vs. wavelength for the A600f and the A102f cameras

This diagram in fact compares the sensitivity of the cameras in the case of very low light. You can clearly see that the A102f has a much higher absolute sensitivity than the A600f, even in the near infrared. Note that the y-axis is logarithmic.

1.7.3 Camera Response to White Light

Until now, we have dealt with monochromatic light only. White light, however, is a mixture of photons of all kinds of colors (= wavelengths).

For monochromatic light, we have found the number of photons collected in the pixel to be:

    µ_p = (A·T_exp/(h·c))·λ·J    (63)

with the pixel area A, the exposure time T_exp, the wavelength λ, the irradiance J, Planck's constant h, and the speed of light c.

Changing to differential variables, the number of collected photons whose wavelength is inside the interval [λ, λ+dλ] can be written as:

    dµ_p = (A·T_exp/(h·c))·λ·J_λ(λ)·dλ    (64)

where J_λ is the spectral density of the irradiance on the sensor in [W/(m²·m)]. With:

    dµ_e = η(λ)·dµ_p    (65)

follows:

    dµ_e = (A·T_exp/(h·c))·λ·η(λ)·J_λ(λ)·dλ    (66)

Integrating this differential equation yields:

    µ_e = ∫ dµ_e = (A·T_exp/(h·c))·∫_λmin^λmax λ·η(λ)·J_λ(λ)·dλ    (67)

The integration interval [λ_min, λ_max] must cover those parts of the spectrum where the light source emits energy, that is J_λ(λ) ≠ 0, and the sensor is light sensitive at the same time, that is η(λ) ≠ 0.

In the special case of monochromatic light, the spectral density of the irradiance is a delta function peaking at the monochromatic light's wavelength λ_0:

    J_λ(λ) = J(λ_0)·δ(λ − λ_0)

As expected, inserting this in the equation above yields:

    µ_e = (A·T_exp/(h·c))·λ_0·η(λ_0)·J(λ_0) = η·µ_p    (68)

1.7.4 Black Body Radiator

Another special case is the black body radiator. This is a quite good approximation for incandescent lamps such as halogen bulbs. The spectral density of a black body's light can be given analytically and depends on a single parameter only, the color temperature T.

The black body radiator's spectral exitance, that is the radiant power emitted per unit area in the interval [λ, λ+dλ], is given by Planck's Law:

    J_λ = (2π·h·c²/λ⁵)·1/(e^(h·c/(λ·k·T)) − 1)    in [W/(m²·m)]    (69)

where h = 6.63·10⁻³⁴ Ws² is Planck's constant, c = 3.0·10⁸ m/s is the speed of light and k = 1.38·10⁻²³ J/K is the Boltzmann constant.

The irradiance has the same spectral distribution but a different amplitude and can therefore be written as:

    J_λ = J_eff·J_λ(T, λ)/J_norm    (70)

with J_eff being the effective irradiance for the given color temperature and J_norm being an arbitrary constant for normalizing the equation.
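Planck's law is straightforward to evaluate numerically. The following sketch in Python with numpy samples eq. 69 on a wavelength grid and approximates the integral of eq. 67 with the trapezoidal rule; the quantum efficiency curve is a made-up stand-in, and eq. 69 only supplies the spectral shape, so per eq. 70 the amplitude still has to be scaled to the actual irradiance:

    import numpy as np

    h = 6.63e-34   # Planck's constant [Ws^2]
    c = 3.0e8      # speed of light [m/s]
    k = 1.38e-23   # Boltzmann constant [J/K]

    def planck(lam_m, T):
        """Spectral exitance of a black body, eq. 69, in [W/(m^2 m)]."""
        return 2.0 * np.pi * h * c**2 / lam_m**5 / np.expm1(h * c / (lam_m * k * T))

    lam = np.linspace(350e-9, 1050e-9, 701)          # integration grid [m]
    eta = np.interp(lam, [350e-9, 550e-9, 1050e-9],
                    [0.05, 0.26, 0.02])              # hypothetical QE curve
    A, T_exp = (9.9e-6)**2, 1.0e-3                   # pixel area [m^2], exposure [s]

    J_lam = planck(lam, 3200.0)                      # spectral shape of the source
    mu_e = A * T_exp / (h * c) * np.trapz(lam * eta * J_lam, lam)   # eq. 67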


Inserting this into eq. 67 yields:

    µ_e = (A·T_exp/(h·c))·(J_eff/J_norm)·∫_λmin^λmax λ·η(λ)·J_λ(T, λ)·dλ    (71)

With the normalizing factor (for a discussion see below):

    J_norm = ∫_λmin^λmax J_λ(T, λ)·dλ    (72)

and the effective quantum efficiency:

    η_eff = (1/(λ_max − λ_min))·∫_λmin^λmax η(λ)·dλ    (73)

the effective wavelength for the given sensor and spectral distribution can be computed as:

    λ_eff = (1/(J_norm·η_eff))·∫_λmin^λmax λ·η(λ)·J_λ(T, λ)·dλ    (74)

yielding the following equation, which describes the creation of electrons by white light:

    µ_e = η_eff·(J_eff·A·λ_eff/(h·c))·T_exp = η_eff·Φ_p.eff·T_exp    (75)

with:

    Φ_p.eff = J_eff·A·λ_eff/(h·c)    (76)

being the effective number of photons hitting a pixel per unit time. The normalizing factor J_norm and the effective quantum efficiency η_eff could both be set to one because they cancel out. η_eff cancels out directly and J_norm cancels out via the computation of J_eff (see next section). The two constants were chosen here to get somewhat meaningful values for λ_eff and J_eff. λ_eff is something like the weighted mean of the wavelength where the weighting function is the product of the light's spectrum and the quantum efficiency's spectral distribution. J_eff is the weighted mean of the radiometric irradiance where the weighting function is the light's spectrum.

Dealing with white light from incandescent lamps:
 Compute the effective wavelength λ_eff and the effective quantum efficiency η_eff from the color temperature T of the light and the spectral distribution of the quantum efficiency.
 Compute the lux conversion coefficient Η(T) from the color temperature.
 Measure the irradiance in [lux] with a photometer and compute the effective irradiance J_eff.
 Use λ_eff, η_eff, and J_eff as you would use the analog quantities with monochromatic light.

1.7.5 Lux

The task remains to measure the effective irradiance J_eff. This is achieved by using a photometer that gives you the irradiance in [lux]. In addition, you need to know the color temperature T.

The photometric irradiance J_lx in [lux] is defined as:

    J_lx = L_0·∫_λmin^λmax L_λ(λ)·J_λ(λ, T)·dλ    (77)

where L_λ is the spectral sensitivity distribution of the human eye. This is presented as a table in CIE 86-1990 (see [17] and below). L_0 = 683 lux/(W/m²) and the interval [λ_min, λ_max] contains the human visible part of the spectrum.

    Wavelength λ [nm]  Sensitivity L [1]    Wavelength λ [nm]  Sensitivity L [1]
    380                0.00004              580                0.870
    390                0.00012              590                0.757
    400                0.0004               600                0.631
    410                0.0012               610                0.503
    420                0.0040               620                0.381
    430                0.0116               630                0.265
    440                0.023                640                0.175
    450                0.038                650                0.107
    460                0.060                660                0.061
    470                0.091                670                0.032
    480                0.139                680                0.017
    490                0.208                690                0.0082
    500                0.323                700                0.0041
    510                0.503                710                0.0021
    520                0.710                720                0.00105
    530                0.862                730                0.00052
    540                0.954                740                0.00025
    550                0.995                750                0.00012
    560                0.995                770                0.00003
    570                0.952
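The effective quantities and the photometric irradiance are plain numerical integrals over these curves. A sketch in Python with numpy, under the same assumptions as before (abbreviated eye table from above, made-up stand-in QE curve, black body at T = 3200 K):

    import numpy as np

    h, c, k = 6.63e-34, 3.0e8, 1.38e-23

    def planck(lam_m, T):
        return 2.0 * np.pi * h * c**2 / lam_m**5 / np.expm1(h * c / (lam_m * k * T))

    # abbreviated CIE eye sensitivity table from above: wavelength [nm], L
    cie = np.array([[380, 0.00004], [450, 0.038], [500, 0.323], [550, 0.995],
                    [600, 0.631], [650, 0.107], [700, 0.0041], [770, 0.00003]])
    L0 = 683.0   # [lux/(W/m^2)]

    lam   = np.linspace(380e-9, 770e-9, 391)
    L_eye = np.interp(lam * 1e9, cie[:, 0], cie[:, 1])
    eta   = np.interp(lam, [380e-9, 550e-9, 770e-9], [0.10, 0.26, 0.08])  # stand-in QE
    J     = planck(lam, 3200.0)   # spectral shape for T = 3200 K

    J_norm  = np.trapz(J, lam)                                   # eq. 72
    eta_eff = np.trapz(eta, lam) / (lam[-1] - lam[0])            # eq. 73
    lam_eff = np.trapz(lam * eta * J, lam) / (J_norm * eta_eff)  # eq. 74
    J_lx    = L0 * np.trapz(L_eye * J, lam)                      # eq. 77, unscaled spectrum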


Inserting eq. 70 yields:

    J_lx = J_eff·(L_0/J_norm)·∫_λmin^λmax L_λ(λ)·J_λ(λ, T)·dλ = J_eff·Η(T)    (78)

with the lux conversion coefficient:

    Η(T) = (L_0/J_norm)·∫_λmin^λmax L_λ(λ)·J_λ(λ, T)·dλ    (79)

which depends on the color temperature and can be used to compute the effective irradiance from the photometric irradiance measured in [lux]. Note that the integration interval for computing the lux conversion coefficient must be the same as for the computation of the effective wavelength.

As an example, Fig 20 shows the spectral distributions of the A600f's total quantum efficiency, the human eye's sensitivity and a black body radiator's irradiance for a color temperature of T = 3200 K.

[Fig 20: chart of the three relative spectral distributions (QE A600f, human eye, J_λ relative) versus wavelength (350-1050 nm)]
Fig 20 Spectral distribution of the A600f's QE, the human eye and the radiant power for a color temperature of T = 3200 K

The resulting effective wavelength is λ_eff = 617 nm, the effective quantum efficiency is η_eff = 15% and the lux conversion coefficient is Η(T) = 78 lux/(W/m²). These values are found by numerical integration.

2 Identifying the Model Parameters

This section describes the details of the measurements needed to identify the unknown parameters of the mathematical model given in the previous section.

2.1 Photon Transfer Method

Fig. 21 again shows the model for a single pixel.

[Fig. 21: single-pixel model: photons n_p convert with quantum efficiency η to electrons, dark electrons n_d add to them, and the conversion gain K yields the gray value y]
Fig. 21: Model for a single pixel

The photon transfer method (see [12]) allows the measurement of the quantum efficiency η and the conversion gain K from the input and output signals of a camera, even though the number of electrons inside the sensor cannot be measured directly.

In principle, only two measurements are required, one with and one without exposure. If no light is present, that is µ_p = 0, the mean of the digital gray value depends only on the offset:

    µ_y.dark = K·µ_d    (80)

and its temporal noise depends only on the dark noise of the camera:

    σ²_y.dark = K²·σ²_d    (81)

Using these quantities, the part of the mean of the output signal that is generated by the photon signal can be expressed as:

    µ_y − µ_y.dark = K·η·µ_p    (82)

and the part of the temporal output noise generated by the photon noise can be expressed as:

    σ²_y − σ²_y.dark = K²·η·µ_p    (83)

Solving this system of equations yields the desired quantities, the conversion gain:

    K = (σ²_y − σ²_y.dark)/(µ_y − µ_y.dark)    (84)

and the total quantum efficiency:

    η = (µ_y − µ_y.dark)/(K·µ_p)    (85)

For practical purposes, more than two measurements must be taken in order to increase the measurement accuracy. Details are shown in the following sections.
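As a minimal sketch of the two-point principle (Python with numpy, hypothetical image arrays): given a pair of bright images, a pair of dark images taken under identical conditions, and the mean photon count µ_p, eqs. 84 and 85 can be evaluated directly. The image pairs are used to estimate the temporal variances in the way derived in section 2.3:

    import numpy as np

    def photon_transfer(bright_a, bright_b, dark_a, dark_b, mu_p):
        """Two-point photon transfer estimate following eqs. 80-85."""
        ba, bb = bright_a.astype(np.float64), bright_b.astype(np.float64)
        da, db = dark_a.astype(np.float64), dark_b.astype(np.float64)
        mu_y      = 0.5 * (ba.mean() + bb.mean())
        mu_y_dark = 0.5 * (da.mean() + db.mean())
        var_y      = 0.5 * np.var(ba - bb)     # temporal variance, see eq. 96
        var_y_dark = 0.5 * np.var(da - db)
        K   = (var_y - var_y_dark) / (mu_y - mu_y_dark)   # eq. 84
        eta = (mu_y - mu_y_dark) / (K * mu_p)             # eq. 85
        return K, eta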


2.2 Optical Setup

For practical implementation of the photon transfer measurement method, and especially for the measurement of spatial noise, you must illuminate the sensor in your camera as homogeneously as possible and measure the irradiance the sensor experiences as precisely as possible. A good way to achieve this is to have the camera "look" without a lens at a circular homogeneous light source. The best light source of this type would be an Ulbricht sphere, but a back illuminated piece of frosted glass or a homogeneously illuminated white surface with a baffle might also do.

[Fig. 22: sketch of a camera mount and sensor facing a circular light source]
Fig. 22: Optical set-up

Cameras come with a mount where the lens is screwed in. This mount acts as a horizon for the pixels on the sensor's surface and restricts what they can "see" outside of the camera. Note that the horizon is different for pixels in the center as compared to pixels at the edges of the sensor (see Fig. 22). To make sure that each pixel receives the same amount of light, it is vital that the complete light source is inside the horizon for all of the pixels in the sensor. If the light source is too large, the pixels at the center of the sensor will see more of it and the illumination will become non-homogeneous.

However, even if the condition described above is met, the light on the sensor will still be slightly non-homogeneous. Fig. 23 shows the geometrical setup that can be used to compute the non-homogeneities of the irradiance J experienced at a pixel located at distance x from the optical axis of the camera. It is assumed that the light source has Lambert characteristics and emits light with a radiance R.

[Fig. 23: geometry of light source (radius r, distance h), aperture (radius q, distance p) and sensor (eccentricity x), with opening angle φ]
Fig. 23: Geometrical Setup

First, you must make sure that the light source stays inside of the horizon of all pixels. From the geometrical setup, you read:

    tan φ = (q − x)/p = (r − x)/h    (86)

For a camera with a C-mount, the radius of the mount is q ≈ 12.5 mm and the flange focal length is p ≈ 17.5 mm. The A600f, for example, has a ½" sensor with 659x494 pixels. The pixels are 9.9 µm x 9.9 µm square. So the distance between the center and a vertex pixel is:

    x_max = ½·√(659² + 494²)·9.9 µm ≈ 4 mm

This results in an opening angle for this camera of:

    tan φ = (12.5 − 4)/17.5 ≈ 0.486
    φ ≈ 26°

If the light source has a fixed radius r, the minimum distance to place it from the camera is:

    h_min = (r − x_max)/tan φ    (87)

Inserting the values computed for the A600f and assuming a small Ulbricht sphere with r = 20 mm yields h_min ≈ 33 mm. If this distance is under-run, a baffle must be applied to the source to reduce its radius appropriately.
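This little geometry exercise is easily scripted. The following sketch in Python with numpy reproduces the A600f numbers; the source radius is the assumed 20 mm Ulbricht sphere:

    import numpy as np

    q, p = 12.5, 17.5                   # C-mount radius and flange focal length [mm]
    nx, ny, pitch = 659, 494, 9.9e-3    # resolution and pixel pitch [mm]

    x_max   = 0.5 * np.hypot(nx, ny) * pitch   # center-to-vertex distance, ~4 mm
    tan_phi = (q - x_max) / p                  # eq. 86
    phi     = np.degrees(np.arctan(tan_phi))   # ~26 degrees

    r     = 20.0                               # light source radius [mm]
    h_min = (r - x_max) / tan_phi              # eq. 87, ~33 mm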


Next, the uniformity of the irradiance on the sensor with respect to the eccentricity x should be computed. In [5], the following formulas are given for a geometrical setup as shown in Fig. 23:

    J = ½·π·R·(1 − k/√(k² + r²))    (88)

with:

    k = (h² + x² − r²)/(2·h)    (89)

Referring all geometrical measures to the radius of the light source r and introducing the following abbreviations:

    ĥ = h/r,  x̂ = x/r,  k̂ = k/r    (90)

yields:

    J = ½·π·R·(1 − k̂/√(k̂² + 1))    (91)

and:

    k̂ = (ĥ² + x̂² − 1)/(2·ĥ)    (92)

The irradiance experienced by the pixel in the center of the sensor is:

    J_0 = J(x̂ = 0) = π·R/(1 + ĥ²)    (93)

The uniformity U can be defined as the quotient of the irradiance at a distance x from the optical axis and the irradiance of the central pixel:

    U = J/J_0    (94)

Fig 24 shows the uniformity versus the relative distance ĥ and the relative eccentricity x̂. It is no surprise that the uniformity degrades with the eccentricity. There are two ways to get better uniformity. Either keep a small distance between the sensor and the light source so that the pixels "drown in a sea of light", or keep the sensor far away from the light source to end up getting nearly parallel light rays. Your ability to keep a short distance is limited by the horizon introduced by the mount. Using a large distance has the drawback that the absolute amount of collected light decreases with the inverse square of the distance. Using parallel light is also a perfect tool to show dirt on the sensor glass, which introduces new non-homogeneities and hinders the measurement of spatial noise.

[Fig 24: chart of uniformity U (0.55-1) versus relative distance h/r (0-10) for relative eccentricities x/r = 0.1, 0.3, 0.5, 0.7, 0.9]
Fig 24 Uniformity vs. relative distance ĥ and relative eccentricity x̂

Your most important task is to avoid stray light. It is especially important to cover the threads of the C-mount with non-reflecting, black material.

Fig 25 shows the grey values on the center line of the A600f for ĥ = 25, where the light is nearly parallel and the theoretical non-uniformity is better than 10⁻⁴. The non-uniformity is below 1%.

[Fig 25: chart of U − 1 (−2.0% to +2.0%) on the center line versus x/r (−0.160 to +0.160)]
Fig 25 Grey values on the center line of the A600f

Filters to make the light monochromatic are either mounted in the light source itself or directly in front of the camera.

The irradiance is measured by replacing the camera with a radiometer, also carefully avoiding stray light.

Fig 26 shows a measurement setup with an Ulbricht sphere. The filter is placed directly in front of the camera.

[Fig 26: photo of the measurement setup]
Fig 26 Illuminating a camera with an Ulbricht sphere
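Equations 91 to 94 above can be evaluated directly, for example to reproduce the curves of Fig 24 or to check the expected non-uniformity of a planned setup. A minimal sketch in Python with numpy:

    import numpy as np

    def uniformity(h_hat, x_hat):
        """U = J/J0 from eqs. 91-94 (the common factor pi*R cancels)."""
        k_hat = (h_hat**2 + x_hat**2 - 1.0) / (2.0 * h_hat)   # eq. 92
        J  = 0.5 * (1.0 - k_hat / np.hypot(k_hat, 1.0))       # eq. 91
        J0 = 1.0 / (1.0 + h_hat**2)                           # eq. 93
        return J / J0                                         # eq. 94

    # e.g. the setup of Fig 25: nearly parallel light at h/r = 25
    print(1.0 - uniformity(25.0, 0.16))   # theoretical non-uniformity, below 1e-4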


2.3 Computing Mean and Noise from Images

The spatial and temporal noise and the mean are computed from images which consist of N pixels, each with a gray value y_ij, where i is the row index and j the column index. The mean gray value is computed as:

    µ_y = (1/N)·Σ_i,j y_ij    (95)

The temporal noise is computed from the difference between two images with gray values y_ij^A and y_ij^B. This way, you eliminate the spatial noise and lighting non-homogeneities. The following formula is based on the computation of the variance of the difference y_ij^A − y_ij^B, which is the term in square brackets:

    σ²_y.temp = ½·[(1/N)·Σ_i,j (y_ij^A − y_ij^B)²]    (96)

The factor ½ comes from the fact that the variance of the difference y_ij^A − y_ij^B is twice the size of the variance of each value y_ij, assuming that the values are stochastically independent.

When computing the total noise of a single image, the same trick of using the difference can be applied to get rid of lighting non-homogeneities. Here, the difference is taken from adjacent pixels in pairs of columns, assuming that the pixels are exposed to the same lighting level. The index k runs over pairs of columns:

    σ²_y.total = ½·[(1/(N/2))·Σ_i,k (y_i,2k − y_i,2k+1)²]    (97)

Note that since the differences are taken from a single image, the mean can be computed from N/2 pixel pairs only.

The difference method is somewhat sensitive to non-stochastic noise. The total noise computation particularly suffers from even/odd column mismatch. Much better results are achieved if the pixel pairs are taken from columns that are not adjacent, but instead have a slightly larger distance, for example one column, in between.

The formulas above hold true for area scan cameras. For line scan cameras, equation (97) must be modified so as to sum up the differences for the data in a single captured line only. This is necessary because the pixels in multiple captured lines are not stochastically independent, since they share the same spatial noise. On the other hand, in equation (96) the difference between pairs of captured lines can be summed up because each line originates from the same physical pixels.

The spatial noise can then be computed for both line scan and area scan cameras from the difference between the total and the temporal noise, assuming that both are stochastically independent:

    σ²_y.spatial = σ²_y.total − σ²_y.temp    (98)

If the temporal noise is much larger than the spatial noise, which is typical for CCD sensors, the estimation of the spatial noise becomes quite inexact. In this case, it is more suitable to take the mean of multiple images until the temporal noise is averaged out and then compute the total noise of the averaged image. This approximates the spatial noise. Note, however, that the standard deviation of the remaining temporal noise decreases only with the square root of the number of averaged images. So to get it down to 3%, you must average some 1000 images.

Another approach for computing the spatial noise is to use a temporal low-pass filtered version of the camera image, where the mean is computed from a set of N images taken from a live image stream. This can be done recursively by processing each pixel according to the following algorithm:

    ȳ_k+1 = (k·ȳ_k + y_k)/(k + 1)    (99)

    σ²_k = σ²_y.temp/(k + 1)    (100)

where y_k is the pixel's value in the k-th image with 0 ≤ k ≤ N − 1 and N is the total number of images processed. The low-pass filtered image is formed by the pixel values ȳ_N, which have a temporal variance of σ²_N.

To reduce the temporal noise in the image to 10% of the spatial noise, run the recursion to increasing N until the following condition is met:

    σ_y.spat ≥ 10·σ_N    (101)

Another problem with the noise computation arises when either the temporal or the spatial noise is not white, i.e., when the measurement values are not stochastically independent. To check this, apply an FFT as described in section 3.4. The FFT also allows a measurement of the "full" and "white" parts of the noise, as described later.
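The noise estimators of this section translate directly into code. A sketch in Python with numpy, using the difference formulas of eqs. 96 to 98; the column pairing follows eq. 97 for area scan images with reasonably homogeneous lighting:

    import numpy as np

    def temporal_noise(img_a, img_b):
        """Temporal noise variance from an image pair, eq. 96."""
        d = img_a.astype(np.float64) - img_b.astype(np.float64)
        return 0.5 * np.var(d)

    def total_noise(img):
        """Total noise variance of a single image from column pairs, eq. 97."""
        img = img.astype(np.float64)
        w = img.shape[1] // 2 * 2          # use an even number of columns
        d = img[:, 0:w:2] - img[:, 1:w:2]  # differences of adjacent column pairs
        return 0.5 * np.mean(d**2)

    def spatial_noise(img_a, img_b):
        """Spatial noise variance, eq. 98."""
        return total_noise(img_a) - temporal_noise(img_a, img_b)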


2.4 Measuring Quantum Efficiency and Conversion Gain

By using the photon transfer method, the conversion gain and the total quantum efficiency are computed from temporal noise measured with and without light applied.

Prepare the measurement by setting up the lighting as described in section 2.2. Configure the camera so that it outputs the least significant bits of the analog-to-digital converter (ADC). Some cameras, for example, use a 12 bit ADC but only deliver 8 bits. Those cameras typically provide a feature called digital shift which lets you choose which bits are output.

To make sure that quantization noise does not spoil the results, choose the gain so that σ_y.temp.dark ≈ 1. To make sure the variance is estimated without bias, choose the offset so that µ_d ≥ 3σ_y ≈ 3.

Adjust the lighting so that you can take a reasonable series of measurements with varying exposure times, yielding an output y from darkness to saturation.

For each measurement, compute the number of photons µ_p as well as the mean µ_y and the temporal noise σ²_y.temp from the corresponding images. Run a second measurement series with the same exposure times but with the camera capped and compute the mean µ_y.dark and the temporal noise σ²_y.temp.dark in darkness.

Plot σ²_y.temp − σ²_y.temp.dark versus µ_y − µ_y.dark as shown in Fig. 27, and µ_y − µ_y.dark versus µ_p as shown in Fig. 28.

[Fig. 27: scatter plot of VarY.temp (bright − dark) versus MeanY (bright − dark) with the fitted line y = 0.0187x]
Fig. 27: σ²_y.temp − σ²_y.temp.dark versus µ_y − µ_y.dark

Both graphs should be linear between darkness and saturation. If that is not the case and the measurement setup is not buggy, the camera does not behave according to the mathematical model and the photon transfer method cannot be applied. This sometimes happens, and the usual cause is non-linear behavior in the camera electronics, the shutter time generator and/or the analog-to-digital converter.

[Fig. 28: scatter plot of MeanY (bright − dark) versus MeanP with the fitted line y = 0.0044x]
Fig. 28: µ_y − µ_y.dark versus µ_p

Match a line going through the origin to the linear part of the data as shown in Fig. 27. According to the photon transfer method, the slope of this line is the conversion gain of the camera:

    K = (σ²_y − σ²_y.dark)/(µ_y − µ_y.dark)    (102)

From the formula of the matching line shown in the diagram, it follows that the A600f has an inverse conversion gain of 1/K = 53 e⁻/DN, that is, you need 53 electrons in order to increase the LSB of the 10 bit ADC by one (the A600f has no analog gain).

Match another line going through the origin to the linear part of the data shown in Fig. 28. According to the photon transfer method, the slope of this line divided by the conversion gain K equals the quantum efficiency:

    η = (µ_y − µ_y.dark)/(K·µ_p)    (103)

From the formula of the matching line shown in the diagram, it follows that the A600f has a quantum efficiency of 26% for green light (550 nm).

The strongest sources of error for this measurement are the measurement of the radiance, which can be spoiled by a clumsy optical setup, and the computation of the temporal noise, which can be spoiled by the effects of non-white noise.
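The "line going through the origin" is an ordinary least-squares fit with the offset forced to zero; its slope is Σxy/Σx². A sketch for evaluating a whole measurement series in Python with numpy; the arrays are illustrative placeholders consistent with the fit lines of Figs. 27 and 28, not measured values:

    import numpy as np

    def slope_through_origin(x, y):
        """Least-squares slope of a line forced through the origin."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        return np.sum(x * y) / np.sum(x * x)

    # placeholders for the measured series (linear part only):
    mean_diff = np.array([100.0, 300.0, 600.0, 900.0])   # mu_y - mu_y.dark [DN]
    var_diff  = np.array([1.9, 5.6, 11.2, 16.8])         # temporal variance diff [DN^2]
    mu_p      = np.array([22700.0, 68200.0, 136400.0, 204500.0])  # photons per pixel

    K   = slope_through_origin(mean_diff, var_diff)      # eq. 102
    eta = slope_through_origin(mu_p, mean_diff) / K      # eq. 103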


2.5 Measuring Saturation Capacity

Adjust the camera's gain and digital shift (if necessary) so that the camera's sensitivity is as low as possible. Adjust the lighting and the shutter to saturate the camera. Compute the number of photons µ_p.sat required for saturation. The saturation capacity is then computed as:

    µ_e.sat = η·µ_p.sat    (104)

For the A600f, the value is 47 ke⁻. Some cameras have no sharp saturation point, but instead saturate slowly. In this case, it's sometimes more reasonable to use the drop in the noise to identify the saturation point. The point of maximum temporal noise is a good measure for the saturation point. If the lighting is not homogeneous enough, the sharpness of the saturation point will also degrade, since some pixels will saturate before others because they are receiving more light.

2.6 Measuring Dark Current

Plot µ_y.dark versus the shutter time and match a line to the data as shown in Fig. 29. Dividing the slope of the line by the conversion gain yields the number of electrons thermally created per millisecond. This is the measure for the dark current:

    N_d = µ_y.dark/(K·T_exp)    in [e⁻/ms]    (105)

The value for the A600f is 11 e⁻/ms at 25°C. Remember that this measure is strongly temperature dependent. Taking the saturation capacity of the camera into account, the result for the time required to saturate the camera solely by dark current is ~4 s.

[Fig. 29: plot of MeanY dark versus shutter time [µs] with the fitted line y = 0.0002x + 48.994]
Fig. 29: µ_y.dark versus T_exp

Some cameras have a dark current compensation feature. In this case, you will not find the correspondence stated above. You can use this correspondence as an alternate:

    N_d = (σ²_y.temp.dark − K²·σ²_d0)/(K²·T_exp)    (106)

to compute the dark current. Draw σ²_y.temp.dark versus shutter time. Match a line (with offset) to the linear part of the data in the diagram. The slope of this line divided by the square of the overall system gain K also yields the dark current N_d. Note, however, that the dark current compensation itself adds noise to the signal, so the result might no longer be "the" dark current.

2.7 Measuring Gain and Offset Noise

From the noise part of the mathematical model:

    σ²_y = K²·[η·µ_p + σ²_d  (temporal)  +  σ²_o + S_g²·η²·µ_p²  (spatial)]    (107)

follows:

    σ²_y.spatial = K²·[σ²_o + S_g²·η²·µ_p²]    (108)

With:

    σ²_y.spatial.dark = K²·σ²_o    (109)

this equation can be rewritten as:

    σ²_y.spatial − σ²_y.spatial.dark = S_g²·K²·η²·µ_p² = S_g²·(µ_y − µ_y.dark)²    (110)

yielding:

    S_g = √(σ²_y.spatial − σ²_y.spatial.dark)/(µ_y − µ_y.dark)    (111)

To compute S_g, plot √(σ²_y.spatial − σ²_y.spatial.dark) versus µ_y − µ_y.dark as shown in Fig. 30 and match a line going through the origin to the linear part of the data. The slope of the line is the gain noise S_g in [%]. As shown in the diagram, the value for the A600f is 0.5%.
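The dark current fit of section 2.6 is a one-line polynomial fit. A sketch in Python with numpy; the measurement arrays are hypothetical, and the conversion gain is the A600f value from section 2.4:

    import numpy as np

    t_exp   = np.array([0.0, 10.0, 20.0, 30.0, 40.0])    # shutter time [ms]
    mu_dark = np.array([49.0, 49.2, 49.4, 49.6, 49.8])   # mean dark gray value [DN]

    slope, offset = np.polyfit(t_exp, mu_dark, 1)   # line fit as in Fig. 29
    K = 0.0187                                      # conversion gain [DN/e-]
    N_d = slope / K                                 # eq. 105: dark current [e-/ms]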


[Fig. 30: scatter plot of StdabwY.spatial (bright − dark) versus MeanY (bright − dark) with the fitted line y = 0.0047x]
Fig. 30: √(σ²_y.spatial − σ²_y.spatial.dark) versus µ_y − µ_y.dark

The offset noise is computed by dividing the spatial dark noise by the conversion gain:

    σ_o = σ_y.spatial.dark/K    (112)

To check for variations, plot σ_y.spatial.dark versus exposure time as shown in Fig. 31. You can take the mean of the values to compute the offset noise. The value for the A600f is 42 e⁻.

[Fig. 31: plot of StdAbw.spatial (dark) versus shutter time [µs]]
Fig. 31 σ_y.spatial.dark versus T_exp

2.8 Temperature Dependency

The doubling temperature k_d of the dark current is determined by measuring the dark current as described above for different housing temperatures. The temperatures must vary over the whole operating temperature range of the camera.

Put a capped camera in a climate exposure cabinet and drive the housing temperature to the desired value for the next measurement. Before starting the measurement, wait at least 10 minutes with the camera reading out live images to make sure thermal equilibrium has been reached. For each temperature ϑ, determine the dark current N_d by taking a series of measurements with varying exposure times as described above.

Draw the following diagram: log₂ N_d versus ϑ − 30°C. Check to see if the diagram shows a linear correspondence and match a line to the linear part of the data. From the mathematical model, it follows that:

    log₂ N_d = (ϑ − 30°C)/k_d + log₂ N_d30    (113)

and thus the inverse of the slope of the line equals the doubling temperature k_d, and the offset taken to the power of 2 equals the 30°C dark current N_d30.

TBD: show example
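A sketch of the fit itself in Python with numpy; the data points are invented solely to show the mechanics of eq. 113:

    import numpy as np

    theta = np.array([15.0, 25.0, 35.0, 45.0])   # housing temperature [deg C]
    N_d   = np.array([2.9, 7.1, 17.4, 42.7])     # measured dark current [e-/ms]

    slope, intercept = np.polyfit(theta - 30.0, np.log2(N_d), 1)   # eq. 113
    k_d   = 1.0 / slope        # doubling temperature [K]
    N_d30 = 2.0 ** intercept   # dark current at 30 deg C [e-/ms]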


3 Artifacts and How to Investigate Them

Images from digital cameras contain many more "things" than predicted by the mathematical model derived in section 1. These things are called artifacts. This section describes a set of tools for analyzing them.

Many of these tools are part of the image viewers that typically come with frame grabbers. Others come with the most popular machine vision libraries. Some, however, are special and you may need to write some code yourself. Basler has collected these tools in a program called DeepViewer. Fig. 32 shows DeepViewer's internal structure.

[Fig. 32: data flow diagram: the camera to be analyzed feeds a preprocessor (margin, extractor, spatial filter, temporal filter, offset shading, gain shading), which feeds the display and analysis tools (statistic measures, spectrum analysis, defect pixel analyzer, contrast enhancer, histogram, line view, image viewer)]
Fig. 32 Internal data flow of the Basler DeepViewer tool

The images coming from a camera (or a file) are first fed into a preprocessor that contains the following stages:

 A margin tool that can be used to crop images, that is, an adjustable number of rows or columns on the sides of the image can be taken out of consideration. This is useful to suppress the border effects shown by some cameras. In addition, since it ensures that the filter mask runs inside of the physical image even when the center of the mask is positioned at the border of the cropped image, it makes it easier to run spatial filters.

 The extractor tool forwards only, for example, the even lines of the image and discards the odd lines. As described in section 3.4.2, some sensors have a different data path for even and odd lines and this can cause severe artifacts in the image. Forwarding only data arriving via the same data path suppresses these artifacts, which otherwise might mask away all other effects. Typical pixel groups to extract are even/odd lines, even/odd columns, and the bits of the Bayer color mask. The latter results in monochromatic images corresponding to one of the four colors of the G1/R/G2/B Bayer mask.

 A high-pass spatial filter will suppress illumination non-homogeneities. A spatial low-pass filter is used to compute a smoothed gain shading image (see below).

 A low-pass temporal filter will result in an image containing spatial noise only. On the other hand, a temporal high-pass filter will result in an image containing temporal noise only.

 The offset shading tool adds a separate offset to each pixel. The offset image can be created using the temporal low-pass filter on a dark live image, yielding an offset shading image containing the spatial offset noise only. The offset image can give interesting insight into the structure of the spatial offset noise. It is also useful for creating a gain shading image.

 The gain shading tool multiplies each pixel by its own correction factor. The gain correction image is created from a uniformly illuminated live image with offset shading enabled and with a temporal low-pass filter applied. The gain image can give interesting insight into the structure of the spatial gain noise. If the gain shading is intended to correct lens and illumination non-uniformities, a spatial low-pass filter should be applied during creation of the correction image to suppress high-frequency non-uniformities resulting from the scene.

The pre-processed image is fed into a set of display and analysis tools described in more detail in the following sections. The examples presented come from various (anonymous) Basler and non-Basler cameras.
the center of the mask is positioned at the that is…


During measurement, the camera should be configured to show its artifacts with maximum intensity, that is…

 …use maximum shutter time
 …use maximum gain
 …adjust brightness so that the noise level is well above zero
 …make sure the camera delivers the least significant bits from the analog-to-digital converter, for example, either by transferring the maximum number of bits per pixel or by setting the digital shifter (if available) accordingly
 …make sure the camera is at operating temperature

3.1 Contrast Enhancer

Most artifacts are found in the dark image. A good camera will, however, deliver a dark image which looks completely "black". The reason is that the combination of graphics card, monitor and human eye is not very sensitive to small fluctuations near the zero level of the signal.

To make the artifacts visible, the contrast must be enhanced. This is done by multiplying the grey value signal with a factor >1 and subtracting an offset to make sure the resulting signal stays within the 8 bit range that today's PCs can display. To avoid the "hedgehog effect" (see 3.3.2), only powers of two should be used for multiplication. So instead of a multiplication, a shift operation can be applied:

    y′ = y·2^shift − offset = (y << shift) − offset

Fig 33 shows the effect of the contrast enhancer. The lower image is the same as the upper one, but with shift = 4 and offset = 0 applied. You can see some high frequency noise as well as bright, non-regular horizontal stripes and thin, regular white diagonal stripes (when this paper is printed, these stripes might be hard to see).

[Fig 33: dark image, original and contrast enhanced]
Fig 33: Dark image original and with shift = 4, offset = 0

3.2 Line View

The human eye is not very good at estimating the absolute amount of brightness fluctuations. The line view tool helps to overcome this limitation. The horizontal line view, for example, consists of a cursor that can be shifted in y-direction. The grey values below the cursor are drawn as graphics directly in the image, as shown in Fig 34. The vertical line view has a vertical cursor that can be moved in x-direction.

[Fig 34: image with the overlaid grey value graph]
Fig 34 Horizontal line view

3.3 Histogram

The histogram tool counts the number of pixels having a certain grey value and displays the statistics as a graphic. Fig 35 shows an example. The histogram shows the bell shape gestalt as you would expect for Gaussian white noise. The small lines at the top of the image show the smallest and the largest grey value as well as the end of the histogram range. As you can see, the "tails" of the histogram are asymmetric, giving a hint that there might be hot pixels in the image.

[Fig 35: histogram display]
Fig 35 Histogram

TBD: Logarithmic histogram
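The contrast enhancer of section 3.1 is only a shift and a clip. A minimal sketch in Python with numpy:

    import numpy as np

    def enhance_contrast(y, shift, offset):
        """y' = (y << shift) - offset, clipped to the displayable 8 bit range."""
        y2 = (y.astype(np.int32) << shift) - offset
        return np.clip(y2, 0, 255).astype(np.uint8)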


3.3.1 Missing Codes

A typical artifact found with the histogram tool is missing codes. Fig 36 shows an example. Every second grey value is not present. This can happen if a line on the PCB representing a digital bit is broken or short circuited. Sometimes, analog-to-digital converters also produce (non-regular) missing codes, especially with high gain settings.

[Fig 36: histogram with every second grey value missing]
Fig 36 Histogram with missing codes

3.3.2 Hedgehog Effect

Sometimes the context of the image indicates that the histogram should be smooth, but it seems to develop peaks as shown in the upper part of Fig 37. This is called the hedgehog effect and results from a digital multiplication inside of the camera. The lower part of Fig 37 shows the same effect but with "inverse peaks".

[Fig 37: histograms showing the hedgehog effect, with peaks and with "inverse peaks"]
Fig 37 Histogram with hedgehog effect

Fig 38 shows the cause of the hedgehog effect. Assume an image with a flat histogram where all grey values are present with the same frequency. If the grey values are digitally scaled with a factor <1, some of the input grey values are mapped to the same output value. In the example, 3 entry values are normally mapped to one output value. But depending on the slope of the line (= the scaling factor), sometimes 4 entry values map to one output value. The corresponding output grey value thus collects 4/3 times more pixels than the other grey values and shows up as a peak in the histogram. Note that factors of the form 1/n do not show peaks.

[Fig 38: staircase mapping of entry grey values to output grey values, with runs of 3, 3, 4, 3 entries per output]
Fig 38 The cause of the hedgehog effect

Digital multiplications in a camera typically arise from some sort of gain shading correction, either for each pixel or for groups of pixels such as for each column (see also section 3.4.2).
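The hedgehog effect is easy to reproduce synthetically. The following sketch in Python with numpy scales a flat-histogram image with the factor 0.75; in that case every third output code collects two input codes instead of one and sticks out of the histogram:

    import numpy as np

    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=100_000)   # flat histogram by construction
    scaled = (img * 0.75).astype(np.int64)     # digital multiplication < 1
    hist = np.bincount(scaled, minlength=192)
    # hist now shows regular peaks: outputs fed by two input codes
    # collect twice as many pixels as their neighbors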


3.4 Frequency Spectrum Analysis

A very common type of artifact in images is stripes occurring at any inclination angle. Stripes are often caused by electromagnetic interference (EMI) and can also result from the internal structure of the sensor. The following sections describe the genesis of stripes and explain how to analyze them using spectral analysis.

3.4.1 Stripes Due to EMI

High frequency regular EMI noise expresses itself in the image as stripes. The reason is shown in Fig 39. The pixel values are read out from the sensor line by line and there is a small timing gap between every two lines. If there is some continuous (unwanted) EMI signal added to the analog pixel signal, it will expose itself as a regular bright-dark pattern in each line. Due to the readout gap between the lines, the start of the pattern typically varies regularly from line to line by a fixed number of columns. The result of this phase shift is inclined stripes in the image.

[Fig 39: line readout timing with gaps between line readouts, producing an image with diagonal stripes]
Fig 39: EMI induced noise with unsynchronized noise

The frequency of the EMI signal, and thus the wavelength and the inclination angle of the pattern, often varies with the temperature.

Sometimes the EMI signal is synchronized to the start of the line readout. This can happen when, at the start of line readout, more current is drawn from some DC/DC converter and it reacts with an unwanted voltage swing (see Fig 40 for an explanation and Fig 41 for an example). Since the swing always has the same form and always comes at the start of line readout, the result is a non-regular, vertical stripe pattern.

[Fig 40: line readout timing with a line-synchronized disturbance, producing vertical stripes]
Fig 40: EMI induced noise with line-synchronized noise

Note that although EMI induced stripes expose themselves as spatial noise, they are in fact of temporal nature.

[Fig 41: image showing an EMI pulse at the start of each line]
Fig 41 EMI pulse synchronized to line readout

3.4.2 Stripes Due to Sensor Structure

Another common cause for stripes is sensors with internal structures that move data from different pixel groups via different paths. A typical example is the IBIS5a-1300 CMOS sensor from Fillfactory (see [8], Fig 42). When a line is to be read out, all of the pixels in that line are activated and their signals are delivered via a row of column amplifiers to an analog multiplexer. The multiplexer connects each pixel in turn to the sensor's output port, thus creating the analog video signal. Variations in the manufacturing process of the column amplifiers cause a slight difference in amplification for each column of pixel values. As a result, the image shows non-regular, vertical stripes. This kind of stripes is often corrected by using column gain shading, where each column is multiplied by an appropriate correction factor.

[Fig 42: block diagram of the sensor]
Fig 42 Internal structure of the Fillfactory IBIS5a-1300 CMOS sensor (from [8])

CCD sensors have a different internal structure that can also create stripes in the image. Fig 43 shows a sensor with two output ports. The pixels from the even columns with indices 0, 2, 4, … are output via one port and the pixels from odd columns are output via another. Each port feeds a separate analog-to-digital converter (ADC). After conversion, the two digital pixel value streams are merged into a single stream and output from the camera.

[Fig 43: CCD sensor block diagram, even and odd columns feeding separate ADCs]
Fig 43 CCD sensor with separate outputs for even and odd columns


The benefit of this architecture is that it can decrease the total readout time and increase the sensor's frame rate considerably. The drawback is that differences in the gain and offset of the two analog branches can make the even and odd columns look different and result in regular vertical stripes. This so-called even-odd mismatch is compensated for by the camera manufacturer during the production of each camera. Under varying operating conditions, however, the compensation can become imperfect. There are other sensor architectures where even and odd rows (instead of columns) are output via different paths, and this causes horizontal stripes. Some sensors even output half of the pixels to the left and half to the right. If the two output channels are not properly matched, this can result in a brightness step in the middle of the image. To find out what kind of effects you can expect, check the sensor's data sheet.

3.4.3 Frequency Spectrum Analysis

Periodic disturbances in one-dimensional signals are often analyzed by checking the signal's power spectrum. The power spectrum is computed by applying a one-dimensional (1D) Fourier Transformation to the signal. You would think that for analyzing stripes in two-dimensional images, the usage of the two-dimensional (2D) Fourier transformation would be more suitable, but for several reasons it is not.

One reason is that the two-dimensional spectrum is hard to display. Typically, it is converted to a grey value image while applying some non-linear transformations to the grey values. This is done to make the resulting image fit the 8-bit range that can be displayed by today's monitors. If you do this to an image coming from a capped camera, the resulting two-dimensional spectrum is typically a black image with some very thin, sparsely distributed white spots. The spots correspond to regular stripes with a certain wavelength and inclination angle, and their brightness corresponds to the amplitude of the stripes. But the human eye is not very good at reading these kinds of images.

The other reason is that the 2D Fourier Transformation is pretty noisy when taken from a single image only (see below).

The one-dimensional Fourier transformation is a much more suitable way to deal with stripes in images. As shown in Fig 39, the signal causing the stripes is the same in each row – at least with respect to the signal's form and wavelength – and the phase is different for each row. As a result, most information about the stripes is contained in a single row and it is sufficient to apply the one-dimensional Fourier transformation to one row only. By using just one row, you will be missing information about the inclination angle of the stripes. In section 3.4.4 we'll see how to deal with that problem.

Applying a Discrete Fourier Transformation (DFT) to a row's grey values y(k) results in a complex signal Y(n):

    y(k) ←DFT→ Y(n)    (114)

If the image is cropped to a width being a power of two, W = 2^p, the calculations are simplified. As a result, the index k of the row signal y(k) runs in the interval k ∈ [0, W−1] and the index n of the Fourier transformed signal runs in the interval n ∈ [0, W−1]. Since the signal y(k) contains real numbers only, the Fourier transformed signal is conjugate symmetric:

    Y(n) = Y*(W − n)    (115)

This means that it in fact contains only W/2 + 1 independent values. The others can be computed from the equation above. The interpretation of that mathematical fact is that Y(n) contains the values for positive as well as negative frequencies. It is sufficient to deal with the positive frequencies only.

The squared amplitude of the Fourier transformed signal is called the power spectrum:

    |Y(n)|² = Y(n)·Y*(n)    (116)

This is computed by multiplying the complex signal with its conjugate complex version.

Unfortunately, the DFT implies that the grey value signal is periodic (see Fig. 44). It behaves as if the row signal does not stop with the right-most data point, but the signal will repeat itself starting with the left-most data point of the row. If there is a horizontal brightness gradient in the image, this causes unwanted effects because it will result in a step when the signal is periodically complemented, as shown in Fig. 44.

[Fig. 44: row signal y(k) for k ∈ [0, W−1] with its periodic complementation on both sides and the resulting step]
Fig. 44 Periodic complementation of a discrete Fourier transformed signal
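For real-valued rows, the conjugate symmetry of eq. 115 means that a real FFT returns exactly the W/2 + 1 independent values. A minimal sketch in Python with numpy:

    import numpy as np

    row = np.random.default_rng(1).normal(size=512)   # one row, W = 512
    Y = np.fft.rfft(row)            # W/2 + 1 = 257 complex values, eq. 115
    power = (Y * Y.conj()).real     # eq. 116: the power spectrum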


However, it's not only the grey value signal but also its spectrum that is periodically complemented, and this causes problems if the complemented spectra overlap.

To eliminate these problems, you should take four counter-measures (for details see [10]):

 Use homogeneous illumination only, with no gradient over the image. The most interesting spectrum is taken from a dark image anyway.

 Subtract the mean of the entire row signal from each gray value and use signed float numbers for the calculation of the FFT:

    y(k) := y(k) − (1/W)·Σ_k y(k)    (117)

 Apply a window to the row data, e.g., a Hamming window. Note that you need to rescale the amplitude spectrum.

 Pad the row signal with trailing zeros:

    y′(k) = y(k) for k ∈ [0, W−1],  0 for k ∈ [W, 2W−1]    (118)

The last step doubles the length of the row signal. As a consequence, the Fourier transformed signal also doubles in length. Note that no new information has been added to the signal. The additional values result from interpolation of the original values (see [10]). If you ignore the values for negative frequencies, the resulting Fourier transformed signal will consist of W+1 values from the interval n ∈ [0, W].

Computing the Spectrogram:

 Restrict the number of pixels per line so that the largest number N = 2^q is less than or equal to the image width.

 For each of the M lines of the image, compute the amplitude of the Fourier transformed:
− Prepare an array y(k) with the length 2N.
− Copy the pixels from the image to the first half of the array (0 ≤ k ≤ N−1).
− Compute the mean of the pixels in the first half of the array: ȳ = (1/N)·Σ_k y(k)
− Subtract the mean from the values: y(k) := y(k) − ȳ
− Fill the second half of the array with zeros (N ≤ k ≤ 2N−1).
− Apply a (Fast) Fourier Transformation to the array:

    Y(n) = Σ_k=0^2N−1 y(k)·e^(−j·2π·n·k/(2N))

  The frequency index n runs in the interval 0 ≤ n ≤ N, yielding N+1 (!) complex result values.
− Compute the amplitude of the Fourier transformed as: |Y(n)| = √(Y(n)·Y*(n))

 Take the mean of the amplitude values for all M lines of the image:

    |Ȳ(n)| = (1/M)·Σ_j |Y_j(n)|

where |Y_j(n)| is the amplitude of the Fourier transformed of the j-th line.

The N+1 values |Ȳ(n)| with 0 ≤ n ≤ N form the spectrogram of the image. It should be flat, with occasional peaks only.

Another problem arises because the power spectrum is very noisy when it is computed from the DFT of only a single row. Even worse, the noise's amplitude is of the same order of magnitude as the spectrum signal itself, and the noise does not vanish when more points are added to the DFT. This property of the discrete Fourier Transformation explains why the two-dimensional DFT also looks rather noisy.

A simple but very effective method to deal with the problem is the Bartlett method (see [10]). Compute the power spectrum separately for each row signal and take the mean of the resulting power spectra over all rows. The variance of the resulting signal will decrease with the square root of the number of rows available in the image.
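A sketch of the whole recipe in Python with numpy, combining the counter-measures with the Bartlett smoothing; the optional window of the third counter-measure is omitted for brevity:

    import numpy as np

    def amplitude_spectrum(img):
        """Bartlett-smoothed amplitude spectrum of the image rows."""
        rows = img.astype(np.float64)
        H, W = rows.shape
        rows = rows - rows.mean(axis=1, keepdims=True)   # eq. 117
        padded = np.zeros((H, 2 * W))
        padded[:, :W] = rows                             # eq. 118: zero padding
        Y = np.fft.rfft(padded, axis=1)                  # W + 1 values, n in [0, W]
        power = np.mean((Y * Y.conj()).real, axis=0)     # Bartlett: average the power
        return np.sqrt(power)                            # smoothed amplitude spectrum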


If y_r(k) is the grey value signal of the row with index r ∈ [0, H−1], where H is the height of the image, and if |Y_r(n)|² is the corresponding power spectrum for each row, the resulting Bartlett-smoothed signal is computed as:

    |Ȳ(n)|² = (1/H)·Σ_r |Y_r(n)|²    (119)

For display and analysis, it makes more sense to use the smoothed amplitude spectrum |Ȳ(n)| instead of the power spectrum:

    |Ȳ(n)| = √(|Ȳ(n)|²)    (120)

Note that the smoothing needs to be done on the squared amplitude. Because the amplitude spectrum has the same "unit" as the digital grey values, it can be referred from digital numbers to electrons and even to photons, just as you can with the digital grey value signal. Using the photon-referred amplitude spectrum:

    |Ȳ_p(n)| = (1/(K·η))·|Ȳ(n)|    (121)

where K is the conversion gain and η is the total quantum efficiency, makes the spectra of different cameras comparable.

Fig 45 shows as an example the spectra of the A600f and the A102f, taken with the cameras capped. The amplitude is given in photons. Because it is more convenient for understanding the spectrum, the period length on the abscissa is given in units of pixels instead of the frequency:

    λ = 2·W/n    in [px]    (122)

W is the width of the image and n is the index of the value in the spectrum, running in the interval n ∈ [0, W]. The right-most point, where n = W, has a period length of two. This would be, for example, a signal where the even columns are bright and the odd columns are dark. The point in the middle of the abscissa corresponds to a period length of four, and so on. The left-most point, with a period length of infinity, is the mean value of the signal. Since the mean was subtracted from each row beforehand, this value should be zero. Note that if you do the measurement with light and the lighting is not completely homogeneous, it will cause some signal on the left of the spectrum; this can be ignored.

Another convenient scaling is the absolute frequency derived from the pixel clock of the camera:

    f = (f_pixel/2)·(n/W)    in [Hz]    (123)

The right-most value equals the Nyquist frequency, which is half of the pixel clock.

[Fig 45: photon-referred amplitude spectra (0-1600 photons) of the Basler A600f and the Basler A102f versus period length]
Fig 45: Photon referred amplitude spectra for the Basler A600f and the Basler A102f camera (in darkness)

As you can see, both cameras have a floor of white noise; this is the flat part of the spectrum.

The white noise floor's value can be extracted from the spectrum automatically by computing the spectrum's median. Sort the values |Ȳ(n)| and take the value with the index N/2:

    σ_y.white = sort(|Ȳ(n)| | n = 0, 1, 2, … N)|_index=N/2    (124)

The full amount of noise in the image, on the other hand, can be computed by averaging the squared noise amplitudes:

    σ²_y.full = (1/N)·Σ_n=1^N |Ȳ(n)|²    (125)

Determining the noise level this way prevents some problems with the algorithms given in section 2.3, which can be confused by certain kinds of artifacts in the image.

The non-whiteness coefficient is a suitable measure of how "white" the noise is:

    F = σ²_y.full/σ²_y.white    (126)

The coefficient should be ~1. If it is larger, the noise is not white.
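Extracting these noise levels from a spectrum is then a one-liner each. A sketch in Python with numpy, applied to an amplitude spectrum as computed in the previous sketch:

    import numpy as np

    def noise_from_spectrum(amp):
        """White floor, full noise and non-whiteness F, eqs. 124-126.
        amp: smoothed amplitude spectrum with index 0 = mean value."""
        white = np.median(amp)                 # eq. 124: the median
        full  = np.sqrt(np.mean(amp[1:]**2))   # eq. 125, skipping n = 0
        F = full**2 / white**2                 # eq. 126
        return white, full, F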


As can be seen in Fig 45, the noise floor of the A600f CMOS camera is much higher than that of the A102f CCD camera. The noise level of the A102f is so low that some EMI signals show up as small peaks. These peaks are also present in the A600f, but they are buried in the larger amount of white noise.

The A600f has two quite large peaks at period lengths of four and two. They come from the internal structure of the sensor, where four columns share a common analog-to-digital converter, resulting in vertical stripes of period length four. The stripes are not sine shaped, and thus not only the base frequency (the peak at 4) but also higher harmonics (the peak at 2) are present.

The amplitude spectrum is very useful to check for periodic non-uniformities in the image and to get a feeling for how large they are. However, it is not easy to map the peaks in the spectrum to the stripes in the image. The reason is the missing phase information. This problem can be attacked with several extensions of the spectrum analysis described in the next sections.

3.4.4 Drawing the Phase Information

This section deals with the question: does a peak in the amplitude spectrum at a certain frequency correspond with some stripes seen in the (contrast enhanced) image? The frequency of interest is described by the index n_0. For the line with the index r, the Fourier transformed at the frequency of interest has the value Y_r(n_0). Up to this point, we have used only the amplitude of this complex number. Now we will also use the phase:

    φ_r(n_0) = arg Y_r(n_0)    in [rad]    (127)

The frequency with index n_0 corresponds to a sine wave with a wavelength of:

    λ(n_0) = 2·W/n_0    (128)

in units of pixels. We could display the position of this sine wave in the row with index r by drawing a series of dots at a distance of λ(n_0) from one another. However, the sine waves for the different rows start at different points ∆r(n_0), and this can be computed from the phase shift as:

    ∆r(n_0) = λ(n_0)·φ_r(n_0)/(2π)    in [px]    (129)

Now the series of dots starts at a different place for each row. In a case where the frequency corresponds to some stripes, the dots drawn will follow the stripes. Otherwise, they will be incoherent. Fig 46 shows an example using simulated stripes with added noise. In the upper image, the frequency cursor is placed at the peak which indicates the base frequency of the stripes. As a consequence, the small yellow dots follow the stripes.

[Fig 46: simulated stripes with the phase dots overlaid, once at the stripes' base frequency and once at a noise frequency]
Fig 46 Stripes with phase (for details see text)

In the lower image, the cursor is placed at some other frequency where only simulated Gaussian white noise is present. As a consequence, the phase of the lines is not coherent but irregular. Due to the leakage effect (see [10]), however, it is not completely irregular.

3.4.5 Coherent Amplitude Spectrum

Typically, an EMI signal is not of pure sine shape. Consequently, it shows up in the amplitude spectrum not only as one peak at the base frequency, but as a whole series of peaks including higher harmonics. In analyzing a spectrum, it is important to check which peaks belong together and form a group. This is especially important for camera designers because their task is to understand each group of peaks and get rid of them. (This section is somewhat beyond the scope of the normal camera user.)

A simple but very powerful technique is to freeze various parts of the camera electronics and to see if a group of peaks starts to move. Another technique is to attach small wires to possible sources of EMI signals and to use them as antennas. Move the antennas to sensitive parts of the electronics and check to see if groups of peaks begin to rise.


r  n 
Yˆ (n ) = Y r (n )exp − jϕ r (n0 )  (130) 5
 n0 
4
All row signals are now in sync with respect to the
base frequency and should also be in sync for any 3
other frequency containing coherent signals. To test
this, we normalize the shifted Fourier transformed 2
values:
r 1
r Yˆ (n )
Y 0 (n ) =
ˆ (131)
r
Yˆ (n ) 0

infinite
32,0
16,0
10,7
8,0
6,4
5,3
4,6
4,0
3,6
3,2
2,9
2,7
2,5
2,3
2,1
2,0
period [px]
and take the mean over all rows:
1 Fig 48 Spectrum and coherent spectrum (for de-
∑ Yˆ 0 (n )
r
M (n ) = (132) tails see text)
H r

where H is the height of the image. If the terms 3.4.6 Stripe Detector
r
Yˆ 0 (n ) have similar angles, the absolute value
The method described in the section above can be
M (n ) of this mean is 1. If not, M (n ) degrades to adapted to yield a measure of how “stripy” the
peaks of a spectrum are.
zero. Fig 47 shows why. The phase can be thought
of as vectors with their tips on the unity circle. If Compute the phase difference of pairs of lines:
the vectors share a similar angle, the mean of the
*
tips is near the circle. Otherwise, the mean will be
Yˆ (n ) = Yˆ (n )  Yˆ (n )
∆r r r −1
near the center. (133)
 
∆r
mean ∆r Yˆ
mean Yˆ 0 (n ) = (134)
∆r

r
and treat that mean as you did Yˆ 0 (n ) in the previ-
ous section. The result is a measure of how coher-
ent the signal is per frequency.
coherent phase incoherent phase
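Under the same assumptions as before (numpy, `img` as a 2-D float array; the order of normalization and averaging follows our reading of eq. (134)), a sketch of the stripe detector:

```python
import numpy as np

def stripe_detector(img):
    """Per-frequency "stripiness" following eqs. (133)-(134).

    The conjugate product of neighbouring rows encodes the row-to-row
    phase difference at each frequency. For vertical stripes this
    difference is nearly the same for all row pairs, so the normalized
    products average to a magnitude near 1; for incoherent content the
    magnitude drops towards zero."""
    Y = np.fft.rfft(np.asarray(img, dtype=float), axis=1)  # row spectra
    D = Y[1:, :] * np.conj(Y[:-1, :])                      # eq. (133)
    D0 = D / np.maximum(np.abs(D), 1e-12)                  # unit phasors per pair
    return np.abs(D0.mean(axis=0))                         # eq. (134) plus magnitude
```

Unlike the coherent spectrum of the previous section, no base frequency has to be chosen; the measure is computed for every frequency at once.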

3.4.7 Characteristic Waveform

The coherent spectrum as described in the section above raises the question of what the corresponding waveform looks like. This can be computed by taking the mean of the row signals, each shifted by its individual row's phase shift. This is exactly the same operation in the spatial domain as we performed in the frequency domain in the previous section. As we saw in section 3.4.4, each line has a phase shift of:

∆r(n_0) = λ(n_0) · ϕ_r(n_0) / 2π   (135)

in units of pixels. Now we make each line begin with phase zero by shifting it by ∆r(n_0) to the left, yielding:

ŷ_r(k) = y_r(k + ∆r(n_0))   (136)
and then take the mean over all lines:

m(k) = (1/H) · Σ_r ŷ_r(k)   (137)

where H is the height of the image. The resulting grey value waveform m(k) corresponds to the coherent spectrum. Fig 49 shows the characteristic function for the large peak at a period of 3.2 px shown in Fig 48. It is a very interesting pattern composed of several sine waves.

[Figure: grey values (about 100 to 135) plotted over the x-coordinate (700 to 900)]

Fig 49 Characteristic function (for details see text)

A very useful special case is ∆r(n_0) = 0. The resulting characteristic function is the mean of all rows and will show column non-uniformities.
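As a sketch of eqs. (135)-(137) (again assuming numpy and a 2-D float image `img`; passing the wavelength `lam` = λ(n_0) directly and realizing the fractional shift of eq. (136) in the frequency domain are our implementation choices, not prescribed by the paper):

```python
import numpy as np

def characteristic_waveform(img, lam):
    """Characteristic waveform m(k) per eqs. (135)-(137).

    Each row is shifted to the left by its own offset ∆r(n0) so that
    all rows start with phase zero at the base frequency; the mean of
    the shifted rows reveals the waveform behind a coherent peak."""
    img = np.asarray(img, dtype=float)
    H, W = img.shape
    x = np.arange(W)
    Y0 = img @ np.exp(-2j * np.pi * x / lam)   # row coefficients at the base frequency
    shift = lam * np.angle(Y0) / (2 * np.pi)   # eq. (135), fractional pixels
    rows = np.empty_like(img)
    k = np.arange(W // 2 + 1)                  # rfft bin indices
    for r in range(H):
        Yr = np.fft.rfft(img[r])
        # eq. (136): y_r(k + ∆r) realized as a phase ramp in the frequency domain
        rows[r] = np.fft.irfft(Yr * np.exp(2j * np.pi * k * shift[r] / W), n=W)
    return rows.mean(axis=0)                   # eq. (137)
```

Setting `shift` identically to zero turns this into the plain column mean, the special case mentioned above that exposes column non-uniformities.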
3.5 Dirt and Defective Pixels

TBD:
- Definition of defective pixels
- Image differences: parallel light / diffuse light
- Skipping defective pixels for spatial noise measurement
- Dealing with defective pixels in the camera

3.6 Blooming and Smear

TBD:
- What blooming and smear look like (A102f)
- Measurement à la Kodak

3.7 Linearity

TBD:
- Definition & Measurement

3.8 Modulation Transfer Function

TBD:
- Definition & Measurement

3.9 Micro-Lenses

TBD:
- Angle dependency of the QE

4 Acknowledgements

TBD: mention everyone who has given feedback ☺

5 References

[1] Holst, Gerald, "CCD Arrays, Cameras, and Displays", JCD Publishing, 1998, ISBN 0-9640000-4-0
[2] Marston, Neil, "Solid-state imaging: a critique of the CMOS sensor", PhD thesis, University of Edinburgh, 1998, (http://??)
[3] Kronmüller, Heinz, "Digitale Signalverarbeitung", Springer, 1991, ISBN 3-540-54128-4
[4] Dierks, Friedrich, "Quantisation Noise in Camera Images", Basler AG internal paper
[5] Burke, Michael, "Image Acquisition", Handbook of Machine Vision Engineering, vol. 1, Chapman & Hall, 1996, ISBN 0-412-47920-6
[6] Datasheet of the Micron CMOS sensor MT9V403, see http://www.micron.com
[7] Datasheet of the Sony CCD sensor ICX285AL, see http://www.framos.de/
[8] Datasheet of the Fillfactory CMOS sensor IBIS5a-1300, see http://www.fillfactory.com
[9] Burke, Michael, "Image Acquisition", Handbook of Machine Vision Engineering, vol. 1, Chapman & Hall, 1996, ISBN 0-412-47920-6
[10] Kammeyer, Karl, Kroschel, Kristian, "Digitale Signalverarbeitung", Teubner, Stuttgart, 1989, ISBN 3-519-06122-8
[11] Jähne, Bernd, "Practical Handbook on Image Processing for Scientific and Technical Applications", CRC Press, 2004
[12] Janesick, James R., "CCD characterization using the photon transfer technique", Proc. SPIE Vol. 570, Solid State Imaging Arrays, K. Prettyjohns and E. Dereniak, Eds., pp. 7-19 (1985)
[13] Janesick, James R., "Scientific Charge-Coupled Devices", SPIE Press Monograph Vol. PM83, ISBN 0-8194-3698-4
[14] Theuwissen, Albert J.P., "Solid-State Imaging with Charge-Coupled Devices", Kluwer Academic Publishers, 1995, ISBN 0-7923-3456-6
[15] EMVA standard 1288, "Standard for Measurement and Presentation of Specifications for Machine Vision Sensors and Cameras", see http://www.emva.org
[16] ISO standard 12232, "Photography - Electronic still-picture cameras - Determination of ISO speed", see http://www.iso.org/
[17] "Spectral Luminous Efficiency Function for Photopic Vision", CIE 86-1990, see http://www.cie.co.at/publ/abst/86-90.html
