A Novel Ultra-High Speed Camera For Digital Image
Abstract
Multi-channel gated-intensified cameras are commonly used for capturing images at ultra-high
frame rates. The use of image intensifiers reduces the image resolution and increases the error
in applications requiring high-quality images, such as digital image correlation. We report the
development of a new type of non-intensified multi-channel camera system that permits
recording of image sequences at ultra-high frame rates at the native resolution afforded by the
imaging optics and the cameras used. This camera system is based upon the concept of using a
sequence of short-duration light pulses of different wavelengths for illumination and using
wavelength selective elements in the imaging system to route each particular wavelength of
light to a particular camera. As such, the duration of the light pulses controls the exposure time
and the timing of the light pulses controls the interframe time. A prototype camera system
built according to this concept comprises four dual-frame cameras synchronized with four
dual-cavity pulsed lasers producing 5 ns pulses in four different wavelengths. The prototype is
capable of recording four-frame full-resolution image sequences at frame rates up to 200 MHz
and eight-frame image sequences at frame rates up to 8 MHz. This system is built around a
stereo microscope to capture stereoscopic image sequences usable for 3D digital image
correlation. The camera system is used for imaging the chip–workpiece interface area during
high speed machining, and the images are used to map the strain rate in the primary shear zone.
Meas. Sci. Technol. 19 (2008) 085503 A Hijazi and V Madhavan
the local intensity of light at that pixel. After the sensor has
been exposed to light for a period of time (the integration or
exposure time), a pattern of charges is collected in the pixels
(i.e., a frame is captured). The pattern of charges is then read
out to a storage device, freeing the sensor to capture another
image. The two most widely recognized types of MOS sensors
are the complementary metal oxide semiconductor (CMOS)
and the charge-coupled device (CCD). Both the CMOS and
CCD sensors were invented around the same time; however, due to the more complicated design of CMOS sensors, CCD technology developed much faster and CCD sensors became dominant.
The basic difference between CCD and CMOS sensors is
in the way an image (i.e., the pattern of charges collected in the
pixels) is transferred out of the sensor after it has been captured.
In the most basic type of CCD arrays, known as the ‘full
frame’ CCD [7], once a photogenerated pattern of charges is
collected in the pixels due to exposure to the incident radiation,
the incoming light is shuttered. Below the image section
there is another row of similar storage elements, which are not
photosensitive (i.e., shielded from light), known as the read-
out section. The charges in the pixels are transferred down one
row at a time into the read-out section, which in turn transfers the charges along the row; the charges are then converted to voltage and transferred out through the output amplifier.
Once the analog signal (corresponding to charge level in each
pixel) exits the sensor, it is then digitized and transferred to a
storage device. Since the serial transfer of the charges through
the read-out section takes a relatively long period of time, an
external shutter (mechanical, magneto-optic or electro-optic)
is typically used in order to avoid ‘smear’ of the image (i.e.,
the pick up of charges during the transfer period) [7]. Once the entire frame has been read out, the shutter can then be opened to capture the next frame. In a CMOS sensor, on the other hand, each pixel has its own charge-to-voltage conversion, and in modern designs the sensor also includes amplifiers, noise correction and digitization circuits, so the sensor outputs a digital signal. The design of a CMOS sensor allows the signals to be transferred out of the pixels in parallel, and thus the read-out speed can be much higher than that of a CCD sensor. However, the other functions which are integrated into the pixels increase the design complexity and reduce the area available for light capture, known as the 'fill factor'. Also, with each pixel doing its own conversion, uniformity is lower. Each of the two types of sensors has its advantages and disadvantages. For scientific imaging applications, the most significant advantages of CCDs, as compared to CMOS, are the higher sensitivity and dynamic range (both due to the higher fill factor) and the lower noise level, whereas the most significant disadvantage is the low read-out speed.

The most commonly used types of CCD arrays are shown schematically in figure 2: the 'frame-transfer' CCD, 2(a), and the 'interline' CCD, 2(b) [6, 7]. A frame-transfer CCD is divided into two identical sections, namely, the image section, which is photosensitive, and the storage section, which is shielded from light. At the end of the time allowed for charge collection (i.e., the exposure time or the integration time), the entire array of charges in the image section is quickly transferred (parallel transfer) to the storage section in a process known as frame-transfer, whereupon the imaging section is ready again to acquire one more image. The frame is then read out of the storage section by serial transfer through the read-out section. The interline CCD, figure 2(b), works in a similar manner except that the storage section is in the form of shielded columns spaced between the columns of the imaging pixels. At the end of the exposure time, the charge in each pixel is transferred to the storage pixel adjacent to it and then read out in the usual way. In the interline CCD, the pattern of charges can be transferred to the storage sections faster than in a frame-transfer CCD, because the charge in each pixel is transferred directly to its corresponding storage element. However, the insensitive regions in the image area make interline CCDs less suitable for scientific applications [7].

One of the advantages of frame-transfer and interline CCD arrays over the basic 'full frame' CCD is the minimal smearing, arising from the fact that charges are transferred from the imaging pixels much faster using parallel transfer. In many frame-transfer and interline CCD arrays, a frame can be transferred to the storage section in less than a microsecond. Therefore, for these types of CCDs, the need for an external shutter (to avoid smearing) is practically eliminated. With the elimination of external shuttering,

Figure 2. Common types of CCD arrays: (a) frame-transfer CCD, (b) interline CCD.
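The smear mechanism described above can be illustrated with a toy charge-budget model. The function and all numbers below are illustrative assumptions for the sketch (a 10 µs row shift, a 1 ms exposure), not parameters quoted in this paper.

```python
def readout_smear(n_rows, row_shift_time, exposure_time, flux=1.0):
    """Ratio of stray charge (smear) to true signal for the last row of a
    sensor that stays illuminated while rows are shifted out serially."""
    signal = flux * exposure_time            # charge from the intended exposure
    # the last row is shifted past every illuminated row position before
    # leaving the sensor, collecting stray charge during each shift
    smear = flux * n_rows * row_shift_time
    return smear / signal

# Full-frame CCD read out without a shutter: 1000 rows, 10 us per row shift
full_frame = readout_smear(1000, 10e-6, 1e-3)   # -> 10.0: smear dominates
# Frame-transfer CCD: the whole image moves to shielded storage in ~1 us
frame_transfer = 1e-6 / 1e-3                    # -> 0.001: negligible
```

This is why a full-frame CCD needs an external shutter during read-out, while a sub-microsecond parallel frame-transfer makes one unnecessary.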
illumination pulse duration defines the 'effective' exposure time. This approach allows the acquisition of a pair of images, one in the image section and the other in the storage section. Thus, in the dual-frame mode of operation the camera can capture two frames in very quick succession, and then has to wait for hundreds of milliseconds (the read-out time of the two frames) before it can capture another pair of images. The minimum interframe separation is limited by the frame-transfer time, which can be as small as 50 ns for the most recent models. Cameras optimized to minimize the frame-transfer time and aimed entirely at this mode of operation are referred to as dual-frame cameras, and a variety of such cameras are available commercially.

It should be noted that the illumination should be provided in the form of a pulse that ends before frame transfer begins in order to prevent smearing. Otherwise, smearing during frame-transfer will be significant, since the frame-transfer time is not small when compared with the effective exposure time (i.e., the duration of the illumination pulse) and interframe separation. It should also be apparent that this method of image recording is more suitable for low ambient light conditions since the actual exposure time of the second frame is relatively long.

To realize operation at the minimum interframe time that these cameras will allow, namely, the frame transfer time, pulsed illumination has to be provided by sources that are bright enough to adequately expose the CCDs within a time interval that is a small fraction of the frame transfer time, shortly before and shortly after the transfer of the first frame. Pulsed lasers are increasingly being used as the illumination source since they are capable of providing up to 1 J of illumination within pulse durations as short as a few nanoseconds. If a single laser head is used to produce a train of pulses at a very high repetition rate, the energy per pulse will be very small and not sufficient for many imaging applications. Therefore, when multiple pulses with very short inter-pulse separation are needed, multiple synchronized laser heads are typically used. Dual-frame cameras are usually used in conjunction with dual-pulse (dual-cavity) Nd:YAG lasers for PIV in high speed flows. This type of camera is capable of capturing full-resolution images of good quality; however, the maximum frame rate is limited by the frame-transfer time and the number of obtainable frames is limited to two.

2.4. Acquiring high-resolution image sequences at ultra-high speeds

In order to capture sequences of more than two full-resolution images at ultra-high speeds, multiple cameras combined into multi-channel camera systems are typically used. Multi-channel cameras consist of multiple CCDs or cameras sharing the same viewing axis (using beam splitter(s), rotating mirror, rotating prism, etc), which are triggered in very quick succession to capture a sequence of images [6]. By the use of multiple cameras, the frame rate limitation imposed by the read-out time is eliminated and multiple images, corresponding to the number of internal cameras or CCDs, can be recorded.

In the most commonly used type of multi-channel cameras, known as gated intensified cameras, the light collected by the objective lens is delivered to the internal CCDs/cameras using a beam-splitter that splits the image into multiple identical images (multiple beam-splitters arranged in a branching configuration can also be used). Each of the internal cameras has an image intensifier comprising a photocathode screen that emits photoelectrons proportional to the image intensity, a microchannel plate that uses the avalanche effect to amplify the electron current, a scintillator that reforms a visible image and a CCD optically coupled to the scintillator screen to form an electronic image. The ability to switch the microchannel plate on or off rapidly is used as an external shutter to control the exposure time (exposure times down to about 1.5 ns can be achieved) and the exposure sequence of the CCDs/cameras. The shuttering provided by the intensifiers removes the requirement that the illumination be pulsed. Typically, illumination is provided in the form of a single flash of light of duration long enough to capture a sequence of frames. The second function of the intensifier is to amplify the received light (gains up to 10³ are commonly used), which helps reduce the relative importance of read-out noise. Though frame rates in excess of 100 MHz can be achieved with such cameras, the resolution of the images is limited by the resolution of the intensifiers, which (currently) is 76 line-pairs mm⁻¹ at best. Also, use of image intensifiers reduces image quality due to distortion caused by the coupling optics, cross-talk between adjacent pixel elements caused by the microchannel plate, and the fact that shot noise is also amplified by the intensifier. Note that, since image intensifiers are expensive, each of the intensified cameras is usually a dual-frame camera, such that the frame count is doubled.

There have also been some attempts to devise multi-channel non-intensified high speed cameras [12, 13]. Such cameras consist of multiple frame-transfer or interline CCDs/cameras sharing the same viewing axis using a beam-splitter. This type of camera works in synchronization with pulsed illumination sources (typically a laser), where the pulses' duration and separation define the effective exposure time and interframe separation for the captured images. Note that pulsed illumination is necessary, as otherwise smearing will be significant since the frame-transfer time is no longer small when compared to the exposure time. Also, it should be noted that the interframe separation between two successive frames is limited by the frame-transfer time, since no external shuttering is provided. In addition, there are practical limitations on the total number of internal CCDs/cameras that can be used, because the light collected by a single objective lens needs to be split among several CCDs/cameras. Multi-channel non-intensified cameras working according to this technique have been reported to be able to capture four full-resolution consecutive images at frame rates up to 1 MHz [13].

In addition, there are other types of multi-channel non-intensified ultra-high speed cameras, such as the rotating mirror camera [6] and the Cranz–Schardin camera [14]. In the rotating mirror camera, a mirror rotating at a very high speed is used to deliver the incoming light to the internal cameras one at a time. However, images captured using rotating mirror cameras are not suitable for use in PIV or DIC because they suffer some loss of resolution due to image drag resulting from
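The dual-frame timing rules described above (the first pulse must end before frame transfer begins, and the second frame's pulse must arrive only after the transfer completes) can be written as a small feasibility check. All timing values below are illustrative assumptions, not the specifications of any particular camera.

```python
def dual_frame_ok(pulse1_start, pulse_width, pulse2_start,
                  transfer_start, transfer_time):
    """Check that two illumination pulses straddle the frame-transfer
    window of a dual-frame camera without causing smear (times in s)."""
    # first pulse must finish before the charges start moving
    first_ok = pulse1_start + pulse_width <= transfer_start
    # second pulse must not begin until the transfer has completed
    second_ok = pulse2_start >= transfer_start + transfer_time
    return first_ok and second_ok

# 5 ns pulses around a 50 ns frame transfer (illustrative numbers)
transfer_start, transfer_time = 100e-9, 50e-9
print(dual_frame_ok(90e-9, 5e-9, 155e-9, transfer_start, transfer_time))  # True
print(dual_frame_ok(98e-9, 5e-9, 155e-9, transfer_start, transfer_time))  # False: pulse overlaps transfer
```

The interframe separation is simply the gap between the two pulse starts (65 ns in the first case), so it can never be shorter than the frame-transfer time plus one pulse width.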
from the four dual-cavity lasers are combined along a single axis using high-pass dichroic mirrors. The illumination pulses are then delivered through a liquid-optic cable to the coaxial illuminator module of the stereo microscope. The liquid-optic cable helps decohere the light reaching the target so that speckle patterns are not observed in the images. A picture of the prototype camera system being used in our lab is shown in figure 7.

The cameras are controlled through the camera control cards, which communicate with the cameras to set up the operating parameters, such as the exposure time, and to download the images. A programmable timing unit (PTU) is used to send the triggering signals to the lasers and the cameras. Twelve channels of the timing generator are used for triggering the lasers and the cameras: one channel for each of the four cameras and one channel for triggering each of the eight laser pulses. Note that for cameras running in the dual-frame mode, only one trigger signal is needed per camera in order to initiate the first exposure, while the timing of the second frame is controlled by the second illumination pulse. Though two trigger events are needed to trigger each laser pulse (one for the flash lamp and one for the Q-switch), by using the rising and falling edges of the trigger signal to trigger the flash lamp and the Q-switch, respectively, one channel suffices for each laser pulse. The PTU used has a time resolution of 50 ns and thus can run the camera system at frame rates up to 20 MHz. For shorter interframe intervals, down to 5 ns (corresponding to a frame rate of 200 MHz), and for event-based triggering, a custom-built circuit consisting of photodiodes, digital delay generators and high speed comparators is used.

The DaVis StrainMaster software package is used to communicate with and control the cameras, set up the laser triggers, and set up trigger delays to compensate for signal propagation delays. It is also used for carrying out calibrations of the imaging system at each of the zoom settings used and for carrying out 2D and 3D DIC.

5. Discussion

The design concept presented in this paper facilitates the fabrication of a multi-channel ultra-high speed camera capable of capturing a sequence of high quality images usable in DIC or PIV applications. It should be apparent that this concept is quite general and is applicable to a variety of camera types and pulsed illumination sources. According to the concept described here, a system comprising n dual-frame cameras, each of which is limited to a maximum framing rate of f, can be used to acquire a sequence of 2n frames at a framing rate up to nf, as illustrated for the prototype camera in the timing diagram in figure 8(a). Specifically, eight frames can
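The 2n-frames-at-rate-nf argument above can be made concrete with a short scheduling sketch. The function below is a hypothetical illustration; it ignores the PTU's 50 ns resolution and the flash-lamp/Q-switch delays handled by the real trigger hardware.

```python
def trigger_schedule(n_cameras, camera_max_rate_hz):
    """Interleave 2n illumination pulses across n dual-frame cameras.
    Camera i records pulses i and i + n, so its own two exposures are
    separated by exactly 1/camera_max_rate_hz, which it can sustain,
    while the combined sequence runs n times faster."""
    n = n_cameras
    dt = 1.0 / (n * camera_max_rate_hz)      # combined interframe time
    return [{"time_s": k * dt, "camera": k % n} for k in range(2 * n)]

# Prototype-like numbers: 4 dual-frame cameras at 2 MHz each
sched = trigger_schedule(4, 2e6)
combined_rate = 1.0 / (sched[1]["time_s"] - sched[0]["time_s"])   # 8 MHz overall
cam0 = [s["time_s"] for s in sched if s["camera"] == 0]
per_camera_gap = cam0[1] - cam0[0]                                 # 0.5 us = 1/(2 MHz)
```

With four cameras this reproduces the eight-frame, 8 MHz mode described in the abstract; the 200 MHz four-frame mode instead relies on the custom photodiode trigger circuit mentioned above for its 5 ns pulse spacing.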
each pair of images obtained simultaneously from the left and right perspectives (a and b, for instance) is correlated to locate the same features in both views. By the application of triangulation to the positions of corresponding points in the images in the two views, the topography of the surface at the time of observation is obtained. By cross correlating images at two times, and using the topographical information determined by triangulation, the three components of displacement at each point on the surface are obtained, as shown in figure 10(e). The velocity field can be obtained by dividing the displacement by the time interval, provided the time interval is sufficiently short to obtain an instantaneous spatial velocity field rather than a time-averaged material rate. The gradient of the velocity field yields the strain rate field, from which the equivalent strain rate can be obtained; see, for instance, figure 10(f). By correlating the images obtained while translating the workpiece without cutting it, it is found that images can be processed to yield a strain data point every eight pixels (a spatial resolution of 2 µm) with a strain noise less than 0.1% strain [1].

7. Concluding remarks

A novel multi-channel non-intensified ultra-high speed camera system comprising multiple dual-frame cameras and pulsed illumination sources is presented. The camera system is based on the idea of using light pulses of different wavelengths and routing light of a particular wavelength to only a particular camera using dichroic beam-splitters, so that the exposure timing can be controlled by the timing of the light pulses. The concept is quite general and is applicable to a variety of cameras and pulsed illumination sources. This concept allows the interframe separation to be made arbitrarily small and results in a camera system where non-intensified cameras can be combined to capture a sequence of high-resolution images at ultra-high speed. A prototype of the camera system was built around a stereo microscope and consists of four dual-frame cameras and four dual-cavity lasers. The prototype camera system is capable of capturing a sequence of high-resolution images usable for 2D or 3D DIC at framing rates up to 200 MHz. The camera system is being used with good success to capture high-resolution microscopic images of the PSZ during high speed machining, which yield strain measurements at spatial resolutions down to 2 µm with noise less than 0.1%.

Acknowledgments

We are pleased to acknowledge the financial support of the National Science Foundation (Grant numbers: DMI-0116675 and DMI-0421394; Program Manager: George Hazelrigg). The prototype camera system is available in the Manufacturing Processes Research Lab at Wichita State University (http://engr.wichita.edu/vmadhavan/MPRL.htm).

References

[1] Madhavan V, Mahadevan D, Sheshadri A, Yegneswaran K, Adibi-Sedeh A, Hijazi A and Saket-Kashani M 2008 Experimental determination of velocity and strain rate fields in metal cutting J. Mech. Phys. Solids at press
[2] Sutton M, Wolters W, Peters W, Ranson W and McNeill S 1983 Determination of displacements using an improved digital correlation method Image Vis. Comput. 1 133–9
[3] Willert C, Raffel M, Kompenhans J, Stasicki B and Kahler C 1996 Recent applications of particle image velocimetry in aerodynamic research Flow Meas. Instrum. 7 247–56
[4] Lecordier B, Mouqallid M, Vottier S, Rouland E, Allano D and Trinite M 1994 CCD recording method for cross-correlation PIV development in unstationary high-speed flows Exp. Fluids 17 205–8
[5] Luo P, Chao Y, Sutton M and Peters W 1993 Accurate measurement of three-dimensional deformations in deformable and rigid bodies using computer vision Exp. Mech. 33 123–32
[6] Ray S 1997 High Speed Photography and Photonics (Bellingham, WA: SPIE Press)
[7] Burt D 1995 Extending the performance limits of CCD image sensors GEC J. Res. 12 130–40
[8] Nebolsine P, Snyder D and Grace J 2001 MHz class repetitively Q-switched high-power ruby lasers for high-speed photographic applications Proc. AIAA 2001-0845
[9] Lowrance J, Mastrocola V, Renda G, Swain P, Kabra R, Bhaskaran M, Tower J and Levine P 2004 Ultra-high frame CCD imagers Proc. SPIE 5210 67–75
[10] Smith G, Phillips P, Middleton M and Archibald A 1994 High-speed CCD read-out camera system Proc. SPIE 2273 61–70
[11] Etoh G et al 2003 An image sensor which captures 100 consecutive frames at 1 000 000 frames/s IEEE Trans. Electron Devices 50 144–51
[12] Willert C, Stasicki B, Raffel M and Kompenhans J 1995 Digital video camera for application of particle image velocimetry in high-speed flows Proc. SPIE 2546 124–34
[13] Gray C, Skopp D, Wieneke B and Reckers W 2001 High-speed PIV using high-frequency diode-pumped solid state laser and multi-frame CCD Proc. Int. Cong. on Instrumentation in Aerospace Simulation Facilities p 135
[14] Bretthauer B, Meier G and Stasicki B 1991 An electronic Cranz–Schardin camera Rev. Sci. Instrum. 62 364–8
[15] Kaehler C and Kompenhans J 1999 Multiple plane stereo PIV—technical realization and fluid-mechanical significance Proc. Int. Workshop on PIV p 281
[16] Hu H, Saga T, Kobayashi T, Taniguchi N and Yasuki M 2001 Dual-plane stereoscopic particle image velocimetry: system set-up and its application on a lobed jet mixing flow Exp. Fluids 31 277–93
[17] Shinohara K, Sugii Y, Aota A, Hibara A, Tokeshi M, Kitamori T and Okamoto K 2004 High-speed micro-PIV measurements of transient flow in microfluidic devices Meas. Sci. Technol. 15 1965–70
[18] Veledar O, Byrne P, Danaher S, Allen J, Thompson L and McMillan J 2007 Simple techniques for generating nanosecond blue light pulses from light emitting diodes Meas. Sci. Technol. 18 131–7