
Vol 55, No 3—August 2021

ToF System Design—Part 2: Optical Design for Time of Flight Depth Sensing Cameras
Tzu-Yu Wu, Senior Optical Design Engineer

Abstract

Optics plays a key role in time of flight (ToF) depth sensing cameras, and the optical design dictates the complexity and feasibility of the final system and its performance. 3D ToF cameras have certain distinct characteristics1 that drive special optics requirements. This article presents the depth sensing optical system architecture—which consists of the imaging optics sub-assembly, the ToF sensor on the receiver, and the illumination module on the transmitter—and discusses how to optimize each sub-module to improve the sensor and system performance.

Introduction

ToF is an emerging 3D sensing and imaging technology that has found numerous applications in areas such as autonomous vehicles, virtual and augmented reality, feature identification, and object dimensioning. ToF cameras acquire depth images by measuring the time it takes light to travel from a light source to objects in the scene and back to the pixel array. The specific type of technology that Analog Devices' ADSD3100 backside illuminated (BSI) CMOS sensor implements is called continuous wave (CW) modulation, an indirect ToF sensing method. In a CW ToF camera, the light from an amplitude-modulated light source is backscattered by objects in the camera's field of view (FOV), and the phase shift between the emitted waveform and the reflected waveform is measured. By measuring the phase shift at multiple modulation frequencies, one can calculate a depth value for each pixel. The phase shift is obtained by measuring the correlation between the emitted waveform and the received waveform at different relative delays using in-pixel photon mixing demodulation.2 The concept of CW ToF is shown in Figure 1.

Figure 1. The concept of ToF technology.

Depth Sensing Optical System Architecture

Figure 2 shows the optical system architecture. It can be broken down into two main sub-module categories: the imaging module (also known as receiver or Rx) and the illumination module (also known as transmitter or Tx). The following sections introduce the function of each component, requirements distinct to the ToF system, and corresponding design examples.

Illumination Module

The illumination module consists of a light source, a driver that drives the light source at a high modulation frequency, and a diffuser that projects the optical beam from the light source to the designed field of illumination (FOI), as illustrated in Figure 2.

         

Figure 2. An example of a ToF optical system architecture cross-section: the illumination module (Tx) comprises the light source (VCSEL + driver) and a diffuser projecting the FOI; the imaging module (Rx) comprises the imaging optics and the ToF imager covering the FOV.
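The CW phase-to-depth relation described in the introduction can be sketched numerically: a measured phase shift φ at modulation frequency f_mod maps to depth d = c·φ/(4π·f_mod), with an unambiguous range of c/(2·f_mod) before the phase wraps (which is why multiple modulation frequencies are used). The 75 MHz frequency below is an assumed value for illustration, not an ADSD3100 specification.

```python
import math

C = 299_792_458.0  # speed of light (m/s)

def unambiguous_range(f_mod_hz: float) -> float:
    """Maximum depth before the phase wraps: c / (2 * f_mod)."""
    return C / (2.0 * f_mod_hz)

def depth_from_phase(phase_rad: float, f_mod_hz: float) -> float:
    """Depth from the measured phase shift: d = c * phi / (4 * pi * f_mod)."""
    return C * phase_rad / (4.0 * math.pi * f_mod_hz)

# Example with an assumed 75 MHz modulation frequency and a pi radian phase shift
f_mod = 75e6
print(f"unambiguous range: {unambiguous_range(f_mod):.3f} m")   # ~2 m
print(f"depth at phi = pi: {depth_from_phase(math.pi, f_mod):.3f} m")  # ~1 m
```

A higher modulation frequency improves depth resolution but shortens the unambiguous range, which is resolved by combining measurements at several frequencies.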

Light Source and Driver

ToF modules normally use light sources that are narrow band with low temperature dependence of the wavelength, including vertical cavity surface emitting lasers (VCSELs) and edge emitting lasers (EELs). Light emitting diodes (LEDs) are generally too slow for ToF modulation requirements. VCSELs have gained popularity over recent years due to their lower cost, small form factor, and reliability, along with being easy to integrate into ToF modules. Compared with EELs (which emit from the side) and LEDs (which emit from the sides and top), VCSELs emit beams perpendicular to their surface, which offers better yield in production and lower fabrication cost. In addition, the desired FOI can be achieved by using a single engineered diffuser with the designed divergence and optical profile. The optimization of the laser driver, as well as the electrical design and layout of the printed circuit boards (PCBs) and light source, is critically important to achieve high modulation contrast and high optical power.

Illumination Wavelength (850 nm vs. 940 nm)

While the ToF operational principle does not depend on the wavelength (rather, it depends on the speed of light) and therefore the wavelength should not affect the accuracy, the choice of wavelength can affect the system-level performance in some use cases. The following are some considerations when choosing the wavelength.

X Sensor quantum efficiency and responsivity

Quantum efficiency (QE) and responsivity (R) are linked to each other.

■ QE measures the ability of a photodetector to convert photons into electrons:

QE = (# of electrons collected)/(# of photons impinging on a photodetector), in % (1)

■ R measures the ability of a photodetector to convert optical power into electric current:

Responsivity = (current produced at photodetector)/(optical power on a photodetector), in A/W (2)

R = QE × q/(hc/λ)

where q is the electron charge, h is the Planck constant, c is the speed of light, and λ is the wavelength.

Typically, the QE of silicon-based sensors is about 2× better or more at 850 nm than at 940 nm. For example, ADI CW ToF sensors have 44% QE at 850 nm and 27% QE at 940 nm. For the same amount of illumination optical power, higher QE and R lead to better signal-to-noise ratio (SNR), especially when not much light returns to the sensor (which is the case for faraway or low reflectivity objects).

X Human perception

While the human eye is insensitive in the near infrared (NIR) wavelength range, light at 850 nm can be perceived by the human eye. On the other hand, 940 nm is invisible to the human eye.

X Sunlight

Although the solar emission is maximum in the visible region of the spectrum, the energy in the NIR region is still significant. Sunlight (and ambient light more generally) can increase depth noise and reduce the range of a ToF camera. Fortunately, due to atmospheric absorption, there is a dip in sunlight irradiance in the 920 nm to 960 nm region, where the solar irradiance is less than half compared to the 850 nm region (see Figure 3). In outdoor applications, operating the ToF system at 940 nm provides better ambient light immunity and leads to better depth sensing performance.

Figure 3. Solar spectral irradiance in NIR.3

Radiant Intensity (Optical Power per Solid Angle)

The light source generates a constant optical power distributed into a 3D space within the FOI produced by the diffusing optics. As the FOI increases, the energy sustained per steradian (sr)—that is, radiant intensity [W/sr]—decreases. It is important to understand the trade-offs between FOI and radiant intensity as they affect the SNR, and therefore the depth range, of the ToF system.
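The responsivity relation R = QE × q/(hc/λ) can be evaluated for the two candidate wavelengths using the ADI QE figures quoted above. Since hc/q ≈ 1239.84 eV·nm, the relation reduces to R ≈ QE × λ[nm]/1239.84 A/W; a minimal sketch:

```python
# Responsivity from quantum efficiency: R = QE * q / (h*c / lambda).
# With h*c/q = 1239.84 eV*nm, this reduces to R = QE * lambda_nm / 1239.84 [A/W].

HC_OVER_Q_EV_NM = 1239.84  # h*c/q in eV*nm

def responsivity(qe: float, wavelength_nm: float) -> float:
    """Responsivity in A/W for a quantum efficiency (0..1) at a wavelength in nm."""
    return qe * wavelength_nm / HC_OVER_Q_EV_NM

# QE values quoted in the text for ADI CW ToF sensors
r_850 = responsivity(0.44, 850.0)  # ~0.30 A/W
r_940 = responsivity(0.27, 940.0)  # ~0.20 A/W
print(f"R(850 nm) = {r_850:.3f} A/W, R(940 nm) = {r_940:.3f} A/W")
```

The roughly 1.5× responsivity advantage at 850 nm must be weighed against the sunlight and eye-visibility considerations discussed in this section.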

Table 1 lists a few examples of FOI and their corresponding radiant intensity normalized to the radiant intensity of a 60° × 45° FOI. Note that the radiant intensity is calculated as optical power per rectangular solid angle.

Table 1. Normalized Radiant Intensity

Case  Horizontal FOI  Vertical FOI  Normalized Radiant Intensity
1     60°             45°           100%
2     52°             52°           100%
3     60°             60°           76%
4     72°             58°           67%
5     78°             65°           56%

Illumination Profile Specifications

To fully define the illumination profile, several characteristics should be clearly specified, including the profile shape, profile width, optical efficiency (that is, enclosed energy within a certain FOV), and optical power drop-off outside the FOI. The illumination profile specification is normally defined in radiant intensity in angular space. Mathematically it is expressed as:

I = dΦ/dΩ (3)

where dΦ is the power emitted into the solid angle dΩ. The FOI needs to match the aspect ratio of the imager, and hence is normally square or rectangular.

X Illumination profile shape inside FOI

The most common radiant intensity profiles in ToF flood illumination have a batwing shape. They have a profile that varies as cos⁻ⁿ(θ) to compensate for the drop-off (that is, relative illumination) of the imaging lens. Figure 5 demonstrates an example of a batwing illumination profile. If one wishes to achieve constant irradiance on the pixel array of the imager from a flat target, one should also consider a cos³(θ) drop-off factor in irradiance (E) [W/m²] between the target center and the target edge, which is defined as:

E = dΦ/dA = I(θ)cos(θ)/R(θ)² = I(θ)cos³(θ)/R₀² (4)

where E is irradiance, dA is the surface area illuminated by optical power dΦ, R(θ) is the distance between the light source and dA as defined in Figure 4, and dΩ = dAcos(θ)/R(θ)².

Figure 4. Irradiance distribution vs. intensity.

X Width of the profile

The width of the profile determines the FOI of the illumination profile. It can be defined as the full width half max or 1/e² of the maximum intensity. To accommodate misalignment between the imaging lens and the imager and the tolerance of the diffuser, the FOI is normally designed to be slightly larger than the FOV of the lens to avoid dark pixels.

The width of the profile is the convolution of the intensity profile of the light source with the diffuser response to a collimated beam. The wider the input divergence angle to the diffuser, the wider the width and the slower the transition slope. A wider and slower transition slope results in more energy falling outside the FOI, which causes optical power loss. The acceptance criteria for such loss can be specified using the following two requirements.

X Optical efficiency—enclosed energy within the imaging lens FOV

This specification defines how much energy will be received by the imaging module and is specified by:

Optical efficiency = (2D integrated optical power inside the lens FOV)/(2D integrated optical power of the entire illumination profile) % × transmission of the diffuser (5)

Figure 5c illustrates the concept of 2D integration of the illumination profile within the FOV.

X Optical power drop-off outside FOI

Optical power drop-off outside the FOI = (Total integration of illumination profile − integration of illumination profile within FOI)/(Total integration of illumination profile) % (6)
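The normalized radiant intensity values in Table 1 can be reproduced numerically. A minimal sketch, assuming the rectangular (pyramidal) solid angle formula Ω = 4·arcsin(sin(θH/2)·sin(θV/2)), which is consistent with the article's note that radiant intensity is calculated per rectangular solid angle:

```python
import math

def rect_solid_angle(h_deg: float, v_deg: float) -> float:
    """Solid angle (sr) of a rectangular field with full angles h_deg x v_deg."""
    h, v = math.radians(h_deg), math.radians(v_deg)
    return 4.0 * math.asin(math.sin(h / 2.0) * math.sin(v / 2.0))

def normalized_radiant_intensity(h_deg: float, v_deg: float) -> float:
    """Radiant intensity for a fixed optical power, normalized to 60 x 45 deg."""
    return rect_solid_angle(60, 45) / rect_solid_angle(h_deg, v_deg)

for h, v in [(60, 45), (52, 52), (60, 60), (72, 58), (78, 65)]:
    print(f"{h} x {v} deg: {normalized_radiant_intensity(h, v):.0%}")
# prints 100%, 100%, 76%, 67%, 56% -- matching Table 1 within rounding
```

Doubling the solid angle at fixed power halves the radiant intensity, which directly reduces the returned signal and therefore the achievable depth range.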

Figure 5. An illumination profile example: (a) radiant intensity (cross-section), showing the ~cos⁻ⁿ(θ) batwing profile, the FOI (FWHM), and the FOV of the imaging lens; (b) radiant intensity (3D); (c) radiant intensity (top view).
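Equation 4 implies that a batwing profile with I(θ) = I₀/cos³(θ) yields constant irradiance on a flat target, since the cos³(θ) geometric fall-off is exactly cancelled. A quick numerical check of this cancellation, with illustrative values only:

```python
import math

R0 = 1.0  # perpendicular distance from source to the flat target (m), illustrative

def irradiance(intensity_w_sr: float, theta_deg: float) -> float:
    """Equation 4: E = I(theta) * cos^3(theta) / R0^2 on a flat target."""
    c = math.cos(math.radians(theta_deg))
    return intensity_w_sr * c**3 / R0**2

def batwing(theta_deg: float, i0: float = 1.0) -> float:
    """Batwing radiant intensity ~ cos^-3(theta), the n = 3 case for a flat target."""
    return i0 / math.cos(math.radians(theta_deg)) ** 3

for ang in (0, 15, 30):
    flat = irradiance(1.0, ang)          # constant-intensity source: falls off
    bat = irradiance(batwing(ang), ang)  # batwing source: stays constant
    print(f"{ang:2d} deg: constant-I E = {flat:.3f}, batwing E = {bat:.3f}")
```

In practice the exponent n is tuned so the combined transmitter profile and imaging-lens relative illumination give uniform signal across the pixel array.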

VISIT ANALOG.COM  3
In general, the optical efficiency can be improved by having a collimator lens between the light source and the diffuser to reduce the input angle to the diffuser, or by choosing a light source with a smaller divergence angle.

Imaging Module

The imaging module consists of an imaging lens assembly, a band-pass filter (BPF), and a microlens array on the imager. The thickness and material of the backside optical stacks on the imager should be optimized for low back-reflection. Figure 6 shows an illustration of the imaging module.

Figure 6. Illustration of the imaging module: imaging lens, band-pass filter, and imager with microlens array.

ToF Imaging Lens Design Considerations

Since the ToF camera collects light generated by active illumination, the efficiency and uniformity of the light collection on the pixel array greatly affect the overall performance. The lens needs to have strong collecting power, high transmission, and low stray light. The following are design considerations for ToF lenses, which are distinct from traditional RGB camera lenses.

X Light collecting efficiency

Light collection efficiency is proportional to 1/(f/#)², where f/# = (focal length)/(aperture size). The smaller the f/#, the better the efficiency. There are some trade-offs with a small f/# optical system. As the aperture size increases, there tend to be more vignetting and aberrations, which make the optics more challenging to design. A low f/# system also tends to have a shallower depth of field.

X Relative illumination (RI) and chief ray angle (CRA)

RI is defined as:

RI = (Illumination at a single point on the sensor)/(Maximum illumination within the field) (7)

In a distortion- and vignetting-free lens system, the sensor illuminance declines according to the cos⁴(θ) law, where θ is the CRA incident angle on the sensor plane. The result is a relative darkening of the image toward the sensor border. The irradiance fall-off can be reduced by introducing negative distortion in the lens system.

The max CRA at the sensor edge should be optimized based on the imager microlens array specification. A smaller CRA helps narrow the bandwidth of the BPF to achieve better ambient light immunity.

The following examples demonstrate how the CRA and the focused light cone sizes (effective f/#) across the field affect the RI. The lens system of Example 1 in Figure 7 has larger CRAs and gradually decreasing imaging cones (that is, increased f/#) as the field angle increases. The corresponding RI drops significantly with the field angle, as shown in the corresponding RI plot. Example 2 in Figure 7 demonstrates that the RI can be well maintained by minimizing the CRA as well as keeping a uniform f/# across the field.

Figure 7. Relative illumination examples: lens system cross-sections (light from the scene, band-pass filter, lens system, sensor plane) and RI vs. Y field plots for Examples 1 and 2.

X Stray light

Stray light is any unintended light in a system that could be detected by the sensor. Stray light can come from in- or out-of-field sources that form a ghost image (for example, lens flare) through even numbers of reflections. Stray light can also emanate from opto-mechanical structures and any scattering surfaces. ToF systems are particularly sensitive to stray light because the multipath nature of stray light contributes different optical path lengths to a pixel, which leads to depth measurement inaccuracies. Several strategies need to be used in the design process to reduce stray light, such as optimization of the anti-reflection (AR) coating and the mechanical aperture, darkening the lens edges and mounting structures, and custom design of the BPF to optimize for wavelength and CRA.

The following are some items that can impact stray light in a system:

■ Vignetting

Ideally there should not be any vignetting in a ToF lens system. Vignetting cuts off the imaging rays and is sometimes used as a technique to increase the image quality while trading off the brightness of the peripheral fields. However, the cutoff rays often bounce inside the lens system and tend to cause stray light issues.

■ AR coating

AR coating on the optical elements reduces the reflectance of each surface and can effectively reduce the impact of lens reflections on the depth calculation. AR coatings should be carefully designed for the light source wavelength range and the range of incident angles on the lens surfaces.

■ Number of lens elements

Although adding more lens elements provides more freedom to achieve the design specifications and better image quality in terms of resolution, it also increases the inevitable back reflections from the lens elements, as well as increasing complexity and cost.

■ Band-pass filter (BPF)

The BPF cuts off the ambient light contribution and is essential for ToF systems. The BPF design should be tailored to the following parameters to achieve the best performance:

(a) Lens parameters such as f/# and CRA across the field

(b) Light source parameters such as bandwidth, nominal wavelength tolerance, and thermal shift

(c) Substrate material properties for low incident angle drift vs. wavelength or low thermal drift vs. wavelength

X Microlens array

A ToF backside illuminated (BSI) sensor normally has a layer of microlens array that converges rays incident on the image sensor and maximizes the number of photons that reach the pixel modulation region. The geometry of the microlens is optimized to achieve the highest absorption within the pixel region where photons are converted into electrons.

In many lens designs, the CRA of the lens increases as the image height increases toward the edge of the sensor, as shown in Figure 8. This oblique incidence leads to absorption loss in the pixel and crosstalk between adjacent pixels when the CRA is too big. It is important to design or choose an imaging lens such that the CRA of the lens matches the specifications of the microlens array it is designed for. For example, the optimal CRA that matches the ADI ToF sensor ADSD3100 is around 12° at the sensor horizontal and vertical edges.

Figure 8. Max CRA of an imaging lens (CRA: chief ray angle; RRA: rim ray angle).
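The link between CRA and BPF bandwidth can be made concrete with the standard thin-film interference filter approximation λc(θ) ≈ λ0·√(1 − (sin θ/n_eff)²), where n_eff is the filter's effective refractive index. This formula and the n_eff value below are general filter-design assumptions for illustration, not ADSD3100 or any specific filter's specifications:

```python
import math

def bpf_center_shift_nm(lambda0_nm: float, aoi_deg: float, n_eff: float) -> float:
    """Blue shift of a band-pass filter's center wavelength with angle of incidence.

    Uses the standard thin-film approximation
    lambda_c(theta) = lambda0 * sqrt(1 - (sin(theta)/n_eff)^2)
    and returns lambda0 - lambda_c(theta), i.e. the shift toward shorter wavelengths.
    """
    s = math.sin(math.radians(aoi_deg)) / n_eff
    return lambda0_nm * (1.0 - math.sqrt(1.0 - s * s))

# Assumed values: 940 nm filter, effective index 1.85 (hard-coated NIR filters
# typically fall roughly in the 1.7 to 2.3 range), 12 deg CRA as quoted above.
shift = bpf_center_shift_nm(940.0, 12.0, 1.85)
print(f"center wavelength blue shift at 12 deg: {shift:.1f} nm")
```

The filter passband must cover the laser bandwidth and thermal drift plus this angular shift over the full CRA range, which is why a smaller max CRA permits a narrower BPF and better ambient light rejection.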

Conclusion

ToF optics have unique requirements to achieve optimal performance. This article provides an overview of a 3D ToF camera optical architecture and design guidelines for the illumination and imaging sub-modules to help design such optical systems and/or choose sub-components. For the illumination sub-module, the key factors are power efficiency, reliability, and the ability of the light source to be driven at a high modulation frequency with high modulation contrast. Wavelength selection between 850 nm and 940 nm, as well as how to specify the illumination profiles, is discussed in detail. For the imaging sub-module, the lens design considerations, including f/#, a CRA that matches the microlens specification, and stray light control, are critical to system-level performance.

References

1 Paul O'Sullivan and Nicolas Le Dortz. "Time of Flight System Design—Part 1: System Overview." Analog Dialogue, Vol. 55, No. 3, July 2021.

2 Cyrus S. Bamji, Swati Mehta, Barry Thompson, Tamer Elkhatib, Stefan Wurster, Onur Akkaya, Andrew Payne, John Godbaz, Mike Fenton, Vijay Rajasekaran, Larry Prather, Satya Nagaraja, Vishali Mogallapu, Dane Snow, Rich McCauley, Mustansir Mukadam, Iskender Agi, Shaun McCarthy, Zhanping Xu, Travis Perry, William Qian, Vei-Han Chan, Prabhu Adepu, Gazi Ali, Muneeb Ahmed, Aditya Mukherjee, Sheethal Nayak, Dave Gampell, Sunil Acharya, Lou Kordus, and Pat O'Connor. "IMpixel 65nm BSI 320MHz demodulated TOF Image sensor with 3μm global shutter pixels and analog binning." 2018 IEEE International Solid-State Circuits Conference (ISSCC), February 2018.

3 "Reference Air Mass 1.5 Spectra." National Renewable Energy Laboratory.

About the Author

Tzu-Yu Wu is a senior optical design engineer at Analog Devices. She leads ADI's optics development for time of flight (ToF) technology and has been working on optical designs for imaging lenses, illumination optics, stray light analysis, and optimization of the microlens array and the optical stacks of the backside illuminated CMOS ToF sensor. Prior to joining ADI, she worked at Canon U.S.A. developing advanced medical imaging systems such as ultra-miniature endoscopes and cardiovascular imaging catheters. She earned her Ph.D. and M.S. degrees in optical sciences from the University of Arizona, and an M.S. degree in physics from National Taiwan University. Her Ph.D. research focused on the development of high-resolution imaging devices that provide rapid detection of early-stage cancer through minimally invasive procedures. She can be reached at tzu-yu.wu@analog.com.

For regional headquarters, sales, and distributors, or to contact customer service and technical support, visit analog.com/contact.

Ask our ADI technology experts tough questions, browse FAQs, or join a conversation at the EngineerZone Online Support Community. Visit ez.analog.com.

©2021 Analog Devices, Inc. All rights reserved. Trademarks and registered trademarks are the property of their respective owners.
