
US 20160227195A1

(19) United States
(12) Patent Application Publication    (10) Pub. No.: US 2016/0227195 A1
     Venkataraman et al.               (43) Pub. Date: Aug. 4, 2016

(54) SYSTEMS AND METHODS FOR MEASURING DEPTH USING IMAGES CAPTURED BY A CAMERA ARRAY INCLUDING CAMERAS SURROUNDING A CENTRAL CAMERA

(71) Applicant: Pelican Imaging Corporation, Santa Clara, CA (US)

(72) Inventors: Kartik Venkataraman, San Jose, CA (US); Amandeep S. Jabbi, San Francisco, CA (US); Robert H. Mullis, Santa Cruz, CA (US)

(73) Assignee: Pelican Imaging Corporation, Santa Clara, CA (US)

(21) Appl. No.: 15/095,930

(22) Filed: Apr. 11, 2016

Related U.S. Application Data

(63) Continuation of application No. 14/988,670, filed on Jan. 5, 2016, which is a continuation of application No. 14/704,909, filed on May 5, 2015, now Pat. No. 9,235,898, which is a continuation of application No. 14/475,466, filed on Sep. 2, 2014, now Pat. No. 9,049,391, which is a continuation of application No. 12/935,504, filed on Sep. 29, 2010, now Pat. No. 8,902,321, filed as application No. PCT/US2009/044687 on May 20, 2009.

(60) Provisional application No. 61/054,694, filed on May 20, 2008.

Publication Classification

(51) Int. Cl.
     H04N 13/02 (2006.01)
     H04N 13/00 (2006.01)

(52) U.S. Cl.
     CPC ... H04N 13/0242 (2013.01); H04N 13/0257 (2013.01); H04N 13/0022 (2013.01); H04N 2013/0081 (2013.01)

(57) ABSTRACT

A camera array, an imaging device and/or a method for capturing images that employ a plurality of imagers fabricated on a substrate is provided. Each imager includes a plurality of pixels. The plurality of imagers include a first imager having first imaging characteristics and a second imager having second imaging characteristics. The images generated by the plurality of imagers are processed to obtain an enhanced image compared to the images captured by the individual imagers. Each imager may be associated with an optical element fabricated using a wafer level optics (WLO) technology.

[Cover figure: image processing pipeline block diagram showing upstream pipeline processing, image pixel correlation, parallax confirmation and measurement, and address and parallax phase offset compensation/calibration blocks; reproduced as FIG. 5 below.]

[Drawing sheets 1-8 contain FIGS. 1 through 7, described in the Brief Description of Drawings below: FIG. 1 (camera array 100 with imagers 1A through NM in a grid), FIGS. 2A-2B (camera array assembly 200/250 with lens elements 220, camera array 230, imagers 240), FIGS. 3A-3C (lens elements 310/320 of differing dimensions and chief ray angles), FIG. 4 (imaging system block diagram with upstream pipeline processing 420), FIG. 5 (image processing pipeline: upstream pipeline processing 510, image pixel correlation 514, parallax confirmation and measurement 518, parallax phase offset compensation, address and phase offset calibration 554/558, super-resolution 526, address conversion 546/548/572, downstream synthesized color image processing 564), FIGS. 6A-6E (camera array layouts of red R, green G, blue B, polychromatic P and near-IR I imagers), and FIG. 7 (flowchart: capture luma, near-IR and chroma images 710; perform normalization of captured images 714; perform parallax compensation of images 720; perform super-resolution processing to obtain super-resolved images 724; determine whether the lighting condition is better than preset parameters 728; if so, normalize the near-IR image with respect to the luma image 730, otherwise align super-resolved near-IR and luma images 734 and perform near-IR denoising of luma images 738; perform focus recovery 742; process super-resolution of near-IR and luma images 746; construct synthesized image 750).]

SYSTEMS AND METHODS FOR MEASURING DEPTH USING IMAGES CAPTURED BY A CAMERA ARRAY INCLUDING CAMERAS SURROUNDING A CENTRAL CAMERA

RELATED APPLICATION

[0001] This application is a continuation of U.S. patent application Ser. No. 14/988,670, entitled "Systems and Methods for Generating Depth Maps Using Light Focused on an Image Sensor by a Lens Element Array", filed Jan. 5, 2016, which is a continuation of U.S. patent application Ser. No. 14/704,909, entitled "Systems and Methods for Generating Depth Maps Using Light Focused on an Image Sensor by a Lens Element Array", filed May 5, 2015, which is a continuation of U.S. patent application Ser. No. 14/475,466, entitled "Capturing and Processing of Near-IR Images Including Occlusions Using Camera Arrays Incorporating Near-IR Light Sources", filed Sep. 2, 2014, which application is a continuation of U.S. patent application Ser. No. 12/935,504, entitled "Capturing and Processing of Images Using Monolithic Camera Array with Heterogeneous Imagers", which issued on Dec. 2, 2014 as U.S. Pat. No. 8,902,321, which application was a 35 U.S.C. 371 national stage application corresponding to Application No. PCT/US2009/044687 filed May 20, 2009, which claims priority to U.S. Provisional Patent Application No. 61/054,694 entitled "Monolithic Integrated Array of Heterogeneous Image Sensors," filed on May 20, 2008, which is incorporated by reference herein in its entirety.

FIELD OF THE INVENTION

[0002] The present invention is related to an image sensor including a plurality of heterogeneous imagers, more specifically to an image sensor with a plurality of wafer-level imagers having custom filters, sensors and optics of varying configurations.

BACKGROUND

[0003] Image sensors are used in cameras and other imaging devices to capture images. In a typical imaging device, light enters through an opening (aperture) at one end of the imaging device and is directed to an image sensor by an optical element such as a lens. In most imaging devices, one or more layers of optical elements are placed between the aperture and the image sensor to focus light onto the image sensor. The image sensor consists of pixels that generate signals upon receiving light via the optical element. Commonly used image sensors include CCD (charge-coupled device) image sensors and CMOS (complementary metal-oxide-semiconductor) sensors.

[0004] Filters are often employed in the image sensor to selectively transmit light of certain wavelengths onto pixels. A Bayer filter mosaic is often formed on the image sensor. The Bayer filter is a color filter array that arranges one of the RGB color filters on each of the color pixels. The Bayer filter pattern includes 50% green filters, 25% red filters and 25% blue filters. Since each pixel generates a signal representing the strength of a color component in the light rather than the full range of colors, demosaicing is performed to interpolate a set of red, green and blue values for each image pixel.

[0005] Image sensors are subject to various performance constraints, including, among others, dynamic range, signal to noise ratio (SNR) and low light sensitivity. The dynamic range is defined as the ratio of the maximum possible signal that can be captured by a pixel to the total noise signal. Typically, the well capacity of an image sensor limits the maximum possible signal that can be captured by the image sensor. The maximum possible signal in turn is dependent on the strength of the incident illumination and the duration of exposure (e.g., integration time, and shutter width). The dynamic range can be expressed as a dimensionless quantity in decibels (dB) as:

    DR = full well capacity / RMS noise        equation (1)

Typically, the noise level in the captured image sets the floor of the dynamic range. Thus, for an 8 bit image, the best case would be 48 dB assuming the RMS noise level is 1 bit. In reality, however, the RMS noise levels are higher than 1 bit, and this further reduces the dynamic range.

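As a quick check on the 48 dB figure above, here is a minimal sketch that evaluates equation (1) in decibels, assuming a hypothetical 8-bit sensor whose full well maps to 256 quantization levels and whose RMS noise is 1 level:

```python
import math

def dynamic_range_db(full_well_capacity: float, rms_noise: float) -> float:
    """Evaluate equation (1) as a ratio and express it in decibels.

    Signal quantities here are amplitudes, so dB = 20 * log10(ratio)."""
    return 20.0 * math.log10(full_well_capacity / rms_noise)

# Hypothetical 8-bit sensor: 256 levels of full well, 1 level of RMS noise.
print(dynamic_range_db(256, 1))  # ~48.2 dB, the best case cited above
print(dynamic_range_db(256, 2))  # RMS noise of 2 levels drops this to ~42 dB
```
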
entitled "Capturing and Processing of Images Using Mono- 10006] The signal to noise ratio(SNR)ofa captured image
lithic Camera Array with Heterogeneous Imagers", which is,to a great extent, a measure ofimage quality. In general, as
issued on Dec. 2, 2014 as U.S. Pat. No. 8,902,321, which more light is captured by the pixel, the higher the SNR. The
application was a 35 U.S.C. 371 national stage application SNR of a captured image is usually related to the light gath-
corresponding to Application No.PCT/U52009/044687 filed ering capability of the pixel.
May 20, 2009, which claims priority to U.S. Provisional 10007] Generally, Bayer filter sensors have low light sensi-
PatentApplication No.61/054,694 entitled "Monolithic Inte- tivity. At low light levels, each pixel's light gathering capa-
grated Array of Heterogeneous Image Sensors," filed on May bility is constrained by the low signal levels incident upon
20, 2008, which is incorporated by reference herein in its each pixel. In addition, the color filters over the pixel further
entirety. constrain the signal reaching the pixel. IR (Infrared) filters
also reduce the photo-response from near-IR signals, which
FIELD OF THE INVENTION can carry valuable information.
10002] The present invention is related to an image sensor 10008] These performance constraints ofimage sensors are
including a plurality ofheterogeneous imagers, more specifi- greatly magnified in cameras designed for mobile systems
cally to an image sensor with a plurality of wafer-level imag- due to the nature of design constraints. Pixels for mobile
ers having custom filters, sensors and optics of varying con- cameras are typically much smaller than the pixels of digital
figurations. still cameras (DSC). Due to limits in light gathering ability,
reduced SNR,limits in the dynamic range, and reduced sen-
BACKGROUND sitivity to low light scenes, the cameras in mobile cameras
show poor performance.
10003] Image sensors are used in cameras and other imag-
ing devices to capture images. In a typical imaging device, SUMMARY
light enters through an opening (aperture) at one end of the
imaging device and is directed to an image sensor by an 10009] Embodiments provide a camera array, an imaging
optical element such as a lens. In most imaging devices, one device including a camera array and/or a method for captur-
or more layers of optical elements are placed between the ing image that employ a plurality of imagers fabricated on a
aperture and the image sensor to focus light onto the image substrate where each imager includes a plurality of sensor
sensor. The image sensor consists of pixels that generate elements. The plurality of imagers include at least a first
signals upon receiving light via the optical element. Com- imager formed on a first location ofthe substrate and a second
monly used image sensors include CCD (charge-coupled imager formed on a second location ofthe substrate. The first
device) image sensors and CMOS (complementary metal- imager and the second imager may have the same imaging
oxide-semiconductor) sensors. characteristics or different imaging characteristics.
10004] Filters are often employed in the image sensor to 10010] In one embodiment,the first imaging characteristics
selectively transmit lights ofcertain wavelengths onto pixels. and the second imager have different imaging characteristics.
A Bayer filter mosaic is often formed on the image sensor. The imaging characteristics may include, among others, the
The Bayer filter is a color filter array that arranges one ofthe size of the imager, the type of pixels included in the imager,
RGB color filters on each ofthe color pixels. The Bayer filter the shape of the imager, filters associated with the imager,
pattern includes 50% green filters, 25% red filters and 25% exposure time ofthe imager, aperture size associated with the
blue filters. Since each pixel generates a signal representing imager, the configuration of the optical element associated
strength of a color component in the light and not the full with the imager, gain of the imager, the resolution of the
range ofcolors, demosaicing is performed to interpolate a set imager, and operational timing of the imager.
of red, green and blue values for each image pixel. 10011] In one embodiment,the first imager includes a filter
10005] The image sensors are subject to various perfor- for transmitting a light spectrum. The second imager also
mance constraints.The performance constraints for the image includes the same type offilter for transmitting the same light
sensors include,among others,dynamic range,signal to noise spectrum as the first imager but captures an image that is
sub-pixel phase shifted from an image captured by the first imager. The images from the first imager and the second imager are combined using a super-resolution process to obtain images of higher resolution.

[0012] In one embodiment, the first imager includes a first filter for transmitting a first light spectrum and the second imager includes a second filter for transmitting a second light spectrum. The images from the first and second imagers are then processed to obtain a higher quality image.

[0013] In one embodiment, lens elements are provided to direct and focus light onto the imagers. Each lens element focuses light onto one imager. Because each lens element is associated with one imager, each lens element may be designed and configured for a narrow light spectrum. Further, the thickness of the lens element may be reduced, decreasing the overall thickness of the camera array. The lens elements are fabricated using wafer level optics (WLO) technology.

[0014] In one embodiment, the plurality of imagers include at least one near-IR imager dedicated to receiving the near-IR (Infrared) spectrum. An image generated from the near-IR imager may be fused with images generated from other imagers with color filters to reduce noise and increase the quality of the images.

[0015] In one embodiment, the plurality of imagers may be associated with lens elements that provide a zooming capability. Different imagers may be associated with lenses of different focal lengths to have different fields-of-view and provide different levels of zooming capability. A mechanism may be provided to provide a smooth transition from one zoom level to another zoom level.

[0016] In one or more embodiments, the plurality of imagers is coordinated and operated to obtain at least one of a high dynamic range image, a panoramic image, a hyper-spectral image, distance to an object and a high frame rate video.

[0017] The features and advantages described in the specification are not all inclusive and, in particular, many additional features and advantages will be apparent to one of ordinary skill in the art in view of the drawings, specification, and claims. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter.

BRIEF DESCRIPTION OF DRAWINGS

[0018] FIG. 1 is a plan view of a camera array with a plurality of imagers, according to one embodiment.

[0019] FIG. 2A is a perspective view of a camera array with lens elements, according to one embodiment.

[0020] FIG. 2B is a cross-sectional view of a camera array, according to one embodiment.

[0021] FIGS. 3A and 3B are sectional diagrams illustrating changes in the heights of lens elements depending on changes in the dimensions of imagers, according to one embodiment.

[0022] FIG. 3C is a diagram illustrating chief ray angles varying depending on differing dimensions of the lens elements.

[0023] FIG. 4 is a functional block diagram for an imaging device, according to one embodiment.

[0024] FIG. 5 is a functional block diagram of an image processing pipeline module, according to one embodiment.

[0025] FIGS. 6A through 6E are plan views of camera arrays having different layouts of heterogeneous imagers, according to embodiments.

[0026] FIG. 7 is a flowchart illustrating a process of generating an enhanced image from lower resolution images captured by a plurality of imagers, according to one embodiment.

DETAILED DESCRIPTION

[0027] A preferred embodiment of the present invention is now described with reference to the figures where like reference numbers indicate identical or functionally similar elements. Also in the figures, the left most digits of each reference number correspond to the figure in which the reference number is first used.

[0028] Embodiments relate to using a distributed approach to capturing images using a plurality of imagers of different imaging characteristics. Each imager may be spatially shifted from another imager in such a manner that an imager captures an image that is shifted by a sub-pixel amount with respect to another image captured by another imager. Each imager may also include separate optics with different filters and operate with different operating parameters (e.g., exposure time). Distinct images generated by the imagers are processed to obtain an enhanced image. Each imager may be associated with an optical element fabricated using wafer level optics (WLO) technology.

[0029] A sensor element or pixel refers to an individual light sensing element in a camera array. The sensor element or pixel includes, among others, traditional CIS (CMOS Image Sensor), CCD (charge-coupled device), high dynamic range pixel, multispectral pixel and various alternatives thereof.

[0030] An imager refers to a two dimensional array of pixels. The sensor elements of each imager have similar physical properties and receive light through the same optical component. Further, the sensor elements in each imager may be associated with the same color filter.

[0031] A camera array refers to a collection of imagers designed to function as a unitary component. The camera array may be fabricated on a single chip for mounting or installing in various devices.

[0032] An array of camera arrays refers to an aggregation of two or more camera arrays. Two or more camera arrays may operate in conjunction to provide extended functionality over a single camera array.

[0033] Imaging characteristics of an imager refer to any characteristics or parameters of the imager associated with the capturing of images. The imaging characteristics may include, among others, the size of the imager, the type of pixels included in the imager, the shape of the imager, filters associated with the imager, the exposure time of the imager, the aperture size associated with the imager, the configuration of the optical element associated with the imager, the gain of the imager, the resolution of the imager, and the operational timing of the imager.

Structure of Camera Array

[0034] FIG. 1 is a plan view of a camera array 100 with imagers 1A through NM, according to one embodiment. The camera array 100 is fabricated on a semiconductor chip to include a plurality of imagers 1A through NM. Each of the imagers 1A through NM may include a plurality of pixels (e.g., 0.32 Mega pixels). In one embodiment, the imagers 1A through NM are arranged into a grid format as illustrated in FIG. 1. In other embodiments, the imagers are arranged in a non-grid format. For example, the imagers may be arranged in a circular pattern, zigzagged pattern or scattered pattern.

[0035] The camera array may include two or more types of heterogeneous imagers, each imager including two or more sensor elements or pixels. Each one of the imagers may have different imaging characteristics. Alternatively, there may be two or more different types of imagers where the same type of imagers share the same imaging characteristics.

[0036] In one embodiment, each imager 1A through NM has its own filter and/or optical element (e.g., lens). Specifically, each of the imagers 1A through NM or a group of imagers may be associated with spectral color filters to receive certain wavelengths of light. Example filters include a traditional filter used in the Bayer pattern (R, G, B or their complements C, M, Y), an IR-cut filter, a near-IR filter, a polarizing filter, and a custom filter to suit the needs of hyper-spectral imaging. Some imagers may have no filter to allow reception of both the entire visible spectrum and near-IR, which increases the imager's signal-to-noise ratio. The number of distinct filters may be as large as the number of imagers in the camera array. Further, each of the imagers 1A through NM or a group of imagers may receive light through lenses having different optical characteristics (e.g., focal lengths) or apertures of different sizes.

[0037] In one embodiment, the camera array includes other related circuitry. The other circuitry may include, among others, circuitry to control imaging parameters and sensors to sense physical parameters. The control circuitry may control imaging parameters such as exposure times, gain, and black level offset. The sensors may include dark pixels to estimate the dark current at the operating temperature. The dark current may be measured for on-the-fly compensation for any thermal creep that the substrate may suffer from.

[0038] In one embodiment, the circuit for controlling imaging parameters may trigger each imager independently or in a synchronized manner. The start of the exposure periods for the various imagers in the camera array (analogous to opening a shutter) may be staggered in an overlapping manner so that the scenes are sampled sequentially while several imagers are exposed to light at the same time. In a conventional video camera sampling a scene at N exposures per second, the exposure time per sample is limited to 1/N seconds. With a plurality of imagers, there is no such limit to the exposure time per sample because multiple imagers may be operated to capture images in a staggered manner.

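To make the staggering concrete, here is a minimal sketch (the imager count, sampling rate, and exposure time are hypothetical numbers, not values from the specification) that computes overlapping exposure windows; each imager may expose far longer than the 1/N-second budget of a single-imager camera:

```python
def staggered_exposure_schedule(num_imagers: int, samples_per_sec: float,
                                exposure_time: float) -> list[tuple[float, float]]:
    """Return (start, end) exposure windows in seconds for each imager.

    Starts are staggered by the sampling interval, so the scene is sampled
    every 1/samples_per_sec seconds even though each individual exposure
    is much longer and the windows overlap."""
    interval = 1.0 / samples_per_sec
    return [(k * interval, k * interval + exposure_time)
            for k in range(num_imagers)]

# Hypothetical: 5 imagers sampling at 30 samples/sec with 100 ms exposures.
for start, end in staggered_exposure_schedule(5, 30.0, 0.100):
    print(f"expose from {start * 1000:6.1f} ms to {end * 1000:6.1f} ms")
```
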
[0039] Each imager can be operated independently. Entire or most operations associated with each individual imager may be individualized. In one embodiment, a master setting is programmed and a deviation (i.e., offset or gain) from such master setting is configured for each imager. The deviations may reflect functions such as high dynamic range, gain settings, integration time settings, digital processing settings or combinations thereof. These deviations can be specified at a low level (e.g., deviation in the gain) or at a higher level (e.g., difference in the ISO number, which is then automatically translated to deltas for gain, integration time, or otherwise as specified by context/master control registers) for the particular camera array. By setting the master values and the deviations from the master values, higher levels of control abstraction can be achieved to facilitate a simpler programming model for many operations. In one embodiment, the parameters for the imagers are arbitrarily fixed for a target application. In another embodiment, the parameters are configured to allow a high degree of flexibility and programmability.

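One way to picture this master-plus-deviation programming model is as a small settings structure; the names below (MasterSettings, Deviation, resolve) are illustrative stand-ins, not register names from the specification:

```python
from dataclasses import dataclass

@dataclass
class MasterSettings:
    gain: float = 1.0             # linear analog gain
    integration_ms: float = 33.0  # integration time in milliseconds

@dataclass
class Deviation:
    gain_offset: float = 0.0
    integration_offset_ms: float = 0.0

def resolve(master: MasterSettings, dev: Deviation) -> MasterSettings:
    """Combine the programmed master setting with one imager's deviation."""
    return MasterSettings(gain=master.gain + dev.gain_offset,
                          integration_ms=master.integration_ms
                                         + dev.integration_offset_ms)

master = MasterSettings(gain=2.0, integration_ms=20.0)
# Per-imager deviations, e.g., one imager exposed longer for HDR capture.
deviations = [Deviation(), Deviation(integration_offset_ms=40.0)]
print([resolve(master, d) for d in deviations])
```
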
[0040] In one embodiment, the camera array is designed as a drop-in replacement for existing camera image sensors used in cell phones and other mobile devices. For this purpose, the camera array may be designed to be physically compatible with conventional image sensors of approximately the same resolution, although the achieved resolution of the camera array may exceed conventional image sensors in many photographic situations. Taking advantage of the increased performance, the camera array of the embodiment may include fewer pixels to obtain equal or better quality images compared to conventional image sensors. Alternatively, the size of the pixels in the imager may be reduced compared to pixels in conventional image sensors while achieving comparable results.

[0041] In order to match the raw pixel count of a conventional image sensor without increasing silicon area, the logic overhead for the individual imagers is preferably constrained in the silicon area. In one embodiment, much of the pixel control logic is a single collection of functions common to all or most of the imagers with a smaller set of functions applicable to each imager. In this embodiment, the conventional external interface for the imager may be used because the data output does not increase significantly for the imagers.

[0042] In one embodiment, the camera array including the imagers replaces a conventional image sensor of M megapixels. The camera array includes N x N imagers, each imager including M/N^2 megapixels of pixels. Each imager in the camera array also has the same aspect ratio as the conventional image sensor being replaced. Table 1 lists example configurations of camera arrays according to the present invention replacing a conventional image sensor.

TABLE 1

  Conventional Image Sensor    Camera Array Including Imagers
  Total    Effective    Total    No. of Horiz.  No. of Vert.  Imager   Super-Res.  Effective
  Mpixels  Resolution   Mpixels  Imagers        Imagers       Mpixels  Factor      Resolution
  8        3.2          8        5              5             0.32     3.2         3.2
                        8        4              4             0.50     2.6         3.2
                        8        3              3             0.89     1.9         3.2
  5        2.0          5        5              5             0.20     3.2         2.0
                        5        4              4             0.31     2.6         2.0
                        5        3              3             0.56     1.9         2.0
  3        1.2          3        5              5             0.12     3.2         1.2
                        3        4              4             0.19     2.6         1.2
                        3        3              3             0.33     1.9         1.2

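The per-imager pixel counts in Table 1 follow directly from the M/N^2 relationship of paragraph [0042]; a minimal check (the Super-Resolution Factors and Effective Resolutions are taken as given from the table, not computed here):

```python
def imager_mpixels(total_mpixels: float, n: int) -> float:
    """Pixels per imager when an M-megapixel sensor is replaced by an NxN array."""
    return total_mpixels / (n * n)

for total in (8, 5, 3):
    for n in (5, 4, 3):
        print(f"{total} Mpixel sensor, {n}x{n} array -> "
              f"{imager_mpixels(total, n):.2f} Mpixels per imager")
# e.g., 8 Mpixels with a 5x5 array -> 0.32 Mpixels per imager, matching Table 1.
```
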
[0043] The Super-Resolution Factors in Table 1 are estimates and the Effective Resolution values may differ based on the actual Super-Resolution factors achieved by the processing.

[0044] The number of imagers in the camera array may be determined based on, among other factors, (i) resolution, (ii) parallax, (iii) sensitivity, and (iv) dynamic range. A first factor for the size of the imager is the resolution. From a resolution point of view, the preferred number of imagers ranges from 2x2 to 6x6 because an array size larger than 6x6 is likely to destroy frequency information that cannot be recreated by the super-resolution process. For example, 8 Megapixel resolution with a 2x2 array will require each imager to have 2 Megapixels. Similarly, 8 Megapixel resolution with a 5x5 array will require each imager to have 0.32 Megapixels.

[0045] A second factor that may constrain the number of imagers is the issue of parallax and occlusion. With respect to an object captured in an image, the portion of the background scene that is occluded from the view of the imager is called the "occlusion set." When two imagers capture the object from two different locations, the occlusion set of each imager is different. Hence, there may be scene pixels captured by one imager but not the other. To resolve this issue of occlusion, it is desirable to include a certain minimal set of imagers for a given type of imager.

[0046] A third factor that may put a lower bound on the number of imagers is the issue of sensitivity in low light conditions. To improve low light sensitivity, imagers for detecting the near-IR spectrum may be needed. The number of imagers in the camera array may need to be increased to accommodate such near-IR imagers.

[0047] A fourth factor in determining the size of the imager is dynamic range. To provide dynamic range in the camera array, it is advantageous to provide several imagers of the same filter type (chroma or luma). Each imager of the same filter type may then be operated with different exposures simultaneously. The images captured with different exposures may be processed to generate a high dynamic range image.

[0048] Based on these factors, the preferred number of imagers is 2x2 to 6x6. 4x4 and 5x5 configurations are more preferable than 2x2 and 3x3 configurations because the former are likely to provide a sufficient number of imagers to resolve occlusion issues, increase sensitivity and increase the dynamic range. At the same time, the computational load required to recover resolution from these array sizes will be modest in comparison to that required in the 6x6 array. Arrays larger than 6x6 may, however, be used to provide additional features such as optical zooming and multispectral imaging.

[0049] Another consideration is the number of imagers dedicated to luma sampling. By ensuring that the imagers in the array dedicated to near-IR sampling do not reduce the achieved resolution, the information from the near-IR images is added to the resolution captured by the luma imagers. For this purpose, at least 50% of the imagers may be used for sampling the luma and/or near-IR spectra. In one embodiment with 4x4 imagers, 4 imagers sample luma, 4 imagers sample near-IR, and the remaining 8 imagers sample two chroma (Red and Blue). In another embodiment with 5x5 imagers, 9 imagers sample luma, 8 imagers sample near-IR, and the remaining 8 imagers sample two chroma (Red and Blue). Further, the imagers with these filters may be arranged symmetrically within the camera array to address occlusion due to parallax.

[0050] In one embodiment, the imagers in the camera array are spatially separated from each other by a predetermined distance. By increasing the spatial separation, the parallax between the images captured by the imagers may be increased. The increased parallax is advantageous where more accurate distance information is important. Separation between two imagers may also be increased to approximate the separation of a pair of human eyes. By approximating the separation of human eyes, a realistic stereoscopic 3D image may be provided to present the resulting image on an appropriate 3D display device.

[0051] In one embodiment, multiple camera arrays are provided at different locations on a device to overcome space constraints. One camera array may be designed to fit within a restricted space while another camera array may be placed in another restricted space of the device. For example, if a total of 20 imagers are required but the available space allows only a camera array of 1x10 imagers to be provided on either side of a device, two camera arrays each including 10 imagers may be placed on the available space at both sides of the device. Each camera array may be fabricated on a substrate and be secured to a motherboard or other parts of a device. The images collected from the multiple camera arrays may be processed to generate images of desired resolution and performance.

[0052] A design for a single imager may be applied to different camera arrays each including other types of imagers. Other variables in the camera array such as spatial distances, color filters and combination with the same or other sensors may be modified to produce a camera array with differing imaging characteristics. In this way, a diverse mix of camera arrays may be produced while maintaining the benefits from economies of scale.

Wafer Level Optics Integration

[0053] In one embodiment, the camera array employs wafer level optics (WLO) technology. WLO is a technology that molds optics on glass wafers followed by packaging of the optics directly with the imager into a monolithic integrated module. The WLO procedure may involve, among other procedures, using a diamond-turned mold to create each plastic lens element on a glass substrate.

[0054] FIG. 2A is a perspective view of a camera array assembly 200 with wafer level optics 210 and a camera array 230, according to one embodiment. The wafer level optics 210 includes a plurality of lens elements 220, each lens element 220 covering one of twenty-five imagers 240 in the camera array 230. Note that the camera array assembly 200 has an array of smaller lens elements that occupy much less space compared to a single large lens covering the entire camera array 230.

[0055] FIG. 2B is a sectional view of a camera array assembly 250, according to one embodiment. The camera array assembly 250 includes a top lens wafer 262, a bottom lens wafer 268, a substrate 278 with multiple imagers formed thereon, and spacers 258, 264, 270. The camera array assembly 250 is packaged within an encapsulation 254. A top spacer 258 is placed between the encapsulation 254 and the top lens wafer 262. Multiple optical elements 288 are formed on the top lens wafer 262. A middle spacer 264 is placed between the top lens wafer 262 and the bottom lens wafer 268. Another set of optical elements 286 is formed on the bottom lens wafer 268. A bottom spacer 270 is placed between the bottom lens wafer 268 and the substrate 278. Through-silicon vias 274 are also provided as paths for transmitting signals from the imagers.
The top lens wafer 262 may be partially coated with light blocking materials 284 (e.g., chromium) to block off light. The portions of the top lens wafer 262 not coated with the blocking materials 284 serve as apertures through which light passes to the bottom lens wafer 268 and the imagers. In the embodiment of FIG. 2B, filters 282 are formed on the bottom lens wafer 268. Light blocking materials 280 (e.g., chromium) may also be coated on the bottom lens wafer 268 and the substrate 278 to function as an optical isolator. The bottom surface of the substrate is covered with a backside redistribution layer ("RDL") and solder balls 276.

[0056] In one embodiment, the camera array assembly 250 includes a 5x5 array of imagers. The camera array 250 has a width W of 7.2 mm and a length of 8.6 mm. Each imager in the camera array may have a width S of 1.4 mm. The total height t1 of the optical components is approximately 1.26 mm and the total height t2 of the camera array assembly is less than 2 mm.

[0057] FIGS. 3A and 3B are diagrams illustrating changes in the height t of a lens element pursuant to changes in dimensions in an x-y plane. A lens element 320 in FIG. 3B is scaled by 1/n compared to a lens element 310 in FIG. 3A. As the diameter L/n of the lens element 320 is smaller than the diameter L by a factor of n, the height t/n of the lens element 320 is also smaller than the height t of the lens element 310 by a factor of n. Hence, by using an array of smaller lens elements, the height of the camera array assembly can be reduced significantly. The reduced height of the camera array assembly may be used to design less aggressive lenses having better optical properties such as improved chief ray angle, reduced distortion, and improved color aberration.

[0058] FIG. 3C illustrates improving a chief ray angle (CRA) by reducing the thickness of the camera array assembly. CRA1 is the chief ray angle for a single lens covering an entire camera array. Although the chief ray angle can be reduced by increasing the distance between the camera array and the lens, the thickness constraints impose limits on increasing the distance. Hence, the CRA1 for a camera array having a single lens element is large, resulting in reduced optical performance. CRA2 is the chief ray angle for an imager in the camera array that is scaled in thickness as well as other dimensions. The CRA2 remains the same as the CRA1 of the conventional camera array and results in no improvement in the chief ray angle. By modifying the distance between the imager and the lens element as illustrated in FIG. 3C, however, the chief ray angle CRA3 in the camera array assembly may be reduced compared to CRA1 or CRA2, resulting in better optical performance. As described above, the camera arrays according to the present invention have reduced thickness requirements, and therefore, the distance between the lens element and the camera array may be increased to improve the chief ray angle.

[0059] In addition, the lens elements are subject to less rigorous design constraints yet produce better or equivalent performance compared to a conventional lens element covering a wide light spectrum because each lens element may be designed to direct a narrow band of light. For example, an imager receiving the visible or near-IR spectrum may have a lens element specifically optimized for this spectral band of light. For imagers detecting other light spectra, the lens element may have differing focal lengths so that the focal plane is the same for different spectral bands of light. The matching of the focal plane across different wavelengths of light increases the sharpness of the image captured at the imager and reduces longitudinal chromatic aberration.

[0060] Other advantages of smaller lens elements include, among others, reduced cost, reduced amount of materials, and a reduction in the number of manufacturing steps. By providing n^2 lenses that are 1/n the size in the x and y dimensions (and thus 1/n the thickness), the wafer size for producing the lens elements may also be reduced. This reduces the cost and the amount of materials considerably. Further, the number of lens substrates is reduced, which results in a reduced number of manufacturing steps and reduced attendant yield costs. The placement accuracy required to register the lens array to the imagers is typically no more stringent than in the case of a conventional imager because the pixel size for the camera array according to the present invention may be substantially the same as in a conventional image sensor.

[0061] In one embodiment, the WLO fabrication process includes: (i) incorporating lens element stops by plating the lens element stops onto the substrate before lens molding, and (ii) etching holes in the substrate and performing two-sided molding of lenses through the substrate. The etching of holes in the substrate is advantageous because index mismatch is not caused between plastic and substrate. In this way, a light absorbing substrate that forms natural stops for all lens elements (similar to painting lens edges black) may be used.

[0062] In one embodiment, filters are part of the imager. In another embodiment, filters are part of a WLO subsystem.

Imaging System and Processing Pipeline

[0063] FIG. 4 is a functional block diagram illustrating an imaging system 400, according to one embodiment. The imaging system 400 may include, among other components, the camera array 410, an image processing pipeline module 420 and a controller 440. The camera array 410 includes two or more imagers, as described above in detail with reference to FIGS. 1 and 2. Images 412 are captured by the two or more imagers in the camera array 410.

[0064] The controller 440 is hardware, software, firmware or a combination thereof for controlling various operation parameters of the camera array 410. The controller 440 receives inputs 446 from a user or other external components and sends operation signals 442 to control the camera array 410. The controller 440 may also send information 444 to the image processing pipeline module 420 to assist in processing of the images 412.

[0065] The image processing pipeline module 420 is hardware, firmware, software or a combination thereof for processing the images received from the camera array 410. The image processing pipeline module 420 processes multiple images 412, for example, as described below in detail with reference to FIG. 5. The processed image 422 is then sent for display, storage, transmittal or further processing.

[0066] FIG. 5 is a functional block diagram illustrating the image processing pipeline module 420, according to one embodiment. The image processing pipeline module 420 may include, among other components, an upstream pipeline processing module 510, an image pixel correlation module 514, a parallax confirmation and measurement module 518, a parallax compensation module 522, a super-resolution module 526, an address conversion module 530, an address and phase offset calibration module 554, and a downstream color processing module 564.

[0067] The address and phase offset calibration module 554 is a storage device for storing calibration data produced
during camera array characterization in the manufacturing process or a subsequent recalibration process. The calibration data indicates the mapping between the addresses of physical pixels 572 in the imagers and the logical addresses 546, 548 of an image.

[0068] The address conversion module 530 performs normalization based on the calibration data stored in the address and phase offset calibration module 554. Specifically, the address conversion module 530 converts "physical" addresses of the individual pixels in the image to "logical" addresses 548 of the individual pixels in the imagers or vice versa. In order for super-resolution processing to produce an image of enhanced resolution, the phase difference between corresponding pixels in the individual imagers needs to be resolved. The super-resolution process may assume that for each pixel in the resulting image the set of input pixels from each of the imagers is consistently mapped and that the phase offset for each imager is already known with respect to the position of the pixel in the resulting image. The address conversion module 530 resolves such phase differences by converting the physical addresses in the images 412 into logical addresses 548 of the resulting image for subsequent processing.

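A minimal sketch of this physical-to-logical conversion, assuming the calibration data reduces to a per-imager sub-pixel phase offset on the output grid (the offsets and upscale factor here are hypothetical placeholders, not values from the specification):

```python
from typing import Dict, Tuple

# Hypothetical calibration table: per-imager (x, y) phase offsets, in units
# of high-resolution grid pixels, measured during characterization.
CALIBRATION: Dict[int, Tuple[float, float]] = {0: (0.0, 0.0), 1: (0.5, 0.25)}

def physical_to_logical(imager_id: int, px: int, py: int,
                        upscale: int = 2) -> Tuple[float, float]:
    """Map a physical pixel address in one imager to a logical address
    on the super-resolved output grid, applying the calibrated offset."""
    ox, oy = CALIBRATION[imager_id]
    return (px * upscale + ox, py * upscale + oy)

print(physical_to_logical(0, 10, 4))  # (20.0, 8.0)
print(physical_to_logical(1, 10, 4))  # (20.5, 8.25): sub-pixel shifted view
```
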
[0069] The images 412 captured by the imagers 540 are provided to the upstream pipeline processing module 510. The upstream pipeline processing module 510 may perform one or more of Black Level calculation and adjustments, fixed noise compensation, optical PSF (point spread function) deconvolution, noise reduction, and crosstalk reduction. After the image is processed by the upstream pipeline processing module 510, an image pixel correlation module 514 performs calculations to account for parallax that becomes more apparent as objects being captured approach the camera array. Specifically, the image pixel correlation module 514 aligns portions of images captured by different imagers to compensate for the parallax. In one embodiment, the image pixel correlation module 514 compares the difference between the average values of neighboring pixels with a threshold and flags the potential presence of parallax when the difference exceeds the threshold. The threshold may change dynamically as a function of the operating conditions of the camera array. Further, the neighborhood calculations may also be adaptive and reflect the particular operating conditions of the selected imagers.

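A minimal sketch of that neighborhood comparison, assuming grayscale images as NumPy arrays and a fixed window size and threshold (both hypothetical; the specification lets the threshold vary with operating conditions):

```python
import numpy as np

def flag_parallax(img_a: np.ndarray, img_b: np.ndarray,
                  win: int = 4, threshold: float = 8.0) -> np.ndarray:
    """Flag windows whose mean values differ between two imagers by more
    than a threshold, indicating potential parallax-induced misalignment."""
    h, w = img_a.shape
    flags = np.zeros((h // win, w // win), dtype=bool)
    for by in range(h // win):
        for bx in range(w // win):
            ys, xs = by * win, bx * win
            mean_a = img_a[ys:ys + win, xs:xs + win].mean()
            mean_b = img_b[ys:ys + win, xs:xs + win].mean()
            flags[by, bx] = abs(mean_a - mean_b) > threshold
    return flags

rng = np.random.default_rng(0)
scene = rng.integers(0, 255, (64, 64)).astype(float)
shifted = np.roll(scene, 2, axis=1)  # crude stand-in for a parallax shift
print(flag_parallax(scene, shifted).sum(), "windows flagged")
```
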
resolution images, as described below in detail. The synthe-
10070] The image is then processed by the parallax confr- sized image 422 may then be fed to the downstream color
mation and measurement module 518 to detect and meter the
processing module 564 to perform one or more ofthe follow-
parallax. In one embodiment, parallax detection is accom-
ing operations:focus recover, white balance,color correction,
plished by a running pixel correlation monitor. This operation
gamma correction, RGB to YUV correction, edge-aware
takes place in logical pixel space across the imagers with
sharpening, contrast enhancement and compression.
similar integration time conditions. When the scene is at
10074] The image processing pipeline module 420 may
practical infinity, the data from the imagers is highly corre-
include components for additional processing of the image.
lated and subject only to noise-based variations. When an
For example,the image processing pipeline module 420 may
object is close enough to the camera, however, a parallax
include a correction module for correcting abnormalities in
effect is introduced that changes the correlation between the
images caused by a single pixel defect or a cluster of pixel
imagers.Due to the spatial layout ofthe imagers,the nature of
defects. The correction module may be embodied on the same
the parallax-induced change is consistent across all imagers.
chip as the camera array, as a component separate from the
Within the limits of the measurement accuracy, the correla-
camera array or as a part ofthe super-resolution module 526.
tion difference between any pair of imagers dictates the dif-
ference between any other pair ofimagers and the differences
Super-Resolution Processing
across the other imagers. This redundancy of information
enables highly accurate parallax confirmation and measure- 10075] In one embodiment, the super-resolution module
ment by performing the same or similar calculations on other 526 generates a higher resolution synthesized image by pro-
pairs of imagers. If parallax is present in the other pairs, the cessing low resolution images captured by the imagers 540.
parallax should occur at roughly the same physical location of The overall image quality ofthe synthesized image is higher
than images captured from any one of the imagers individually. In other words, the individual imagers operate synergistically, each contributing to higher quality images using their ability to capture a narrow part of the spectrum without sub-sampling. The image formation associated with the super-resolution techniques may be expressed as follows:

    y_k = W_k x + n_k, for all k = 1 ... p        equation (2)

where W_k represents the contribution of the HR scene (x) (via blurring, motion, and sub-sampling) to each of the LR images (y_k) captured on each of the k imagers and n_k is the noise contribution.

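To make equation (2) concrete, here is a minimal NumPy sketch of the forward model for a 1-D scene, assuming each W_k is simply "shift by a sub-pixel amount, blur, then decimate" (the shift values, blur kernel and noise level are illustrative assumptions, not parameters from the specification):

```python
import numpy as np

def forward_model(x: np.ndarray, shift: int, factor: int,
                  noise_sigma: float, rng) -> np.ndarray:
    """One LR observation y_k = W_k x + n_k: shift, 3-tap blur, decimate."""
    shifted = np.roll(x, shift)
    blurred = np.convolve(shifted, [0.25, 0.5, 0.25], mode="same")
    return blurred[::factor] + rng.normal(0.0, noise_sigma, len(x) // factor)

rng = np.random.default_rng(1)
x = np.sin(np.linspace(0, 4 * np.pi, 64))          # toy HR scene
# p = 4 imagers, each offset by a different sub-(LR)-pixel amount.
observations = [forward_model(x, k, 4, 0.01, rng) for k in range(4)]
print([y.shape for y in observations])  # four 16-sample LR images
```

Because each observation is shifted by a fraction of an LR pixel, the four LR images sample distinct phases of the HR scene, which is exactly the new information the super-resolution process exploits.
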
[0076] FIGS. 6A through 6E illustrate various configurations of imagers for obtaining a high resolution image through a super-resolution process, according to embodiments of the present invention. In FIGS. 6A through 6E, "R" represents an imager having a red filter, "G" represents an imager having a green filter, "B" represents an imager having a blue filter, "P" represents a polychromatic imager having sensitivity across the entire visible spectra and near-IR spectrum, and "I" represents an imager having a near-IR filter. The polychromatic imager may sample the image from all parts of the visible spectra and the near-IR region (i.e., from 650 nm to 800 nm). In the embodiment of FIG. 6A, the center columns and rows of the imagers include polychromatic imagers. The remaining areas of the camera array are filled with imagers having green filters, blue filters, and red filters. The embodiment of FIG. 6A does not include any imagers for detecting the near-IR spectrum alone.

[0077] The embodiment of FIG. 6B has a configuration similar to conventional Bayer filter mapping. This embodiment does not include any polychromatic imagers or near-IR imagers. As described above in detail with reference to FIG. 1, the embodiment of FIG. 6B is different from the conventional Bayer filter configuration in that each color filter is mapped to each imager instead of being mapped to an individual pixel.

[0078] FIG. 6C illustrates an embodiment where the polychromatic imagers form a symmetric checkerboard pattern. FIG. 6D illustrates an embodiment where four near-IR imagers are provided. FIG. 6E illustrates an embodiment with irregular mapping of imagers. The embodiments of FIGS. 6A through 6E are merely illustrative and various other layouts of imagers can also be used.

[0079] The use of polychromatic imagers and near-IR imagers is advantageous because these sensors may capture high quality images in low lighting conditions. The images captured by the polychromatic imager or the near-IR imager are used to denoise the images obtained from the regular color imagers.

[0080] The premise of increasing resolution by aggregating multiple low resolution images is based on the fact that the different low resolution images represent slightly different viewpoints of the same scene. If the LR images are all shifted by integer units of a pixel, then each image contains essentially the same information. Therefore, there is no new information in the LR images that can be used to create the HR image. In the imagers according to embodiments, the layout of the imagers may be preset and controlled so that each imager in a row or a column is a fixed sub-pixel distance from its neighboring imagers. The wafer level manufacturing and packaging process allows accurate formation of imagers to attain the sub-pixel precisions required for the super-resolution processing.

[0081] An issue of separating the spectral sensing elements into different imagers is parallax caused by the physical separation of the imagers. By ensuring that the imagers are symmetrically placed, at least two imagers can capture the pixels around the edge of a foreground object. In this way, the pixels around the edge of a foreground object may be aggregated to increase resolution as well as avoiding any occlusions. Another issue related to parallax is the sampling of color. The issue of sampling the color may be reduced by using parallax information in the polychromatic imagers to improve the accuracy of the sampling of color from the color filtered imagers.

[0082] In one embodiment, near-IR imagers are used to determine relative luminance differences compared to a visible spectra imager. Objects having differing material reflectivity result in differences in the images captured by the visible spectra and the near-IR spectra. At low lighting conditions, the near-IR imager exhibits a higher signal to noise ratio. Therefore, the signals from the near-IR sensor may be used to enhance the luminance image. The transferring of details from the near-IR image to the luminance image may be performed before aggregating spectral images from different imagers through the super-resolution process. In this way, edge information about the scene may be improved to construct edge-preserving images that can be used effectively in the super-resolution process. The advantage of using near-IR imagers is apparent from equation (2), where any improvement in the estimate for the noise (i.e., n) leads to a better estimate of the original HR scene (x).

[0083] FIG. 7 is a flowchart illustrating a process of generating an HR image from LR images captured by a plurality of imagers, according to one embodiment. First, luma images, near-IR images and chroma images are captured 710 by imagers in the camera array. Then normalization is performed 714 on the captured images to map the physical addresses of the imagers to logical addresses in the enhanced image. Parallax compensation is then performed 720 to resolve any differences in the fields-of-view of the imagers due to spatial separations between the imagers. Super-resolution processing is then performed 724 to obtain super-resolved luma images, super-resolved near-IR images, and super-resolved chroma images.

[0084] Then it is determined 728 if the lighting condition is better than a preset parameter. If the lighting condition is better than the parameter, the process proceeds to normalize 730 a super-resolved near-IR image with respect to a super-resolved luma image. A focus recovery is then performed 742. In one embodiment, the focus recovery is performed 742 using PSF (point spread function) deblurring per each channel. Then the super-resolution is processed 746 based on the near-IR images and the luma images. A synthesized image is then constructed 750.

[0085] If it is determined 728 that the lighting condition is not better than the preset parameter, the super-resolved near-IR images and luma images are aligned 734. Then the super-resolved luma images are denoised 738 using the near-IR super-resolved images. Then the process proceeds to performing focus recovery 742 and repeats the same process as when the lighting condition is better than the preset parameter. Then the process terminates.

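The branch structure of paragraphs [0083]-[0085] is easier to see as a short control-flow skeleton; the step strings map to the flowchart reference numerals, and the lighting/threshold inputs are hypothetical placeholders for the preset parameter test:

```python
def fig7_flow(lighting: float, threshold: float) -> list[str]:
    """Skeleton of the FIG. 7 control flow; each entry names a flowchart step."""
    steps = ["capture luma/near-IR/chroma images (710)",
             "normalize captured images (714)",
             "parallax compensation (720)",
             "super-resolution processing (724)"]
    if lighting > threshold:                       # decision 728
        steps.append("normalize near-IR to luma (730)")
    else:
        steps.append("align super-resolved near-IR and luma (734)")
        steps.append("denoise luma with near-IR (738)")
    steps += ["focus recovery (742)",
              "near-IR/luma super-resolution (746)",
              "construct synthesized image (750)"]
    return steps

print("\n".join(fig7_flow(lighting=0.2, threshold=0.5)))  # low-light branch
```
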
boring imagers. The wafer level manufacturing and packag- Image Fusion of Color Images with Near-IR Images
ing process allows accurate formation ofimagers to attain the 10086] The spectral response ofCMOS imagers is typically
sub-pixel precisions required for the super-resolution pro- very good in the near-IR regions covering 650 nm to 800 nm
cessing. and reasonably good between 800nm and 1000 nm.Although
near-IR images have no chroma information, information in this spectral region is useful in low lighting conditions because the near-IR images are relatively free of noise. Hence, the near-IR images may be used to denoise color images under low lighting conditions.

[0087] In one embodiment, an image from a near-IR imager is fused with another image from a visible light imager. Before proceeding with the fusion, a registration is performed between the near-IR image and the visible light image to resolve differences in viewpoints. The registration process may be performed in an offline, one-time, processing step. After the registration is performed, the luminance information on the near-IR image is interpolated to grid points that correspond to each grid point on the visible light image.

[0088] After the pixel correspondence between the near-IR image and the visible light image is established, a denoising and detail transfer process may be performed. The denoising process allows transfer of signal information from the near-IR image to the visible light image to improve the overall SNR of the fused image. The detail transfer ensures that edges in the near-IR image and the visible light image are preserved and accentuated to improve the overall visibility of objects in the fused image.

[0089] In one embodiment, a near-IR flash may serve as a near-IR light source during capturing of an image by the near-IR imagers. Using the near-IR flash is advantageous, among other reasons, because (i) harsh lighting on objects of interest may be prevented, (ii) the ambient color of the object may be preserved, and (iii) the red-eye effect may be prevented.

[0090] In one embodiment, a visible light filter that allows only near-IR rays to pass through is used to further optimize the optics for near-IR imaging. The visible light filter improves the near-IR optics transfer function because the light filter results in sharper details in the near-IR image. The details may then be transferred to the visible light images using a dual bilateral filter as described, for example, in Eric P. Bennett et al., "Multispectral Video Fusion," Computer Graphics (ACM SIGGRAPH Proceedings) (Jul. 25, 2006), which is incorporated by reference herein in its entirety.

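As one illustration of NIR-guided denoising in the spirit of the dual bilateral filter cited above (a simplified cross/joint bilateral sketch, not Bennett et al.'s exact algorithm; the window radius and sigmas are arbitrary assumptions):

```python
import numpy as np

def cross_bilateral(noisy: np.ndarray, guide: np.ndarray, radius: int = 2,
                    sigma_s: float = 1.5, sigma_r: float = 10.0) -> np.ndarray:
    """Denoise `noisy` (visible luminance) with range weights taken from
    the cleaner `guide` (near-IR) image, preserving guide-image edges."""
    h, w = noisy.shape
    npad = np.pad(noisy, radius, mode="reflect")
    gpad = np.pad(guide, radius, mode="reflect")
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))
    out = np.empty_like(noisy, dtype=float)
    for y in range(h):
        for x in range(w):
            nwin = npad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            gwin = gpad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            rng_w = np.exp(-(gwin - gpad[y + radius, x + radius])**2
                           / (2 * sigma_r**2))
            weights = spatial * rng_w
            out[y, x] = (weights * nwin).sum() / weights.sum()
    return out

rng = np.random.default_rng(2)
nir = np.tile(np.linspace(0, 100, 32), (32, 1))   # clean guide with a ramp
visible = nir + rng.normal(0, 5, nir.shape)       # noisy visible luminance
print(abs(cross_bilateral(visible, nir) - nir).mean()
      < abs(visible - nir).mean())                # True: error reduced
```
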
Dynamic Range Determination by Differing Exposures at Imagers

[0091] An auto-exposure (AE) algorithm is important to obtaining an appropriate exposure for the scene to be captured. The design of the AE algorithm affects the dynamic range of captured images. The AE algorithm determines an exposure value that allows the acquired image to fall in the linear region of the camera array's sensitivity range. The linear region is preferred because a good signal-to-noise ratio is obtained in this region. If the exposure is too low, the picture becomes under-saturated, while if the exposure is too high, the picture becomes over-saturated. In conventional cameras, an iterative process is used to reduce the difference between measured picture brightness and a previously defined brightness below a threshold. This iterative process requires a large amount of time for convergence and sometimes results in an unacceptable shutter delay.

[0092] In one embodiment, the picture brightness of images captured by a plurality of imagers is independently measured. Specifically, a plurality of imagers are set to capture images with different exposures to reduce the time for computing the adequate exposure. For example, in a camera array with 5x5 imagers where 8 luma imagers and 9 near-IR imagers are provided, each of the imagers may be set with a different exposure.

[0093] The exposures captured by the plurality of imagers are analyzed based on how many exposures are needed at the subsequent time instant. The ability to query multiple exposures simultaneously in the range of possible exposures accelerates the search compared to the case where only one exposure is tested at once. By reducing the processing time for determining the adequate exposure, shutter delays and shot-to-shot lags may be reduced.
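A minimal sketch of the parallel exposure search of paragraphs [0092]-[0093], assuming pre-registered frames and a simple mean-brightness criterion; the selection rule and all names are illustrative, not taken from the patent.

```python
# Pick, from simultaneously captured trial exposures, the one whose mean
# brightness lands closest to a mid-tone target, instead of iterating
# exposure shot after shot as a conventional AE loop would.
import numpy as np

def select_exposure(frames, exposures, target=0.5):
    """frames: list of float images in [0, 1], one per trial exposure;
    exposures: the exposure value used for each frame."""
    errors = [abs(float(np.mean(f)) - target) for f in frames]
    best = int(np.argmin(errors))
    return exposures[best], frames[best]
```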
[0094] In one embodiment, the HDR image is synthesized from multiple exposures by combining the images after linearizing the imager response for each exposure. The images from the imagers may be registered before combining to account for the difference in the viewpoints of the imagers.
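The merge described in paragraph [0094] can be sketched as a weighted, exposure-normalized average in the spirit of Debevec et al.; the gamma-curve linearization and hat-shaped weighting below are common simplifications, not the patent's specified response model.

```python
# Simplified Debevec-style HDR merge over registered exposures.
import numpy as np

def merge_hdr(frames, exposure_times, gamma=2.2, eps=1e-6):
    """frames: registered float images in [0, 1]; returns a radiance map."""
    acc = np.zeros_like(frames[0], dtype=np.float64)
    wsum = np.zeros_like(acc)
    for img, t in zip(frames, exposure_times):
        linear = img.astype(np.float64) ** gamma   # undo gamma encoding
        w = 1.0 - np.abs(2.0 * img - 1.0)          # hat weight: trust mid-tones
        acc += w * linear / t                      # exposure-normalized radiance
        wsum += w
    return acc / (wsum + eps)
```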
[0095] In one embodiment, at least one imager includes HDR pixels to generate HDR images. HDR pixels are specialized pixels that capture high dynamic range scenes. Although HDR pixels show superior performance compared to other pixels, HDR pixels show poor performance in low lighting conditions in comparison with near-IR imagers. To improve performance in low lighting conditions, signals from the near-IR imagers may be used in conjunction with the signal from the HDR imager to attain better quality images across different lighting conditions.

[0096] In one embodiment, an HDR image is obtained by processing images captured by multiple imagers, as disclosed, for example, in Paul Debevec et al., "Recovering High Dynamic Range Radiance Maps from Photographs," Computer Graphics (ACM SIGGRAPH Proceedings) (Aug. 16, 1997), which is incorporated by reference herein in its entirety. The ability to capture multiple exposures simultaneously using the imagers is advantageous because artifacts caused by motion of objects in the scene can be mitigated or eliminated.

Hyperspectral Imaging by Multiple Imagers

[0097] In one embodiment, a multi-spectral image is rendered by multiple imagers to facilitate the segmentation or recognition of objects in a scene. Because the spectral reflectance coefficients vary smoothly in most real world objects, the spectral reflectance coefficients may be estimated by capturing the scene in multiple spectral dimensions using imagers with different color filters and analyzing the captured images using Principal Components Analysis (PCA).

[0098] In one embodiment, half of the imagers in the camera array are devoted to sampling in the basic spectral dimensions (R, G, and B) and the other half of the imagers are devoted to sampling in shifted basic spectral dimensions (R', G', and B'). The shifted basic spectral dimensions are shifted from the basic spectral dimensions by a certain wavelength (e.g., 10 nm).

[0099] In one embodiment, pixel correspondence and non-linear interpolation are performed to account for the sub-pixel shifted views of the scene. Then the spectral reflectance coefficients of the scene are synthesized using a set of orthogonal
spectral basis functions as disclosed, for example, in J. P. Parkkinen, J. Hallikainen and T. Jaaskelainen, "Characteristic Spectra of Munsell Colors," J. Opt. Soc. Am. A, 6:318 (August 1989), which is incorporated by reference herein in its entirety. The basis functions are eigenvectors derived by PCA of a correlation matrix, and the correlation matrix is derived from a database storing spectral reflectance coefficients measured from, for example, Munsell color chips (a total of 1257) representing the spectral distribution of a wide range of real world materials, to reconstruct the spectrum at each point in the scene.

[0100] At first glance, capturing different spectral images of the scene through different imagers in the camera array appears to trade resolution for higher dimensional spectral sampling. However, some of the lost resolution may be recovered. The multiple imagers sample the scene over different spectral dimensions where each sampling grid of each imager is offset by a sub-pixel shift from the others. In one embodiment, no two sampling grids of the imagers overlap. That is, the superposition of all the sampling grids from all the imagers forms a dense, possibly non-uniform, montage of points. Scattered data interpolation methods may be used to determine the spectral density at each sample point in this non-uniform montage for each spectral image, as described, for example, in Shiaofen Fang et al., "Volume Morphing Methods for Landmark Based 3D Image Deformation," SPIE vol. 2710, Proc. 1996 SPIE International Symposium on Medical Imaging, pages 404-415, Newport Beach, Calif. (February 1996), which is incorporated by reference herein in its entirety. In this way, a certain amount of resolution lost in the process of sampling the scene using different spectral filters may be recovered.
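The reconstruction in paragraphs [0097]-[0100] reduces, per pixel, to solving a small linear system in the PCA basis weights. The sketch below assumes known imager spectral sensitivities and precomputed basis functions; all matrix shapes and names are illustrative.

```python
# Per-pixel reflectance estimate from multi-spectral camera responses,
# projected onto a small PCA basis (e.g., derived from the Munsell chips).
import numpy as np

def reconstruct_spectra(responses, sensitivities, basis):
    """responses: (npix, k) responses of the k spectral imagers per pixel;
    sensitivities: (k, nwave) spectral sensitivity of each imager;
    basis: (nbasis, nwave) PCA basis functions, with nbasis <= k.
    Returns (npix, nwave) estimated reflectance spectra."""
    A = sensitivities @ basis.T                 # (k, nbasis): response of each basis
    coeffs, *_ = np.linalg.lstsq(A, responses.T, rcond=None)
    return (basis.T @ coeffs).T                 # back to spectra per pixel
```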
[0101] As described above, image segmentation and object recognition are facilitated by determining the spectral reflectance coefficients of the object. The situation often arises in security applications wherein a network of cameras is used to track an object as it moves from the operational zone of one camera to another. Each zone may have its own unique lighting conditions (fluorescent, incandescent, D65, etc.) that may cause the object to have a different appearance in each image captured by different cameras. If these cameras capture the images in a hyper-spectral mode, all images may be converted to the same illuminant to enhance object recognition performance.
[0102] In one embodiment, camera arrays with multiple imagers are used for providing medical diagnostic images. Full spectral digitized images of diagnostic samples contribute to accurate diagnosis because doctors and medical personnel can place higher confidence in the resulting diagnosis. The imagers in the camera arrays may be provided with color filters to provide full spectral data. Such camera arrays may be installed on cell phones to capture and transmit diagnostic information to remote locations as described, for example, in Andres W. Martinez et al., "Simple Telemedicine for Developing Regions: Camera Phones and Paper-Based Microfluidic Devices for Real-Time, Off-Site Diagnosis," Analytical Chemistry (American Chemical Society) (Apr. 11, 2008), which is incorporated by reference herein in its entirety. Further, the camera arrays including multiple imagers may provide images with a large depth of field to enhance the reliability of image capture of wounds, rashes, and other symptoms.

[0103] In one embodiment, a small imager (including, for example, 20-500 pixels) with narrow spectral bandpass filters is used to produce a signature of the ambient and local light sources in a scene. By using the small imager, the exposure and white balance characteristics may be determined more accurately and at a faster speed. The spectral bandpass filters may be ordinary color filters or diffractive elements of a bandpass width adequate to allow the number of camera arrays to cover the visible spectrum of about 400 nm. These imagers may run at a much higher frame rate and obtain data (which may or may not be used for its pictorial content) for processing into information to control the exposure and white balance of other, larger imagers in the same camera array. The small imagers may also be interspersed within the camera array.

Optical Zoom Implemented Using Multiple Imagers

[0104] In one embodiment, a subset of imagers in the camera array includes telephoto lenses. The subset of imagers may have other imaging characteristics that are the same as imagers with non-telephoto lenses. Images from this subset of imagers are combined and super-resolution processed to form a super-resolution telephoto image. In another embodiment, the camera array includes two or more subsets of imagers equipped with lenses of more than two magnifications to provide differing zoom magnifications.

[0105] Embodiments of the camera arrays may achieve their final resolution by aggregating images through super-resolution. Taking an example of providing 5x5 imagers with a 3x optical zoom feature, if 17 imagers are used to sample the luma (G) and 8 imagers are used to sample the chroma (R and B), the 17 luma imagers allow a resolution that is four times higher than what is achieved by any single imager in the set of 17 imagers. If the number of imagers is increased from 5x5 to 6x6, an additional 11 imagers become available. In comparison with an 8 Megapixel conventional image sensor fitted with a 3x zoom lens, a resolution that is 60% of the conventional image sensor is achieved when 8 of the additional 11 imagers are dedicated to sampling luma (G) and the remaining 3 imagers are dedicated to chroma (R and B) and near-IR sampling at 3x zoom. This considerably reduces the chroma sampling (or near-IR sampling) to luma sampling ratio. The reduced chroma to luma sampling ratio is somewhat offset by using the super-resolved luma image at 3x zoom as a recognition prior on the chroma (and near-IR) image to resample the chroma image at a higher resolution.

[0106] With 6x6 imagers, a resolution equivalent to the resolution of a conventional image sensor is achieved at 1x zoom. At 3x zoom, a resolution equivalent to about 60% of a conventional image sensor outfitted with a 3x zoom lens is obtained by the same imagers. Also, there is a decrease in luma resolution at 3x zoom compared with the resolution of conventional image sensors at 3x zoom. The decreased luma resolution, however, is offset by the fact that the optics of a conventional image sensor have reduced efficiency at 3x zoom due to crosstalk and optical aberrations.
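As a rough sanity check on the figures in paragraph [0105], one may assume that linear resolution grows with the square root of the number of imagers fused by super-resolution; this scaling law is an idealization, and practical gains are lower.

```python
import math

# Idealized super-resolution gain: linear resolution ~ sqrt(number of
# imagers fused). 17 luma imagers then give roughly a 4x gain, consistent
# with the "four times higher" figure in paragraph [0105].
def sr_linear_gain(num_imagers):
    return math.sqrt(num_imagers)

print(round(sr_linear_gain(17), 1))  # -> 4.1
```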
[0107] The zoom operation achieved by multiple imagers has the following advantages. First, the quality of the achieved zoom is considerably higher than what is achieved in a conventional image sensor due to the fact that the lens elements may be tailored for each change in focal length. In conventional image sensors, optical aberrations and field curvature must be corrected across the whole operating range of the lens, which is considerably harder in a zoom lens with moving elements than in a fixed lens element where only aberrations for a fixed focal length need to be corrected.
Additionally, the fixed lens in the imagers has a fixed chief ray angle for a given height, which is not the case with a conventional image sensor with a moving zoom lens. Second, the imagers allow simulation of zoom lenses without significantly increasing the optical track height. The reduced height allows implementation of thin modules even for camera arrays with zooming capability.

[0108] The overhead required to support a certain level of optical zoom in camera arrays according to some embodiments is tabulated in Table 2.
TABLE 2

  No. of        No. of Luma            No. of Chroma
  Imagers in    Imagers at different   Imagers at different
  Camera        Zoom levels            Zoom Levels
  array         1X    2X    3X         1X    2X    3X

  25            17
  36            16

[0109] In one embodiment, the pixels in the images are mapped onto an output image with a size and resolution corresponding to the amount of zoom desired in order to provide a smooth zoom capability from the widest-angle view to the greatest-magnification view. Assuming that the higher magnification lenses have the same center of view as the lower magnification lenses, the image information available is such that a center area of the image has a higher resolution available than the outer area. In the case of three or more distinct magnifications, nested regions of different resolution may be provided with resolution increasing toward the center.

[0110] An image with the most telephoto effect has a resolution determined by the super-resolution ability of the imagers equipped with the telephoto lenses. An image with the widest field of view can be formatted in at least one of the two following ways. First, the wide field image may be formatted as an image with a uniform resolution where the resolution is determined by the super-resolution capability of the set of imagers having the wider-angle lenses. Second, the wide field image is formatted as a higher resolution image where the resolution of the central part of the image is determined by the super-resolution capability of the set of imagers equipped with telephoto lenses. In the lower resolution regions, information from the reduced number of pixels per image area is interpolated smoothly across the larger number of "digital" pixels. In such an image, the pixel information may be processed and interpolated so that the transition from higher to lower resolution regions occurs smoothly.

[0111] In one embodiment, zooming is achieved by inducing a barrel-like distortion into some, or all, of the array lenses so that a disproportionate number of the pixels are dedicated to the central part of each image. In this embodiment, every image has to be processed to remove the barrel distortion. To generate a wide angle image, pixels closer to the center are sub-sampled while outer pixels are super-sampled. As zooming is performed, the pixels at the periphery of the imagers are progressively discarded and the sampling of the pixels nearer the center of the imager is increased.
[0112] In one embodiment, mipmap filters are built to allow images to be rendered at a zoom scale that is between the specific zoom range of the optical elements (e.g., the 1x and 3x zoom scales of the camera array). Mipmaps are a precalculated, optimized set of images that accompany a baseline image. A set of images associated with the 3x zoom luma image can be created from a baseline scale at 3x down to 1x. Each image in this set is a version of the baseline 3x zoom image but at a reduced level of detail. Rendering an image at a desired zoom level is achieved using the mipmap by (i) taking the image at 1x zoom and computing the coverage of the scene for the desired zoom level (i.e., what pixels in the baseline image need to be rendered at the requested scale to produce the output image), (ii) for each pixel in the coverage set, determining if the pixel is in the image covered by the 3x zoom luma image, (iii) if the pixel is available in the 3x zoom luma image, choosing the two closest mipmap images and interpolating (using a smoothing filter) the corresponding pixels from the two mipmap images to produce the output image, and (iv) if the pixel is unavailable in the 3x zoom luma image, choosing the pixel from the baseline 1x luma image and scaling it up to the desired scale to produce the output pixel. By using mipmaps, smooth optical zoom may be simulated at any point between two given discrete levels (i.e., 1x zoom and 3x zoom).
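The rendering procedure (i)-(iv) in paragraph [0112] hinges on blending the two mipmap levels nearest the requested zoom. A minimal sketch, assuming all mipmap images have already been resampled to the output resolution; the level layout and names are illustrative.

```python
# Blend the two precomputed mipmap levels bracketing the requested zoom.
import numpy as np

def render_zoom(mipmaps, levels, zoom):
    """mipmaps: list of float images, all resampled to output resolution;
    levels: sorted zoom factors, e.g. [1.0, 1.5, 2.0, 2.5, 3.0];
    zoom: requested factor within [levels[0], levels[-1]]."""
    hi = next(i for i, s in enumerate(levels) if s >= zoom)
    if hi == 0 or levels[hi] == zoom:
        return mipmaps[hi]                            # exact level available
    lo = hi - 1
    t = (zoom - levels[lo]) / (levels[hi] - levels[lo])
    return (1.0 - t) * mipmaps[lo] + t * mipmaps[hi]  # smooth blend
```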
Capturing Video Images

[0113] In one embodiment, the camera array generates high frame rate image sequences. The imagers in the camera array can operate independently to capture images. Compared to conventional image sensors, the camera array may capture images at a frame rate up to N times higher (where N is the number of imagers). Further, the frame periods of the imagers may overlap to improve operations under low-light conditions. To increase the resolution, a subset of imagers may operate in a synchronized manner to produce images of higher resolution. In this case, the maximum frame rate is reduced by the number of imagers operating in a synchronized manner. The high-speed video frame rates enable slow-motion video playback at a normal video rate.

[0114] In one example, two luma imagers (green imagers or near-IR imagers), two blue imagers and two green imagers are used to obtain high-definition 1080p images. Using permutations of four luma imagers (two green imagers and two near-IR imagers, or three green imagers and one near-IR imager) together with one blue imager and one red imager, the chroma imagers can be upsampled to achieve 120 frames/sec for 1080p video. For higher frame rate imaging devices, the number of frame rates can be scaled up linearly. For Standard-Definition (480p) operation, a frame rate of 240 frames/sec may be achieved using the same camera array.

[0115] Conventional imaging devices with a high-resolution image sensor (e.g., 8 Megapixels) use binning or skipping to capture lower resolution images (e.g., 1080p30, 720p30 and 480p30). In binning, rows and columns in the captured images are interpolated in the charge, voltage or pixel domains in order to achieve the target video resolutions while reducing the noise. In skipping, rows and columns are skipped in order to reduce the power consumption of the sensor. Both of these techniques result in reduced image quality.

[0116] In one embodiment, the imagers in the camera arrays are selectively activated to capture a video image. For example, 9 imagers (including one near-IR imager) may be used to obtain 1080p (1920x1080 pixels) images, while 6 imagers (including one near-IR imager) may be used to obtain 720p (1280x720 pixels) images, or 4 imagers (including one near-IR imager) may be used to obtain 480p (720x480 pixels) images. Because there is an accurate one-to-one pixel correspondence between the imager and the target video images,
the resolution achieved is higher than with traditional approaches. Further, since only a subset of the imagers is activated to capture the images, significant power savings can also be achieved. For example, a 60% reduction in power consumption is achieved in 1080p mode and an 80% reduction in 480p mode.
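The N-times frame rate of paragraph [0113] follows from offsetting the start of exposure of each imager by 1/N of the common frame period. A minimal scheduling sketch; the timing model is an assumption, not the patent's controller design.

```python
# Staggered start-of-exposure offsets: N imagers at the same per-imager
# frame period, offset by period/N, sample the scene N times per period.
def stagger_offsets(num_imagers, frame_period_s):
    return [i * frame_period_s / num_imagers for i in range(num_imagers)]

# Four imagers each running at 30 frames/sec -> an output sample every
# 1/120 s, i.e., 120 frames/sec overall.
print(stagger_offsets(4, 1.0 / 30.0))  # [0.0, 0.00833.., 0.01666.., 0.025]
```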
[0117] Using the near-IR imagers to capture video images is advantageous because the information from the near-IR imagers may be used to denoise each video image. In this way, the camera arrays of embodiments exhibit excellent low-light sensitivity and can operate in extremely low-light conditions. In one embodiment, super-resolution processing is performed on images from multiple imagers to obtain higher resolution video images. The noise-reduction characteristics of the super-resolution process, along with fusion of images from the near-IR imagers, result in very low-noise images.

[0118] In one embodiment, high-dynamic-range (HDR) video capture is enabled by activating more imagers. For example, in a 5x5 camera array operating in 1080p video capture mode, only 9 cameras are active. A subset of the remaining 16 cameras may be overexposed and underexposed by a stop, in sets of two or four, to achieve a video output with a very high dynamic range.
Other Applications for Multiple Imagers

[0119] In one embodiment, the multiple imagers are used for estimating distance to an object in a scene. Since information regarding the distance to each point in an image is available in the camera array along with the extent in x and y coordinates of an image element, the size of an image element may be determined. Further, the absolute size and shape of physical items may be measured without other reference information. For example, a picture of a foot can be taken and the resulting information may be used to accurately estimate the size of an appropriate shoe.
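The size estimation of paragraph [0119] follows from the pinhole projection relation: an object's metric extent is its pixel extent times depth divided by the focal length expressed in pixels. A minimal sketch under that assumed model; the parameter names are illustrative.

```python
# Pinhole relation: metric size = pixel extent * depth / focal length (px).
# Depth comes from the parallax-based depth map described earlier in the
# specification; the pinhole model itself is an assumed simplification.
def object_size_m(extent_px, depth_m, focal_px):
    return extent_px * depth_m / focal_px

# A foot spanning 900 px, imaged at 0.5 m with a 1500 px focal length:
print(object_size_m(900, 0.5, 1500))  # -> 0.3 (about 30 cm)
```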
[0120] In one embodiment, reduction in depth of field is simulated in images captured by the camera array using distance information. The camera arrays according to the present invention produce images with a greatly increased depth of field. The long depth of field, however, may not be desirable in some applications. In such cases, a particular distance or several distances may be selected as the "in best focus" distance(s) for the image, and based on the distance (z) information from parallax information, the image can be blurred pixel-by-pixel using, for example, a simple Gaussian blur. In one embodiment, the depth map obtained from the camera array is utilized to enable a tone mapping algorithm to perform the mapping using the depth information to guide the level, thereby emphasizing or exaggerating the 3D effect.
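The depth-guided blur of paragraph [0120] can be sketched by blending a sharp and a pre-blurred copy of the image per pixel according to distance from the selected focus plane; a true pixel-by-pixel Gaussian blur, as the paragraph suggests, would vary the blur radius continuously. The single-layer blend and all names are illustrative simplifications.

```python
# Synthetic depth-of-field: blur grows with distance from the focus plane.
import numpy as np
from scipy.ndimage import gaussian_filter

def synthetic_dof(image, depth, focus_z, max_sigma=6.0):
    """image: single-channel float array; depth: per-pixel z map of the
    same shape, in the same units as focus_z."""
    blurred = gaussian_filter(image, max_sigma)            # worst-case blur
    spread = np.abs(depth - focus_z)                       # distance from focus
    t = np.clip(spread / (spread.max() + 1e-6), 0.0, 1.0)  # 0 = in focus
    return (1.0 - t) * image + t * blurred
```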
[0121] In one embodiment, apertures of different sizes are provided to obtain aperture diversity. The aperture size has a direct relationship with the depth of field. In miniature cameras, however, the aperture is generally made as large as possible to allow as much light as possible to reach the camera array. Different imagers may receive light through apertures of different sizes. For imagers to produce a large depth of field, the aperture may be reduced, whereas other imagers may have large apertures to maximize the light received. By fusing the images from sensors with different aperture sizes, images with a large depth of field may be obtained without sacrificing the quality of the image.

[0122] In one embodiment, the camera array according to the present invention refocuses based on images captured from offsets in viewpoints. Unlike a conventional plenoptic camera, the images obtained from the camera array of the present invention do not suffer from an extreme loss of resolution. The camera array according to the present invention, however, produces sparse data points for refocusing compared to the plenoptic camera. In order to overcome the sparse data points, interpolation may be performed to refocus data from the sparse data points.

[0123] In one embodiment, each imager in the camera array has a different centroid. That is, the optics of each imager are designed and arranged so that the fields of view of the imagers slightly overlap but for the most part constitute distinct tiles of a larger field of view. The images from each of the tiles are panoramically stitched together to render a single high-resolution image.

[0124] In one embodiment, camera arrays may be formed on separate substrates and mounted on the same motherboard with spatial separation. The lens elements on each imager may be arranged so that the corner of the field of view slightly encompasses a line perpendicular to the substrate. Thus, if four imagers are mounted on the motherboard with each imager rotated 90 degrees with respect to another imager, the fields of view will be four slightly overlapping tiles. This allows a single design of WLO lens array and imager chip to be used to capture different tiles of a panoramic image.

[0125] In one embodiment, one or more sets of imagers are arranged to capture images that are stitched to produce panoramic images with overlapping fields of view, while another imager or set of imagers has a field of view that encompasses the tiled image generated. This embodiment provides different effective resolutions for imagers with different characteristics. For example, it may be desirable to have more luminance resolution than chrominance resolution. Hence, several sets of imagers may detect luminance with their fields of view panoramically stitched. Fewer imagers may be used to detect chrominance with the field of view encompassing the stitched field of view of the luminance imagers.

[0126] In one embodiment, the camera array with multiple imagers is mounted on a flexible motherboard such that the motherboard can be manually bent to change the aspect ratio of the image. For example, a set of imagers can be mounted in a horizontal line on a flexible motherboard so that in the quiescent state of the motherboard, the fields of view of all of the imagers are approximately the same. If there are four imagers, an image with double the resolution of each individual imager is obtained, so that details in the subject image that are half the dimension of details resolvable by an individual imager can be resolved. If the motherboard is bent so that it forms part of a vertical cylinder, the imagers point outward. With a partial bend, the width of the subject image is doubled while the detail that can be resolved is reduced, because each point in the subject image is in the field of view of two rather than four imagers. At the maximum bend, the subject image is four times wider while the detail that can be resolved in the subject is further reduced.

Offline Reconstruction and Processing

[0127] The images processed by the imaging system 400 may be previewed before or concurrently with saving of the image data on a storage medium such as a flash device or a hard disk. In one embodiment, the images or video data include rich light field data sets and other useful image information that were originally captured by the camera array. Other traditional file formats could also be used.
The stored images or video may be played back or transmitted to other devices over various wired or wireless communication methods.

[0128] In one embodiment, tools are provided for users by a remote server. The remote server may function both as a repository and an offline processing engine for the images or video. Additionally, applets mashed up as part of popular photo-sharing communities such as Flikr, Picasaweb, Facebook, etc. may allow images to be manipulated interactively, either individually or collaboratively. Further, software plug-ins into image editing programs may be provided to process images generated by the imaging device 400 on computing devices such as desktops and laptops.

[0129] Various modules described herein may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, application specific integrated circuits (ASICs), or any type of media suitable for storing electronic instructions, each coupled to a computer system bus. Furthermore, the computers referred to in the specification may include a single processor or may be architectures employing multiple processor designs for increased computing capability.

[0130] While particular embodiments and applications of the present invention have been illustrated and described herein, it is to be understood that the invention is not limited to the precise construction and components disclosed herein and that various modifications, changes, and variations may be made in the arrangement, operation, and details of the methods and apparatuses of the present invention without departing from the spirit and scope of the invention as it is defined in the appended claims.

What is claimed is:
1. A camera array, comprising:
   a plurality of cameras configured to capture images of a scene;
   an image processor;
   wherein one of the plurality of cameras includes a primary camera and the plurality of cameras forms:
      at least one camera above the primary camera;
      at least one camera below the primary camera;
      at least one camera to the left of the primary camera; and
      at least one camera to the right of the primary camera;
   wherein cameras having the same imaging characteristics are located in locations selected from the group consisting of:
      locations including above the primary camera and below the primary camera; and
      locations including to the left of the primary camera and to the right of the primary camera;
   wherein the image processor is configured to measure parallax using images captured by the plurality of cameras by detecting parallax-induced changes that are consistent across the images taking into account the position of the cameras that captured the images; and
   wherein the image processor is further configured to generate a depth map using measured parallax.
2. The camera array of claim 1, wherein:
   images captured by the plurality of cameras include different occlusion sets, where the occlusion set of a first camera is the portion of a scene visible to a second camera in the plurality of cameras that is occluded from the view of the first camera; and
   the image processor is further configured to measure parallax using images captured by the plurality of cameras by ignoring pixels in the images captured by the plurality of cameras that are in the occlusion set of the primary camera.
3. The camera array of claim 1, wherein each camera in the plurality of cameras comprises:
   optics comprising at least one lens element and at least one aperture; and
   a sensor comprising a two dimensional array of pixels and control circuitry for controlling imaging parameters.
4. The camera array of claim 1, wherein the image processor is further configured to generate at least one higher resolution image using images captured by the plurality of cameras and parallax measurements to compensate for parallax in the captured images.
5. The camera array of claim 4, wherein the image processor is configured to select at least one distance as a focal plane and to apply blurring to pixels in at least one higher resolution image with depths in the depth map that are not proximate a focal plane.
6. The camera array of claim 1, wherein the camera array comprises a 3x3 array of cameras.
7. The camera array of claim 1, wherein the camera array comprises a 4x4 array of cameras.
8. The camera array of claim 1, wherein the camera array comprises a 5x5 array of cameras.
9. The camera array of claim 1, wherein each camera includes a spectral filter configured to pass a specific spectral band of light selected from the group consisting of a Bayer filter, one or more Blue filters, one or more Green filters, one or more Red filters, one or more shifted spectral filters, one or more near-IR filters, and one or more hyper-spectral filters.
10. The camera array of claim 1, wherein at least two cameras include a Red filter, at least two cameras include a Green filter, and at least two cameras include a Blue filter.
11. The camera array of claim 1, wherein at least two cameras are near-IR cameras.
12. The camera array of claim 11, wherein the camera array further comprises a near-IR light source configured to illuminate the scene during image capture.
13. The camera array of claim 12, wherein:
   the at least one camera above the primary camera is a near-IR camera;
   the at least one camera below the primary camera is a near-IR camera;
   the at least one camera to the left of the primary camera is a near-IR camera; and
   the at least one camera to the right of the primary camera is a near-IR camera.
14. The camera array of claim 1, wherein:
   the plurality of cameras comprises cameras having different imaging characteristics; and
   control circuitry that configures the cameras having different imaging characteristics to operate with at least one difference in operating parameters.
15. The camera array of claim 14, wherein the at least one difference in operating parameters includes at least one imaging parameter selected from the group consisting of exposure time, gain, and black level offset.
16. The camera array of claim 14, wherein the plurality of cameras comprises a distribution of cameras selected from the group consisting of: a symmetric distribution of cameras of different types; and an irregular distribution of cameras of different types.
17. The camera array of claim 1, wherein the camera array is a monolithic camera array assembly comprising:
   a lens element array forming the optics of each camera; and
   a single semiconductor substrate on which all of the pixels for each camera are formed.
18. The camera array of claim 1, wherein the plurality of cameras is formed on separate semiconductor substrates.
19. A camera array, comprising:
   a plurality of cameras configured to capture images of a scene, where each camera comprises:
      optics comprising at least one lens element and at least one aperture; and
      a sensor comprising a two dimensional array of pixels and control circuitry for controlling imaging parameters;
   a controller configured to control operation parameters of the plurality of cameras; and
   an image processing pipeline module comprising a parallax confirmation and measurement module;
   wherein one of the plurality of cameras includes a primary camera and the plurality of cameras forms:
      at least one camera above the primary camera;
      at least one camera below the primary camera;
      at least one camera to the left of the primary camera; and
      at least one camera to the right of the primary camera;
   wherein cameras having the same imaging characteristics are located in locations selected from the group consisting of:
      locations including above the primary camera and below the primary camera; and
      locations including to the left of the primary camera and to the right of the primary camera;
   wherein images captured by the plurality of cameras include different occlusion sets, where the occlusion set of a first camera is the portion of a scene visible to a second camera in the plurality of cameras that is occluded from the view of the first camera;
   wherein the image processing pipeline module comprises a parallax confirmation and measurement module configured to measure parallax using images captured by the plurality of cameras by detecting parallax-induced changes that are consistent across all of the images taking into account the position of the cameras that captured the images; and
   wherein the parallax confirmation and measurement module is further configured to generate a depth map using measured parallax.
20. The camera array of claim 19, wherein the parallax confirmation and measurement module is further configured to ignore pixels in the images captured by the plurality of cameras that are in the occlusion set of the primary camera.
21. A camera array, comprising:
   a plurality of cameras configured to capture images of a scene, where each camera comprises:
      optics comprising at least one lens element and at least one aperture; and
      a sensor comprising a two dimensional array of pixels and control circuitry for controlling imaging parameters;
   a controller configured to control operation parameters of the plurality of cameras;
   an image processing pipeline module comprising a parallax confirmation and measurement module; and
   a near-IR light source configured to illuminate the scene during image capture;
   wherein one of the plurality of cameras includes a primary camera and the plurality of cameras includes at least two near-IR cameras;
   wherein the two near-IR cameras are located in locations selected from the group consisting of:
      locations including above the primary camera and below the primary camera; and
      locations including to the left of the primary camera and to the right of the primary camera;
   wherein the image processing pipeline module comprises a parallax confirmation and measurement module configured to measure parallax using images captured by the plurality of cameras by detecting parallax-induced changes that are consistent across all of the images taking into account the position of the cameras that captured the images; and
   wherein the parallax confirmation and measurement module is further configured to generate a depth map using measured parallax.
22. The camera array of claim 21, wherein the parallax confirmation and measurement module is further configured to ignore pixels in the images captured by the plurality of cameras that are in the occlusion set of the primary camera.

* * * * *
