
PH 301

ENGINEERING OPTICS
Lecture_Optical Systems_25-26
Optical systems: Telescopes, Microscopes (Bright field, Dark field
Confocal, Phase contrast, Digital holographic)
Projection systems, Spectrometers
Optical Instruments
An optical instrument is defined as a device used for observing,
measuring, & recording optical data or information.

Most optical instruments are those intended to enhance visual capability.

Magnifying Glass
A magnifying glass or Loupe (French, imperfect gem) is the simplest of
optical instruments intended for the enhancement of visual capability.

 A magnifier is any positive lens with a focal length of less than 250
mm.

 A healthy human eye is capable of focusing from infinity, down to a minimum distance of about 250 mm.

 This same average eye is capable of resolving a repeating high-contrast target with equal-width black & white lines when each line subtends an angle of 1 arcmin or more.
 Most often, when viewing an object it is our intent to distinguish as
much detail on that object as possible. To that end we first bring the
object as close as possible to the eye.

 When that closest distance is 250 mm, the smallest resolved element on the object – a detail that subtends an angle of 1 arcmin (tan θ ≈ 0.0003) – will have an actual size of

250 mm × 0.0003 = 0.075 mm

 If this resolved element is a part of a repeating pattern of equal-thickness parallel black & white lines, then each cycle (1 black line + 1 white line) will have a thickness of 0.150 mm.

 Frequency of this finest resolvable pattern will then be

1/0.150 mm = 6.67 cycles/mm
 Approximate magnification M provided by lens is calculated by
dividing its focal length into 250. A 50 mm lens will provide a
magnification of

M = 250/50 = 5X

 This formula applies to the case where object is placed at the focal
plane of magnifier lens & virtual image being viewed appears at
infinity.
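The resolution & magnification numbers above follow from the 1-arcmin visual limit & the 250 mm near distance; a minimal sketch (note the exact tan(1 arcmin) ≈ 0.00029, which the text rounds to 0.0003, so the computed element is 0.073 mm rather than 0.075 mm):

```python
import math

NEAR_DISTANCE_MM = 250.0      # closest comfortable viewing distance
VISUAL_LIMIT_ARCMIN = 1.0     # resolvable angle for a healthy eye

# Smallest resolvable detail at the near point
theta = math.radians(VISUAL_LIMIT_ARCMIN / 60.0)
element_mm = NEAR_DISTANCE_MM * math.tan(theta)

# One cycle of an equal-width black/white pattern is two elements wide
cycle_mm = 2.0 * element_mm
frequency_cy_per_mm = 1.0 / cycle_mm

def magnifier_power(focal_length_mm: float) -> float:
    """Approximate magnifier power: M = 250 / focal length (mm)."""
    return NEAR_DISTANCE_MM / focal_length_mm

print(f"element: {element_mm:.3f} mm, frequency: {frequency_cy_per_mm:.2f} cy/mm")
print(f"M(50 mm) = {magnifier_power(50.0):.0f}X")
```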

 A reasonable maximum limit for magnification by a simple magnifier would be about 20 to 25X.
Eyepiece

Eyepiece is quite similar in function to the magnifier. It differs primarily in that the eyepiece is generally used in conjunction with other optics to form a complete instrument, such as a telescope or microscope.

 Eyepiece serves two functions simultaneously:
- Projects final image to the viewer’s eye, &
- Forms an image of the system aperture stop, which will be the exit pupil of that instrument.

 Eyepieces become increasingly complex as the field of view to be covered is increased.

 Complexity is reflected in the number of elements that are required & the glass types that are used in the design.
Family of common eyepiece designs, showing the increase in complexity as a
function of the field of view that must be covered.
 In most applications, eyepiece must be made axially adjustable to
permit focus to accommodate for differences in the eyesight of
viewers. A normal adjustment range from + 3 to - 4D will satisfy most
requirements.

 Amount of eyepiece travel (in mm) can be calculated as,

Axial travel per diopter = EFL2/1000

 For a 28-mm EFL eyepiece, each diopter of adjustment will require:

Axial travel per diopter = 282/1000 = 0.784 mm/D

 Moving eyepiece closer to the image being viewed will result in a diverging output beam, which corresponds to a negative diopter setting.

 To achieve the −4 to +3 D focus range, the 28 mm eyepiece will have to be moved from about −3.2 to +2.4 mm relative to the infinity (zero-diopter) setting.
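The travel formula above can be sketched in a few lines (the exact −4 D travel comes out at −3.1 mm, which the text rounds to −3.2 mm):

```python
def travel_per_diopter(efl_mm: float) -> float:
    """Axial eyepiece travel (mm) per diopter of focus: EFL^2 / 1000."""
    return efl_mm ** 2 / 1000.0

efl = 28.0
per_d = travel_per_diopter(efl)   # 0.784 mm/D for a 28 mm EFL eyepiece

# Moving closer to the image gives a diverging beam (negative diopters),
# so negative travel corresponds to negative diopter settings.
travel_minus_4d = -4.0 * per_d    # about -3.1 mm
travel_plus_3d = +3.0 * per_d     # about +2.4 mm
print(f"{per_d:.3f} mm/D, -4 D: {travel_minus_4d:.1f} mm, +3 D: {travel_plus_3d:.1f} mm")
```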
Microscope Eyepiece or Ocular

Huygenian Ramsden

Kellner

Simple eyepieces are constructed of two plano-convex lenses (A, B). More corrected designs consist of three or more lenses, with at least one an achromatic doublet (C). All eyepieces also have an internal aperture that is used to reduce aberrations but that limits the field of view.
The earliest eyepiece is the Huygenian eyepiece (A). It has the aperture (or diaphragm) placed between the two lenses.

The Ramsden eyepiece has the aperture placed before the first lens (B). Both eyepiece designs suffer from chromatic aberration; the Ramsden yields a better image.

The Kellner eyepiece, a variation of the Ramsden, replaces the eye lens with an achromatic doublet. It has good chromatic correction & is reasonably inexpensive. Modern eyepieces are variations of the Kellner design; with the increased use of achromatic doublets & triplets, chromatic aberration is eliminated & the field flattened.

Power (P, magnification) of an eyepiece is defined as D/focal length, where D = the closest distance of distinct vision, or 250 mm.

Magnification of a compound microscope = Pobj × Peye

Eyepieces are usually 6.3X, 8X, 10X, 12X, 16X, 20X.

As magnification of eyepiece increases, the field of view decreases.

Microscope
 A reasonable maximum limit for magnification by a simple magnifier
would be about 20 to 25X.

 When a higher magnification is required, a compound magnifier is used, which is commonly referred to as a microscope.

 It consists of two lens assemblies: objective lens & eyepiece.

 Objective lens will form a magnified image of the object, while the eyepiece will be used to view that magnified image, thus providing additional magnification.

 Most often a microscope is designed by selecting from a wide variety of commercially available components, while custom microscope design remains an active & challenging area of optical design.
Compound microscope layout: viewer’s eye at the exit pupil; eyepiece (EFL = 25 mm, Mag = 10×); image/field stop; objective lens (EFL = 31 mm, Mag = 4×, NA = 0.12); working distance 25 mm; object 4.0 mm dia.

Total microscope magnification = (10X) × (4X) = 40X

f/# = 1/(2NA) = 1/(2 × 0.12) = 1/0.24 = f/4.17
 At the field stop this f/# will be increased by the magnification factor, making it 4 × 4.17 = f/16.67.

 Since we are using a 10× (25 mm EFL) eyepiece, we can calculate the exit pupil diameter to be 25/16.67 = 1.5 mm.

 Microscope’s field of view: the 16 mm diameter field stop will limit the actual field of view to 16/4 = 4.0 mm diameter at the object surface.

 When viewing the 16 mm diameter field stop through the 25 mm EFL eyepiece, it will have an apparent angular half field of view whose tangent is 8/25 = 0.32.

 We can see that the 4.0 mm diameter object, which would subtend a half-field angle with a tangent of 2/250 = 0.008 when viewed at a distance of 250 mm with the naked eye, will now subtend a half angle whose tangent is 0.32 when viewed through the microscope.

 New apparent object size is magnified by a factor of 0.32/0.008 = 40X.
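The chain of numbers above (f/#, exit pupil, field of view, total magnification) can be reproduced in a few lines; a sketch using the example values from the text (4×/0.12 objective, 10× eyepiece, 16 mm field stop):

```python
def microscope_params(obj_mag, obj_na, eye_efl_mm, field_stop_mm, near_mm=250.0):
    """Compound-microscope numbers for a given objective & eyepiece."""
    eye_mag = near_mm / eye_efl_mm             # 10x for a 25 mm EFL eyepiece
    total_mag = obj_mag * eye_mag              # 4 x 10 = 40x
    fno_object = 1.0 / (2.0 * obj_na)          # f/4.17 at the object (NA = 0.12)
    fno_image = obj_mag * fno_object           # f/16.67 at the field stop
    exit_pupil_mm = eye_efl_mm / fno_image     # 25 / 16.67 = 1.5 mm
    object_field_mm = field_stop_mm / obj_mag  # 16 / 4 = 4.0 mm at the object
    return total_mag, fno_image, exit_pupil_mm, object_field_mm

total, fno_img, pupil, field = microscope_params(4.0, 0.12, 25.0, 16.0)
print(total, round(fno_img, 2), round(pupil, 2), field)  # 40.0 16.67 1.5 4.0
```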


Compound Microscope

Telescope
 Prefix tele-: at a distance

Basic optical parameters of an astronomical telescope made up of a 1000 mm f/10 objective lens & a 28 mm eyepiece.
Telescope
 Telescope is used to view large objects that are at great distances
from us when it is not practical to reduce the distance to the object.

Dimensional parameters:

 A practical limit on the size of an objective lens, based largely on cost, would be about 100 mm diameter, with a speed of f/10, making the focal length 1000 mm.

 Telescope magnification is calculated by dividing the eyepiece focal length (28 mm) into the objective focal length (1000 mm).

M = 1000 mm/28 mm = 35.7X

 Angular field of view that will be seen is limited by the diameter of the field stop in the eyepiece (supplier data, 23.3 mm). Resulting field of view for the objective lens will be:

tan⁻¹(23.3/1000) = tan⁻¹(0.0233) = 1.34 deg
Telescope
 Exit pupil diameter: It will be equal to the objective lens diameter
divided by telescope magnification,

100 mm/35.7X = 2.8 mm

Optical performance:
 For an astronomical telescope objective lens it is reasonable to strive for near-diffraction-limited performance. To determine the radius of the diffraction-limited blur spot (Airy disk):

Airy disk radius R = 1.22 × λ × f/#

 For this objective lens, R = 1.22 × 0.00056 mm × 10 = 0.0068 mm
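All of the telescope numbers above follow from four inputs; a sketch (λ = 0.00056 mm, i.e. 560 nm, as used in the Airy-disk calculation above):

```python
import math

def telescope_params(obj_dia_mm, obj_efl_mm, eye_efl_mm,
                     field_stop_mm, wavelength_mm=0.00056):
    """Basic astronomical-telescope parameters."""
    mag = obj_efl_mm / eye_efl_mm                 # 1000/28 = 35.7x
    field_deg = math.degrees(math.atan(field_stop_mm / obj_efl_mm))  # ~1.34 deg
    exit_pupil_mm = obj_dia_mm / mag              # 100/35.7 = 2.8 mm
    fno = obj_efl_mm / obj_dia_mm                 # f/10
    airy_radius_mm = 1.22 * wavelength_mm * fno   # 0.0068 mm
    return mag, field_deg, exit_pupil_mm, airy_radius_mm

mag, fov, pupil, airy = telescope_params(100.0, 1000.0, 28.0, 23.3)
print(round(mag, 1), round(fov, 2), round(pupil, 1), round(airy, 4))
```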

 Image formed by the objective lens will be viewed using the 28 mm EFL (9X) eyepiece.
Lens picture & on-axis performance data for a 1000 mm, f/10 achromatic cemented
doublet.
Observing the moon

Field of view as seen through a telescope with a 1.34 deg field of view.
 Looking through eyepiece, moon’s disk will subtend an angle of about
20 deg to eye.

 Without telescope, moon subtends an angle of 0.55 deg to naked eye.

 Based on the visual resolution limit of 1 arcmin per element, we can conclude that, with the naked eye, we will resolve 0.55 deg × 60 = 33 elements across the moon’s diameter (2160 miles).

 Thus, we can conclude that the smallest element on the moon’s surface that we can resolve with our naked eye will be 2160 miles/33 = 65 miles.

 With the help of our new telescope, that number will be reduced to
just under 2 miles.
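The moon arithmetic above can be sketched as:

```python
MOON_ANGLE_DEG = 0.55          # angle the moon subtends to the naked eye
MOON_DIAMETER_MILES = 2160.0
EYE_LIMIT_ARCMIN = 1.0         # visual resolution limit
TELESCOPE_MAG = 1000.0 / 28.0  # 35.7x, from the 1000 mm / 28 mm combination

elements = MOON_ANGLE_DEG * 60.0 / EYE_LIMIT_ARCMIN  # 33 resolvable elements
naked_eye_miles = MOON_DIAMETER_MILES / elements      # ~65 miles per element
telescope_miles = naked_eye_miles / TELESCOPE_MAG     # just under 2 miles
print(round(elements), round(naked_eye_miles), round(telescope_miles, 1))  # 33 65 1.8
```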
 Certainly naked eye can see a dark crater on the bright moon’s
surface (high contrast), even if that crater is less than 65 miles in
diameter.

 But, if there are two 50 mile diameter craters that are separated by 50 miles (center to center), the naked eye will not be able to determine that there are indeed two separate craters, i.e., they will not be resolved.

 However, if we look at these two craters with our new telescope, we will clearly resolve two separate & distinct craters.

 Additional telescope magnification will be introduced if we choose an eyepiece with a shorter focal length.
Basic optical parameters of a 10× compact terrestrial telescope, consisting of a 400
mm, f/8 objective lens, a roof Pechan derotation prism, & a 40 mm EFL eyepiece.
 Ideal configuration for viewing & recording celestial objects today involves replacing the eyepiece with a modern digital camera designed specifically for astrophotography.

 Such a camera can be connected directly to a laptop computer, where objects imaged by the objective lens can be viewed on the computer screen & then captured & stored for later examination & fine tuning.

 Application: viewing a sporting event from a distance of some 400 ft.


Binoculars
 A pair of binoculars is made up of two identical terrestrial telescopes,
linked together such that their optical axes are parallel.

 Distance between the two exit pupils is made to be adjustable to accommodate individual differences in eye separation (interpupillary distance, or IPD).

 Average value of IPD is about 64 mm, with an adjustment of ±10 mm adequate to satisfy most requirements.

 It is critical that the two optical axes be parallel as they enter the viewer’s eyes. A maximum misalignment tolerance of one arcminute will be accommodated by most users with little difficulty.

 Dual optical paths result in more natural, relaxed viewing, with an enhanced stereo effect relative to that provided by a single telescope, i.e., a monocular.
Optical layout of a pair of 7 × 50 Porro prism binoculars
Riflescope
Riflescope is a low-power telescope designed specifically to improve
the sighting accuracy of a rifle or similar weapon.
Optical system of the riflescope contains a relay lens to correct image orientation,
& it requires greater-than-usual eye relief.
Parallax problems result from the image from the objective not being coplanar with the reticle.

If image is not coplanar with reticle (i.e. focal plane of objective image is
either in front of or behind reticle), then putting eye at different points
behind eyepiece causes reticle to appear to be at different points on the
target.

This optical effect causes parallax-induced aiming errors that can make a
telescopic sight user miss a small target at a distance for which the
telescopic sight was not parallax-adjusted. This is known as parallax shift
where the reticle seems to "float" around over the target whenever there
are small movements of the user's head & eyes.
PH 301
ENGINEERING OPTICS
Lecture_Optical Systems_25
Light Microscopes

 Bright field microscope

 Dark field microscope

 Phase contrast microscope

 Fluorescent microscope
Light Microscopes

 Bright field microscope gives a magnified image of a dark specimen with a colourless background.

 Dark field microscope excludes the unscattered beam from the image. The field around the specimen is generally dark.

 Phase contrast microscope takes advantage of objects that alter the phase of incident light.

 Fluorescence microscope takes advantage of the inherently “fluorescent material” of a biological object that can be fluorescently labeled.
Scattering of Light
Scattering: small particles suspended in a medium of a different index of
refraction diffuse a portion of the incident radiation in all directions.

With scattering, there is no energy transformation, but a change in the spatial distribution of the energy. Scattering, along with absorption, causes attenuation problems with radar & other measuring devices.

There are three different types of scattering:
Rayleigh scattering,
Mie scattering, &
Non-selective scattering.

Rayleigh scattering mainly consists of scattering from atmospheric gases. This occurs when the particles causing scattering are smaller in size than the wavelengths of radiation in contact with them. As the wavelength decreases, the amount of scattering increases.

The sky appears blue because blue light is scattered around four times as much as red light, & UV light is scattered about 16 times as much as red light.
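Those factors come from the 1/λ⁴ dependence of Rayleigh scattering; a sketch with illustrative wavelengths (red ≈ 650 nm, blue ≈ 460 nm, UV ≈ 325 nm — values assumed here to reproduce the quoted ratios, not taken from the text):

```python
def rayleigh_ratio(wavelength_nm: float, reference_nm: float) -> float:
    """Relative Rayleigh scattering strength: intensity ~ 1 / lambda^4."""
    return (reference_nm / wavelength_nm) ** 4

RED, BLUE, UV = 650.0, 460.0, 325.0   # illustrative wavelengths in nm (assumed)
print(round(rayleigh_ratio(BLUE, RED)))  # blue scattered ~4x more than red
print(round(rayleigh_ratio(UV, RED)))    # UV scattered ~16x more than red
```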
Rayleigh Scattering

Mie scattering is caused by pollen, dust, smoke, water droplets, & other particles in the lower portion of the atmosphere. It occurs when the particles causing scattering are larger than the wavelengths of radiation in contact with them. Mie scattering is responsible for the white appearance of the clouds.
Non-selective scattering occurs in the lower portion of atmosphere when
particles are much larger than the incident radiation. This is not
wavelength dependent & is primary cause of haze.
Dark Field Microscopy

 Most commonly employed light microscope allows light to pass through the object -> Bright Field Microscopy

 Limitation of such a microscope is that transparent & semitransparent objects are not readily visible (they need staining).

 Visibility -> contrast between object & background

 Contrast can be greatly improved by creating a dark background.

Principle

 If the aperture of the condenser is opened completely & a darkfield stop is inserted below the condenser, the light rays reaching the object form a hollow cone.

 If a stop of suitable size is selected, all direct rays from the condenser can be made to pass outside the object.

 Any object within this beam of light will reflect some light into the objective & become visible.

 This method of illumination makes the object appear self-luminous against a dark background -> Dark-field illumination.
Phase Object

 Object is completely transparent but has an optical thickness which varies from point to point.

 It introduces phase differences between disturbances which pass through different parts of it.

 Consequently, disturbances immediately behind the object, & in the conjugate image plane, have the same amplitude at all points but will show variations in phase from point to point.

 Human eye is sensitive to intensity only & cannot detect changes of phase, so the field of view appears uniformly bright.
Phase Contrast Microscopy

 Phase contrast microscopy is a special adaptation of light microscopy & helps obtain a clear picture of living or unstained cells.

 Adaptors convert minute phase changes in the transmitted light, due to the refractive indices of the cell organelles, into perceptible shades of grey.

 This allows the organelles of a living cell to become visible with fair contrast.
Principle

 Differences in the wavelength of light rays are detected as differences in colour.

 Different shades of grey are distinguished by our eyes due to differences in the amplitude of light rays.

 Phase contrast microscope converts invisible small phase changes caused by cell components into visible intensity changes.

 Phase changes are caused by the biological material through which the light ray passes. If a material is absorbent, it causes the ray to undergo a change in amplitude, which is distinguished by our eyes (e.g., light passing through window glass vs. without it).
Consider three different materials & their effects on light.

1. Transparent & non-absorbent material with higher refractive index,

2. Transparent & non-absorbent but thicker, &

3. Transparent & absorbent.

Interference

1. Light rays undergo a phase change depending upon the refractive index of the transparent material.

2. Phase change is in direct proportion to the thickness of the material.

3. Light rays undergo a change in amplitude when they pass through an absorbent material.
 The greater the refractive index & thickness, the greater the change in phase.

 If biological material absorbs light it shows contrast, but living cells generally do not absorb light.

 Cells & their components thus show phase changes rather than amplitude changes.

 Value of the phase change is ¼ of the wavelength of light. But this phase change is not distinguished by our eyes.

Principle behind the phase contrast microscope is to convert this indistinguishable phase change into a distinguishable variation of contrast, with the help of two adaptors – the annular diaphragm & the annular phase plate.
Phase contrast is obtained with the help of annular diaphragm by separating
central & direct rays from diffracted rays.
Images of E. coli
Phase Imaging
Consider a transparent object with amplitude transmittance,

t_A(x, y) = exp[iφ(x, y)]

Expanding exp[iφ(x, y)],

exp[iφ(x, y)] = 1 + iφ(x, y) − (1/2)φ²(x, y) − (1/6)iφ³(x, y) + (1/24)φ⁴(x, y) + …

For mathematical simplicity, assume the object has a magnitude of unity; the finite extents of the entrance & exit pupils are neglected. There is also a necessary condition to achieve linearity between phase shift & intensity: the variable part of the object-induced phase shift, Δφ(x, y), should be small compared with 2π radians. Applying this approximation to the amplitude transmittance,

t_A(x, y) = exp(iφ₀) exp(iΔφ) ≈ exp(iφ₀)[1 + iΔφ]

I = |1 + iΔφ(x, y)|² ≈ 1
2
Phase-changing plate consists of a glass substrate on which a small
transparent dielectric dot is deposited.

Dot is centered on optical axis in focal plane & has a thickness & index
of refraction such that it should change phase of focused light by either
π/2 radians or 3π/2 radians relative to phase retardation of diffracted
light. If phase retardation is by π/2 radians, intensity in image plane
becomes,

I  exp[i ( / 2)]  i ( x, y )  i{1   ( x, y )}  1  2 ( x, y )


2 2

Image intensity has become linearly related to variations of the phase shift Δφ(x, y). This situation is referred to as Positive Phase Contrast.

If phase retardation is by 3π/2 radians, intensity in image plane,

I = |exp[i(3π/2)] + iΔφ(x, y)|² = |−i{1 − Δφ(x, y)}|² ≈ 1 − 2Δφ(x, y)

This case is referred to as Negative Phase Contrast.
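The positive & negative phase contrast results are easy to verify numerically: for a small object phase Δφ, the image intensity with a π/2 plate is (1 + Δφ)² ≈ 1 + 2Δφ, & with a 3π/2 plate it is (1 − Δφ)² ≈ 1 − 2Δφ. A sketch:

```python
import cmath
import math

def image_intensity(dphi: float, plate_shift_rad: float) -> float:
    """Image-plane intensity: |exp(i*plate_shift) + i*dphi|^2."""
    return abs(cmath.exp(1j * plate_shift_rad) + 1j * dphi) ** 2

dphi = 0.05                                        # small phase variation (radians)
positive = image_intensity(dphi, math.pi / 2)      # ~1 + 2*dphi (positive contrast)
negative = image_intensity(dphi, 3 * math.pi / 2)  # ~1 - 2*dphi (negative contrast)
print(round(positive, 4), round(negative, 4))  # 1.1025 0.9025
```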


Arrangement for phase contrast: a 4f system – object exp[iφ(x, y)] in the input plane, two lenses L of focal length f, the phase plate in the spatial-frequency plane, & the output (image) plane.


Optical Microscopes
Lexicon

 Micro: very small


microgram, micrometer, microsecond, micron, microdot,
microbiology, microorganism, microsurgery, microeconomics,
microelectronics, microwave, microchip, microcomputer,
microprocessor, microphone, microfilm, ………

 Microscope: an instrument for magnifying very small objects

 Microscopic: so small as to be visible only with a microscope

 Microscopy: the use of a microscope


Microscope
One or more lenses that make an enlarged image of an object.
Oldest published image known to have been made
with a microscope: Bees by Francesco Stelluti, 1630.
 Optical microscope, often referred to as “light microscope”, is
a type of microscope which uses visible light & a system of
lenses to magnify images of small samples.
 Optical microscopes are the oldest design of microscope &
were possibly designed in their present compound form in 17th
century.
 Aim: to improve RESOLUTION & CONTRAST.
 Microscopes which do not use visible light are:
Scanning Electron Microscope (SEM)
Transmission Electron Microscope (TEM)
Atomic Force Microscope (AFM)
 Inventor of microscope: Galileo Galilei
Galileo developed a compound microscope with a convex
& concave lens in 1609.

 Giovanni Faber coined the name “microscope”.


Greek words:
micron meaning “small”
skopein meaning “to look at”
 Optical & electron microscopy involve diffraction, reflection, or
refraction of electromagnetic radiation/electron beams
interacting with specimen, & subsequent collection of this
scattered radiation or another signal in order to create an
image.

 This process may be carried out by wide-field irradiation of


sample (e.g., standard light microscopy & transmission
electron microscopy) or by scanning of a fine beam over
sample (e.g., confocal laser scanning microscopy &
scanning electron microscopy).

 Scanning probe microscopy involves interaction of a


scanning probe with surface of object of interest.
Lenses & Bending of Light
 Lenses focus light rays at a specific place, called the focal point.

 Strength of a lens is related to its focal length.
Short focal length → more magnification

 Light is refracted (bent) when passing from one medium to another.

 Refractive index: a measure of how greatly a substance slows the velocity of light.

 Direction & magnitude of bending is determined by the refractive indices of the two media forming the interface.
Eyepiece Lens
Usually has a power of 10 X.

Eyepiece Lens × Objective Lens = Total Magnification

Objective Lens: Low power = 4x, Medium power = 10x, High power = 40x
Microscope Resolution
 Ability of a lens to separate or distinguish small objects
that are close together.

 Wavelength of light used is major factor in resolution

Shorter wavelength → Greater resolution

Properties of Microscope Objectives


Objective Lens
Types of Microscope
 Simple microscope

 Compound microscope

 Stereoscopic microscope

 Electron microscope

 Phase-Contrast microscope

 Digital holographic microscope


Simple Microscope
Similar to magnifying glass & has only one lens.
Compound Microscope
Lets light pass through an object & then through two or more
lenses.
Stereoscopic Microscope
Gives a three-dimensional view of an object.
(Ex. Insects & leaves).
Electron Microscope
 Uses a magnetic field to bend beams of electrons; instead of using
lenses to bend beams of light.
 Wavelength of electron beam is much shorter than light, resulting in
much higher resolution.
Scanning Electron Microscope
 SEM uses electrons reflected from surface of a specimen to
create image.

 Sample is scanned with a beam of electrons in a raster scan pattern.

 Electrons interact with the atoms that make up the sample, producing signals that contain information about the sample’s surface topography, composition, & other properties such as electrical conductivity.
SEM
Transmission Electron Microscope
 Electrons scatter when they pass through thin sections of a
specimen.

 Transmitted electrons (those that do not scatter) are used to produce the image.

 Denser regions in the specimen scatter more electrons & appear darker.
Phase-Contrast Microscope
 Enhances contrast between intracellular structures having slight
differences in refractive index.

 Excellent way to observe living cells.


Phase-contrast microscope
Bright-Field Microscope
 Produces a dark image against a brighter background.

 It uses several objective lenses – parfocal microscopes remain in focus when objectives are changed.

 Total magnification:
product of magnifications of ocular lens
& objective lens
Dark-Field Microscope
 Produces a bright image of object against a dark
background.

 It is used to observe living, unstained preparations.


Digital Holography
Digital Holographic Microscope
 Holography was invented by Dennis Gabor to improve electron
microscope.

 Basic concept of DHM is to magnify the hologram image by adopting an optical lens system so that the microscopic fringes can be resolved.

 DHM, unlike other microscopy, doesn’t record a projected image of the object; rather, the light wavefront information originating from the object is digitally recorded as a hologram.

 The imaging lens in traditional microscopy is replaced by a computer algorithm.
Applications of DHM
DHM has capability of non-invasively visualizing & quantifying
biological tissues.

Biomedical applications of DHM:

 To perform cell counting & to measure cell viability directly in the cell culture chamber.

 To study the apoptotic process (programmed cell death) in different cell types. Refractive index changes taking place during the apoptotic process are easily measured with DHM.

 Cell cycle analysis: Phase shift induced by cells has been shown to
be correlated to cell dry mass, which can be combined with other
parameters obtainable by DH, such as, cell volume & refractive
index, to provide a better understanding of cell cycle.
 Morphology analysis of cells: to study cell morphology using
neither staining nor labeling.

 DHM is used for automated plant stem cell monitoring.

 To study undisturbed processes in nerve cells, as no labeling is required. Swelling & shape changes of nerve cells caused by cellular imbalance are easily studied.

 To measure 3-D motion of human red blood cells moving in a microtube flow. Phase shift images are used to study red blood cell dynamics.

 Red blood cell volume & hemoglobin concentration are measured by combining information from absorption & phase shift images to facilitate a complete blood cell count.

 By combining several images calculated from the same hologram, but at different focal planes, an increased depth of field is obtained.
Advantages
 Simplicity of microscope: It requires a laser, a pinhole, & a CCD
camera, but no lenses at all (no aberration correction required).

 Simplicity of sample preparation in biology: no sectioning or staining is required, so living cells can be viewed.

 Maximum information: a single hologram contains all information about the 3-D structure of the object.

 Speed: changes in the specimen can ultimately be followed at the capture video rate of the CCD chip.

 Maximum resolution of the order of the laser wavelength λ can easily be obtained, & can be further improved by at least a factor of two or three with an immersion holography setup.

 Compared to OCT, DHM requires only a pair of particle hologram images to get complete 3D flow information.
Digital Holography

Laser M
λ = 532 nm

BE BS

CCD
M

SLM L BS

BE: beam expander, BSs: beam splitters, SLM: spatial light modulator, RPM:
random phase mask, CCD: charge coupled device, L: lens
Phase Holographic Imaging’s The HolomonitorTM M3 (Sweden)
www.phiab.se
Resolutions Optics’s Desktop System (Canada)
www.resolutionoptics.com
The submersible system is a product with all the functionality of the 3D imaging technology encased in a waterproof housing. It allows quick & easy observation of micro-organisms & particles down to a depth of 5 kilometers.

Resolutions Optics’s Submersible System (Canada)


www.resolutionoptics.com
Digital Holographic Microscope DHMT1000 [Lyncee tec, Switzerland]
www.lynceetec.com
Phase-shifting digital holography. BE: beam expander, BS: beam splitter, RP: retardation plate, M: mirror, CCD: charge-coupled device.
Digital Hologram of Onion Peel (10X)
Intensity of Numerical Reconstruction with DH
Onion Peel (10X)
Phase of Numerical Reconstruction with DH
Onion (10X)
3-D presentation of Numerical Reconstruction’s
Phase with DH of Onion (10X)
Digital Hologram of Onion Peel (20X)
Intensity of Numerical Reconstruction with DH of
Onion Peel (20X)
Phase of Numerical Reconstruction with DH of
Onion (20X)
Digital Hologram of E.coli (20X)
Intensity of Numerical Reconstruction with DH of
E. coli (20X)
Phase of Numerical Reconstruction with DH of
E. coli (20X)
Digital Hologram of E.coli (40X)
Intensity of Numerical Reconstruction with DH of
E. coli (40X)
Phase of Numerical Reconstruction with DH of
E. coli (40X)
PH 301
ENGINEERING OPTICS
Lecture_Optical Detectors_29
Optical Detectors: Photographic emulsion, Thermal detectors,
Photodiodes, Photomultiplier tubes,
Detector arrays, Charge-coupled device
(CCD), Complementary metal-oxide
semiconductor (CMOS)
Photographic Emulsion

 Word emulsion is customarily used in a photographic context. Gelatin or gum arabic layers sensitized with dichromate, as used in the dichromated colloid processes (carbon & gum bichromate), are sometimes called emulsions.

 Photographic emulsion is a fine suspension of insoluble light-sensitive crystals in a colloid solution, usually consisting of gelatin.

 Light-sensitive component is one or a mixture of silver halides: silver bromide, chloride, & iodide.
 Silver bromide (AgBr), a soft, pale-yellow, insoluble salt well-
known (along with other silver halides) for its unusual sensitivity
to light.

 This property has allowed silver halides to become the basis of modern photographic materials.
 Photographic emulsion is usually 10 to 30 µm thick & is composed
of silver halide grains dispersed within gelatin.

 Grains are 1 µm or greater in diameter; large grains facilitate greater sensitivity, small grains enable finer resolution.

 Grains consist of silver, bromine, & iodine ions arranged in a crystal lattice. Sulfur-containing compounds are often added in order to form specks of silver sulfide, which increase photosensitivity.
Chemical properties of photographic film

 Film base is usually a plastic such as tri-acetate or polyester, which is coated with a light-sensitive emulsion.

 Photographic emulsion is not a true emulsion; it is a dispersion of small solid particles in a liquid medium which is then allowed to cool & set.

 Light-sensitive crystals are prepared by the combination of silver (Ag) & a halogen. Due to the very low solubility of silver halides, mixing aqueous solutions of silver ions & halide ions will result in the precipitation of silver halide crystals.
Silver nitrate (AgNO3) + Potassium bromide (KBr) → Silver bromide (AgBr) + Potassium nitrate (KNO3)

or

Ag⁺ (silver ion in solution) + Br⁻ (bromide ion in solution) → Ag⁺Br⁻ (silver bromide crystal)

Silver bromide is a lattice crystal containing millions of pairs of ions.


Formation of Latent Image
Step 1: Light Activation
Energy is released when photon strikes silver halide crystal freeing
electrons from bromide ion. Bromide ion is released from crystal as
bromine & is absorbed by gelatin.

Step 2: Movement of Electrons
Free electrons move through the crystal to a 'sensitivity speck' caused by imperfections in the crystal structure or created during the sensitizing process during manufacture.

Step 3: Deposition of Silver Ions


Negatively charged speck attracts positive silver ions which are
neutralized to form silver atoms. If enough silver atoms form at a single
point then a latent image is created. Latent image is not visible, even
under a microscope so the only way to tell if it is present is to chemically
develop the film to reveal the image.
Structure of a photographic film or plate
Development of Latent Image
Development
Developing agent supplies electrons to latent image thus attracting &
neutralizing silver ions to produce metallic silver which form a visible
image. Developing agents: Metol, Phenidone, Hydroquinone

Stop
When predetermined development time is reached, film is moved from
developer to a ‘stop bath’ which neutralizes developer & prevents any
further development of image. Stop bath: 1% solution acetic acid

Fixing
After development, emulsion still contains unexposed & undeveloped
silver halides. Film will look cloudy or milky if exposed to light. Fixer,
commonly sodium thiosulphate, converts unexposed silver halide to
soluble salts, which can be washed out.

Washing
Processed film is washed thoroughly to remove any chemical residue
after being dried.
Pictorial representation of photographic process

Exposure Latent image

After development After fixing

 Energy incident per unit area on a photographic emulsion during exposure
process is called exposure. [mJ/cm2]

E(x,y) = I(x,y) T

where I(x,y) is incident intensity & T is exposure time.
 Intensity transmittance: Ratio of intensity transmitted by a
developed transparency to intensity incident on that transparency,
averaged over a region that is large compared with a single grain
but small compared with fine structure in original exposure pattern,
is called intensity transmittance.

 Photographic density [F. Hurter & V. C. Driffield, 1890]: Logarithm of


reciprocal of intensity transmittance of a photographic
transparency should be proportional to silver mass per unit area of
that transparency.
 Hurter-Driffield curve

H-D curve for a typical emulsion

A film with a large value of γ is


called a high-contrast film, while
a film with a low γ is called a
low-contrast film.
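The definitions above combine neatly: density is D = log10(1/τ), and γ is the slope of the straight-line region of the H-D curve. A minimal sketch in Python (the sample H-D points are illustrative, not from a real emulsion):

```python
import math

def density(intensity_transmittance):
    """Photographic density D = log10(1 / tau) (Hurter & Driffield)."""
    return math.log10(1.0 / intensity_transmittance)

def gamma(log_E1, D1, log_E2, D2):
    """Slope of the linear region of the H-D curve (film contrast)."""
    return (D2 - D1) / (log_E2 - log_E1)

# A transparency passing 1% of the incident intensity has density 2:
print(density(0.01))              # 2.0
# Two points on the straight-line portion of a hypothetical H-D curve:
print(gamma(1.0, 0.5, 3.0, 2.5))  # 1.0 (neither high- nor low-contrast)
```

A γ well above 1 would mark a high-contrast film; well below 1, a low-contrast film.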
Photodiodes
 A photodiode is a semiconductor device that converts light into an
electrical current.

 Current is generated when photons are absorbed in photodiode.

 A small amount of current is also produced when no light is present.

 Photodiodes may contain optical filters, built-in lenses, & may have
large or small surface areas.

 Photodiodes usually have a slower response time as their surface


area increases.
 Photodiodes are similar to regular semiconductor diodes except that
they may be either exposed (to detect vacuum UV or X-rays) or
packaged with a window or optical fiber connection to allow light to
reach the sensitive part of the device.

 Many diodes designed for use specifically as a photodiode use a PIN


junction rather than a p-n junction, to enhance response.

 A photodiode is designed to operate in reverse bias.


 When a photon of sufficient energy strikes diode, it creates an
electron-hole pair. This mechanism is known as inner photoelectric
effect.

 If absorption occurs in junction's depletion region, these carriers are


swept from junction by the built-in electric field of depletion region.
Thus holes move toward anode, & electrons toward cathode, & a
photocurrent is produced.
 Total current through photodiode is sum of dark current (current
generated in absence of light) & photocurrent, so dark current must
be minimized to maximize the sensitivity of device.
I-V characteristic of a photodiode

Linear load lines - response of external circuit: I = (Applied bias voltage-Diode


voltage)/total resistance. Points of intersection with the curves represent actual
current & voltage for a given bias, resistance & illumination.
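The load-line intersection described above can be found numerically. A sketch assuming an ideal-diode I-V curve shifted by the photocurrent; all component values (saturation current, thermal voltage, bias, load resistance, photocurrent) are illustrative:

```python
import math

def operating_point(V_bias, R, I_ph, I_s=1e-9, V_T=0.025):
    """Intersect the photodiode curve I = I_s(exp(V/V_T) - 1) - I_ph
    with the load line I = (V_bias - V)/R by bisection on V."""
    def f(V):
        return I_s * (math.exp(V / V_T) - 1.0) - I_ph - (V_bias - V) / R
    lo, hi = V_bias, 0.7   # operating voltage lies between supply & forward knee
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    V = 0.5 * (lo + hi)
    return V, (V_bias - V) / R   # (diode voltage, circuit current)

# Reverse-biased photodiode, 50 uA photocurrent, 10 kOhm load:
V, I = operating_point(V_bias=-5.0, R=10e3, I_ph=50e-6)
print(round(V, 3), round(I * 1e6, 1))   # -4.5 V, -50.0 uA
```

In reverse bias nearly all of the supply appears across the diode and the current is set almost entirely by the photocurrent, which is why this region is used for detection.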
Materials commonly used to produce photodiodes include silicon,
germanium, & indium gallium arsenide (InGaAs).
Working of a photodiode
 When light illuminates PN junction, covalent bonds are ionized,
which generates hole & electron pairs.

 Photocurrents are produced due to generation of electron-hole


pairs. Electron-hole pairs are formed when photons of energy more
than 1.1 eV (band gap of silicon) hit the diode.

 When a photon enters depletion region of diode, it strikes an atom
with high energy, releasing an electron from the atom & thereby
producing a free electron & a hole.

 Electrons carry a -ve charge & holes a +ve charge. Depletion


region has a built-in electric field, due to which electron-hole
pairs move away from junction.

 Holes move to anode & electrons move to cathode to produce photo


current. Absorption depth depends on photon energy: photons of
lower energy penetrate deeper into material before being absorbed.
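The 1.1 eV threshold fixes the longest wavelength a silicon photodiode can detect, via λ = hc/E_g. A quick check (hc expressed in eV·nm):

```python
H_C_EV_NM = 1239.84  # Planck constant x speed of light, in eV*nm

def cutoff_wavelength_nm(band_gap_eV):
    """Longest wavelength a semiconductor can absorb: lambda = hc / E_g."""
    return H_C_EV_NM / band_gap_eV

# Silicon (E_g ~ 1.1 eV): lower-energy photons pass through undetected,
# which is why silicon response ends near 1100 nm.
print(round(cutoff_wavelength_nm(1.1)))   # 1127 (nm)
```

This matches the silicon spectral range of 190-1100 nm quoted later for photodiodes.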
PH 301
ENGINEERING OPTICS
Lecture_Optical Detectors_30
Photodiode Array
 A 1-D array of hundreds or thousands of photodiodes can be used
as a position sensor.

 Photodiode arrays (PDAs) allow high-speed parallel read-out, since
the driving electronics need not be built in, unlike a traditional
CMOS or CCD sensor.

A 2 × 2 cm photodiode array chip with more than 200 diodes.


Types of Photodiode
Types of photodiodes based on its construction & functions:

1. PN Photodiode
2. Schottky Photodiode
3. PIN Photodiode
4. Avalanche Photodiode

These diodes are widely used in applications where detection of


presence of light, color, position, intensity is required.
PIN junction consists of three differently doped regions.

There is an intrinsic or undoped layer sandwiched between a p- & an


n-doped region.

Typically this kind of junction is fabricated from amorphous silicon


with a band gap of about 1.8 eV.
Photomultiplier Tube
A photomultiplier tube, useful for light detection of very weak signals, is
a photoemissive device in which absorption of a photon results in
emission of an electron.

These detectors work by amplifying electrons generated by a


photocathode exposed to a photon flux.
 Photomultipliers are extremely sensitive detectors of light including
visible light, UV & near IR.

 Advantage of photomultipliers: Extreme sensitivity. They are able to


multiply the signal produced by incident light by figures up to 100
million.

 In addition to their very high levels of gain, photomultipliers also


exhibit a low noise level, high frequency response & a large
collection area.

 Despite all advances in photodiode technology, photomultipliers are


still used (particle physics, astronomy, medical imaging, & motion
picture film scanning) in virtually all cases when low levels of light
need to be detected.
Photomultiplier tube construction

Photomultipliers are contained within a glass tube that maintains a


vacuum within device. There are three main electrodes within a
photomultiplier:

1. Photocathode
2. Dynodes
3. Anode

 Within the envelope of photomultiplier, there is one photocathode,


one anode, but there are several dynodes.

 Anode & dynode are traditional metallic electrodes with coated


surfaces, but photocathode is actually a thin deposit on the entry
window.

 Dynode is an electrode in a vacuum tube that serves as an electron


multiplier through secondary emission.
Photomultiplier operation
 Photons enter photomultiplier tube & strike photocathode. When this
occurs, electrons are produced as a result of photoelectric effect.

 Generated electrons are directed towards an area of photomultiplier


called electron multiplier. This area serves to increase or multiply no.
of electrons by a process known as secondary emission.
 Electron multiplier is made up of a no. of electrodes, called
dynodes. These dynodes are held at different voltages, each one at a
more positive voltage than previous one, to provide the environment
required to produce electron multiplication effect.

 This is operated by pulling electrons progressively towards more


positive areas in following way:
Electrons leave photocathode with the energy received from
incoming photon. They move towards first dynode & are
accelerated by electric field & arrive with much greater energy
than they left the cathode.
 When they strike first dynode, several low-energy secondary
electrons are released; these are in turn attracted & accelerated
by greater positive potential of 2nd dynode, & this process is
repeated along all dynodes until electrons reach anode, where they
are collected.

 Geometry of dynode chain is carefully designed so that a cascade


effect occurs along its length with an ever increasing no. of
electrons being produced at each stage.

 When anode is reached, accumulation of charge results in a sharp


current pulse for the arrival of each photon at photocathode.
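The cascade described above multiplies geometrically: if each dynode releases on average δ secondary electrons per incident electron, n dynodes give an overall gain of δⁿ. A sketch (δ and n are typical illustrative values):

```python
def pmt_gain(delta, n_dynodes):
    """Overall PMT multiplication: each dynode emits ~delta secondaries
    per incident electron, so gain grows as delta ** n_dynodes."""
    return delta ** n_dynodes

# 10 dynodes at ~4 secondaries each already give a gain near 10^6:
print(pmt_gain(4, 10))            # 1048576
# 12 dynodes at ~5 each approach the "100 million" figure quoted above:
print(f"{pmt_gain(5, 12):.1e}")   # 2.4e+08
```

The exponential dependence on δ is why small changes in the dynode supply voltage (which sets δ) change the gain so strongly.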
Photomultiplier Use
 Photomultiplier tubes require high voltage (typically in range of 1-2
kV) for their operation.

 Anode is the most positive electrode.

 Dynodes are held at intermediate voltages that are normally


generated using a resistive potential divider.

 It is necessary to ensure that photomultiplier is mounted & used


with care. Stray magnetic fields can affect their operation as
electron stream can be bent & operation of device can be impaired.

 It is also necessary to screen a photomultiplier tube from excessive


light levels while in operation. High light levels can destroy a
photomultiplier because it can become over-excited.
Advantages of photodiodes compared to photomultipliers
1. Excellent linearity of output current as a function of incident light
2. Spectral response from 190 nm to 1100 nm (silicon), longer
wavelengths with other semiconductor materials
3. Low noise
4. Ruggedized to mechanical stress
5. Low cost
6. Compact & light weight
7. Long lifetime
8. High quantum efficiency, typically 60-80%
9. No high voltage required

Disadvantages of photodiodes compared to photomultipliers


1. Small area
2. No internal gain (except avalanche photodiodes, but their gain is
typically 10^2-10^3 compared to 10^5-10^8 for the photomultiplier)
3. Much lower overall sensitivity
4. Photon counting only possible with specially designed, usually
cooled photodiodes, with special electronic circuits
5. Response time for many designs is slower
6. Latent effect
Charge-coupled Device (CCD)

 CCDs are used for high resolution imaging. They are particularly
useful in Astronomy, where scientists have taken advantage of
extreme sensitivity to light.

 This aspect of the device has many practical applications including


laboratory research where detection of low light levels is needed.

 Images taken by a CCD need to be corrected for certain factors,


including dark noise, readout noise, & saturation, among others.

 Correction is done through collection of dark frames & flat fields,


which can be subtracted from an image using image data reduction
software.
 A CCD is a device used in digital photography that converts an
optical image into electrical signal.

 CCD chips can detect faint amounts of light & are capable of
producing high resolution images needed in scientific research &
applications thereof.

 Theoretically, CCDs are linear, producing accurate images &


transmitting the values they detect in a 1:1 ratio.
Basic Theory of a CCD
 CCD is a special integrated circuit consisting of a flat, 2D array of
small light detectors referred to as pixels.

 CCD chip is an array of Metal-Oxide-Semiconductor capacitors


(MOS capacitors), each capacitor represents a pixel.

 Each pixel acts like a bucket for electrons.

 A CCD chip acquires data as light or electrical charge. During an


exposure, each pixel fills up with electrons in proportion to the
amount of light that enters it.

 CCD takes this optical or electronic input & converts it into an


electronic signal, which is then processed by some other
equipment &/or software to either produce an image or to give user
valuable information.
Specifications of a CCD camera
(Santa Barbara Instrumentation Group, Model: ST-8300M/C)

CCD Kodak KAF-8300


Pixel array 3326 × 2504 pixels
Total pixels 8.3 Megapixels
Pixel size 5.4 × 5.4 microns
Shutter type Electromechanical
Exposure 0.1 to 3600 seconds
Dimensions 4 × 5 × 2 inches
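The figures in the spec table are self-consistent; a quick check of the pixel count and the implied active sensor dimensions:

```python
# From the ST-8300 spec table above:
pixels_x, pixels_y = 3326, 2504
pixel_um = 5.4

total_mp = pixels_x * pixels_y / 1e6
sensor_w_mm = pixels_x * pixel_um / 1000
sensor_h_mm = pixels_y * pixel_um / 1000

print(round(total_mp, 1))                             # 8.3 Megapixels, as specified
print(round(sensor_w_mm, 1), round(sensor_h_mm, 1))   # ~18.0 x 13.5 mm active area
```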
Dark Frames: A dark frame is an image taken, theoretically, with no
exposure to light.
Shutter remains closed, but light may still leak in to a certain degree.
In order to obtain a decent dark frame, it may be necessary to take
one’s images in a so-called “dark room”.
 As one can see, the image has a sort of “salt and pepper” look to it.
This is due to a few factors, one of which is the fact that some
pixels are “hot pixels”.

 Dark current: A pixel in an ideal CCD, in the absence of light, would


maintain a steady value. When exposed to light, the pixel’s value
would increase in response to light but then as soon as the light
went away the pixel would maintain its value again.

 In reality, CCD pixels suffer from the effects of dark current,


whereby the pixel's value slowly increases, or brightens, over time.

 “Hot Pixels” are those where dark current is higher than average.
Averaging Images

 Averaging is important when dealing with such high-resolution


images, which depend on sensitive electronics.

 No two five-second dark frames are identical, because the
electronics operate a little differently each time & pixels may exhibit
some degree of variance.

 Some pixels may become saturated, that is too many photons hit it &
signal can no longer increase. In order to obtain a more accurate dark
frame, one should average a series of images. To do this, one must
collect an odd no. of images, preferably about 15-19, all at a certain
integration time.

 Once images are collected, software is used to obtain the mean value
from collection.

 Once an averaged, or mean, dark image is obtained, it is called a


Master Frame & can be used again & again, being subtracted from
actual images of the same integration time.

 But averaging is not only used for dark imaging; it is also useful when
taking images of evenly distributed light, or flats.
Flat Frames

 Flat frames, or flat fields, are images of evenly distributed light. A


good light source to provide uniform distribution at a certain
wavelength is an LED, or light emitting diode.

 Light should be reflected off of a white surface aimed at by the CCD


camera. It is difficult to acquire a perfect flat image because of the
sensitivity of the CCD & shadows cast by a lens over CCD.

 A good flat should portray how each pixel responds to light. Flats can
be used to correct vignetting, or obstruction of light paths by parts of
the instrument, as well as effects of dust particles on a lens.

 Flats can be averaged like darks & then divided into a dark-subtracted
image in order to remove the effects of vignetting & dust, as well as the
effects of a shutter, particularly for fast exposure times.

 By taking a series of flats at different exposure times, subtracting out


the appropriate darks, & combining the flats with IRAF, one can create
a “shutter map” which can be useful in correcting shutter effects.
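The dark-frame averaging and flat-field steps described above can be sketched with NumPy. This is a minimal model on synthetic data (the 4×4 "sensor", dark level, and vignetting-like response are all made up for illustration); real reduction software such as IRAF adds much more:

```python
import numpy as np

def master_frame(frames):
    """Average a stack of frames pixel-by-pixel (the 'Master Frame')."""
    return np.mean(frames, axis=0)

def calibrate(raw, master_dark, master_flat):
    """Standard reduction: subtract the master dark, then divide by the
    dark-subtracted flat normalized to its mean response."""
    flat = master_flat - master_dark
    flat = flat / flat.mean()
    return (raw - master_dark) / flat

# Synthetic 4x4 sensor: fixed dark level + vignetting-like pixel response.
dark_level = 100.0
response = np.linspace(0.8, 1.2, 16).reshape(4, 4)   # uneven pixel response
scene = np.full((4, 4), 500.0)                       # truly uniform scene

darks = [np.full((4, 4), dark_level) for _ in range(15)]   # odd no. of frames
flats = [dark_level + 300.0 * response for _ in range(15)]
raw = dark_level + scene * response                        # what the CCD records

corrected = calibrate(raw, master_frame(darks), master_frame(flats))
print(np.allclose(corrected, scene))   # True: dark & vignetting removed
```

The raw frame is visibly non-uniform even though the scene is flat; after dark subtraction and flat division the uniform scene is recovered exactly.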
PH 301
ENGINEERING OPTICS
Lecture_Optical Detectors_31
Advantages of CCD

 Quantum efficiency (QE) ~ 80%


 Low noise
 High dynamic range (~ 50K)
 High photometric precision
 Very linear behaviour
 Immediate digital conversion of data
 Low voltages required (5V-15V)
 Geometrically stable (good for astronomy)
 Rapid clocking
PH 301
ENGINEERING OPTICS
Lecture_Optical Detectors_32
Infrared was discovered in 1800 by Sir William Herschel as a form of
radiation beyond red light. Infra is Latin prefix for below.

There are four basic laws of IR radiation: Kirchhoff’s law of thermal


radiation, Stefan-Boltzmann law, Planck’s law, & Wien’s displacement
law.
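Two of the laws just named fix the numbers that matter for IR detection: Stefan-Boltzmann gives the total radiated power, Wien's displacement law gives the wavelength of peak emission. A short check:

```python
SIGMA = 5.670e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
WIEN_B = 2.898e-3   # Wien's displacement constant, m K

def radiant_exitance(T, emissivity=1.0):
    """Stefan-Boltzmann law: M = eps * sigma * T^4  [W/m^2]."""
    return emissivity * SIGMA * T**4

def peak_wavelength_um(T):
    """Wien's displacement law: lambda_max = b / T  [micrometres]."""
    return WIEN_B / T * 1e6

# A body near room temp (300 K) peaks around 9.7 um -- in the LWIR band
# of the table below, which is why thermal imagers work there.
print(round(peak_wavelength_um(300), 1))   # 9.7
print(round(radiant_exitance(300)))        # 459 (W/m^2)
```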

Division of IR radiation
Region Wavelength range (μm)
Near infrared (NIR) 0.78 – 1
Short wavelength IR (SWIR) 1–3
Medium wavelength IR (MWIR) 3–6
Long wavelength IR (LWIR) 6 – 15
Very long wavelength IR (VLWIR) 15 – 1000
Thermal Detectors
 A thermal detector measures electromagnetic radiation by
absorbing it & converting it into heat.

 Thermal detectors contain a small active element on which radiation


is focused.

 By blackening & insulating the element, & by minimizing its size,


the temp change & detector response are maximized.

 Temp change is approximately inversely proportional to exposed


surface area of element.

 As intensity of radiation increases the temp change on element of


detector increases.
Types of Thermal Detectors

1. Thermopiles : Thermocouple
2. Thermistors : Pyroelectric detector
3. Pneumatic devices : Golay Cells

Merits
 Used for wide wavelength range
 Linearity in response is seen

Demerits
 Slow response time
 Lower sensitivity
Thermal detectors are of low manufacturing costs in comparison to
photon detectors. Quality of these detectors was greatly improved after
introduction of micromachining technology.

Uncooled arrays of these detectors working in IR region (thermovision


cameras) are commercially available.

Electrical scheme for a thermal detector


Thermopile Detectors
 An electronic device that converts thermal energy into electrical
energy.

 It is composed of several thermocouples connected usually in


series, or less commonly, in parallel.

 It operates by measuring temp differential from their junction point


to point in which thermocouple output voltage is measured.

 Thermopile detectors consist of an array


of thermocouple junctions linked in
series as differential pairs. These
differential pairs form hot & cold
junctions.
Circuit of thermocouple
Thermocouple

 Two dissimilar metals like bismuth & antimony.


 Two ends are called Hot junction & Cold junction.
 Surface at junction of wires is coated with black metallic oxide.
 When IR radiation from heat source falls on hot junction, change in
temp at junction between metallic wires causes an electric potential
to develop between wires.
 Potential difference between unjoined ends of wires is amplified &
measured.
 Cold junction is not exposed to IR.
Differential Temperature Thermopile (Two Thermocouple)

 With two sets of thermocouple pairs connected in series.


 Two top thermocouple junctions are at temp T1 while two bottom
thermocouple junctions are at temp T2.
 Output voltage from thermopile ∆V is directly proportional to temp
differential ∆T or T1 - T2, across thermal resistance layer & no. of
thermocouple junction pairs.
 Thermopile voltage output is also directly proportional to heat flux q,
through thermal resistance layer.
Thermopile (several thermocouples)

 Thermopiles do not respond to absolute temp, but generate an


output voltage proportional to a local temp difference or temp
gradient.

 Thermopiles are used to provide an output in response to temp as


part of a temp measuring device, such as IR thermometers widely
used by medical professionals to measure body temp, or in thermal
accelerometers to measure temp profile inside sealed cavity of
sensor.
Applications of Thermopile Detectors
 Non-contact temp measurements in process control & industry
 Hand-held non-contact temp measurements
 Thermal line scanners
 Tympanic thermometers IR radiometry refringent leak detection
 Automotive exhaust gas analysis of HC, CO2, & CO.
 Commercial building HVAC & lighting control
 Security human presence & detection
 Blood glucose monitoring
 Horizon sensors for satellites, aircraft, & hobbyist applications
 Medical gas analysis such as blood alcohol breathalyzers,
incubator CO, CO2, & anaesthetic
 Aircraft flame & fire detection
 Hazard detection including flame & explosion.
Applications

 Thermopiles are also used to generate electrical energy from, for


instance, heat from electrical components, solar wind, radioactive
materials, laser radiation or combustion.

 The process is also an example of Peltier effect (electric current


transferring heat energy) as the process transfers heat from hot to
cold junctions.
Thermistors

 Thermistors are devices whose electric resistance is highly


temp-dependent.
 Materials used are sintered oxides of cobalt, manganese, & nickel
 A constant potential is applied across thermistor & difference in
current flow between an illuminated thermistor & a non-illuminated
thermistor is measured using a differential operational amplifier.
 As temp of mixture increases, its electrical resistance decreases.
 It should be operated at a frequency of less than 12Hz.
Pyroelectric Detectors
 It contains a noncentrosymmetric crystal, which exhibits an internal
electric field along polar axis.
 Pyroelectric effect depends on rate of change of detector temp
rather than on temp itself.
 These detectors operate with a much faster response time & are the
detectors of choice for Fourier transform spectrometers.
 Materials used in pyroelectric detectors are:
- Triglycine sulfate (TGS)
- Deuterated triglycine sulfate (DTGS)
- Lithium niobate (LiNbO3)
- Lithium tantalate (LiTaO3)
Bolometers

 It is a device for measuring power of incident electromagnetic


radiation via heating of a material with a temp-dependent electrical
resistance. It was invented in 1878 by American Astronomer Samuel
Pierpont Langley.
 Used to detect as well as measure microwave energy radiation &
heat.
 It works by using a temp-sensitive resistive element where
resistance changes through temp.
 Most frequently used resistive elements are Barretter & Thermistor.
Bolometer working

 A bolometer includes an absorptive part that is made up of a slight


metal layer. Connection of this part can be done through a thermal
reservoir with the help of a thermal link.

 Once radiation hits absorptive part, then its temp will change. So
compared with reservoir temp, this temp is high because of
radiation absorption.

 The intrinsic thermal time constant is given by the ratio of heat


capacity of absorptive element to thermal conductance of its link to
reservoir.

 Therefore, temp change is measured directly through a resistive


thermometer that is connected to absorptive part. Sometimes, the
absorptive part's own resistance is used for computing change in temp.
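The working just described reduces to two relations: a steady-state temperature rise ΔT = P/G and a thermal time constant τ = C/G, for absorbed power P, link conductance G, and absorber heat capacity C. A sketch with illustrative microbolometer-scale numbers (not from a specific device):

```python
def bolometer_response(P_abs, G, C):
    """Steady-state temp rise & thermal time constant of a bolometer:
    deltaT = P / G,  tau = C / G."""
    return P_abs / G, C / G

# Illustrative: 10 nW absorbed, G = 1e-7 W/K link, C = 1e-9 J/K absorber.
dT, tau = bolometer_response(10e-9, 1e-7, 1e-9)
print(round(dT, 3), round(tau * 1e3))   # 0.1 K rise, 10 ms time constant
```

The trade-off is visible in the formulas: weakening the link (smaller G) raises the signal ΔT but also lengthens the response time τ.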
Bolometer circuit
 Arrangement of circuit can be done in a bridge form, where one arm
includes temp-sensitive resistor.

 Arrangement of this resistor can be done in a microwave energy field


where power can be measured.

 This resistor absorbs measurand power because heat generates within


it. This generated heat can change resistance of an element. Change in
resistance can be measured by bridge circuit.
Golay Cell
1947

 Golay cell consists of a small cylindrical metal chamber closed by


a rigid blackened metal plate.
 Pneumatic chamber is filled with xenon gas.
 At one end of cylinder there is a flexible silvered diaphragm & at
the other end an IR-transmitting window.
 When IR radiation is passed through IR transmitting window the
blackened plate absorbs the heat, which causes expansion in gas.

 Resulting pressure of gas will cause deformation of diaphragm. This


motion of diaphragm indicates how much IR radiation falls on metal
plate.

 Light is made to fall on diaphragm which reflects light on photocell.


 Response time is 20ms.
Metal Bolometers: These have a linear change in resistance with temp.

This coefficient always decreases with temp, & burnout does not occur.
The coefficient is approximately equal to the inverse of temp, & is
therefore never high.

Semiconductor Bolometers: These have an exponential change of


resistance with temp.

Value of β depends upon the particular material. These detectors can


burn out. Two basic types exist: (1) those that are used at low
temperatures & (2) those that are used at about room temperature.
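The exponential resistance change above is commonly modelled as R(T) = R0·exp(β(1/T − 1/T0)); this is a sketch of that NTC behaviour with illustrative parameter values (R0, T0, and β are assumptions, not from a specific material):

```python
import math

def thermistor_resistance(T, R0=10e3, T0=298.0, beta=3000.0):
    """Semiconductor (NTC) model: R = R0 * exp(beta * (1/T - 1/T0)).
    beta is material-dependent; resistance falls exponentially as T rises."""
    return R0 * math.exp(beta * (1.0 / T - 1.0 / T0))

print(thermistor_resistance(298.0))             # 10000.0 at the reference temp
print(thermistor_resistance(308.0) < 10000.0)   # True: resistance drops as T rises
```

Contrast this with the metal bolometer above, whose resistance changes only linearly (and weakly) with temperature.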
Superconducting bolometers: These make use of extremely large
thermal coefficient of resistance at transition temperature.

Originally they needed to be controlled very carefully, or a small


change in ambient conditions (on the order of 0.01 K) could cause an
apparent signal of appreciable magnitude.

Carbon Bolometers: These are a form of semiconductor bolometer


that have been largely superseded by germanium bolometers. They
are made of small slabs of carbon resistor material, connected to a
metal heat sink by way of a thin mylar film. Although their
responsivities are comparable to germanium bolometers, their noise
is several orders of magnitude higher.
Thermographic camera
 A thermographic camera also called IR camera or thermal imaging
camera or thermal imager is a device that creates an image using IR
radiation. They are sensitive to wavelengths 1 μm to 14 μm.

 Practice of capturing & analyzing data such camera provide is called


thermography.
The hotter an object is, the more IR radiation it produces. Thermal
cameras can see this radiation & convert it to an image that we can
see with our eyes.
 All objects emit a certain amount of black body radiation as a function
of their temp.

 The higher an object’s temp, the more IR radiation is emitted. It even


works in total darkness because ambient light level does not matter.

 A major difference with optical cameras is that focusing lenses can’t be


made of glass, as glass blocks long-wave IR light. Spectral range of
thermal radiation is from 7 - 14 μm.

 Special materials such as Germanium, Calcium fluoride, Crystalline


silicon or special type of chalcogenide glasses are used.

 Except for Calcium fluoride all these materials are quite hard & have
high refractive index (Ge, n = 4) which leads to very high Fresnel
reflection from uncoated surfaces (up to more than 30%).

 For this reason most of lenses for thermal cameras have antireflective
coatings. Higher cost of these special lenses make thermographic
cameras costly.
 For use in temp measurement, brightest (warmest) parts of image
are customarily colored white, intermediate temperatures reds &
yellows, & dimmest (coolest) parts black.

 A scale should be shown next to a false color image to relate


colors to temperatures.
 Their resolution is considerably lower than that of optical
cameras, mostly only 160 × 120 or 320 × 240 pixels, although more
expensive cameras can achieve a resolution of 1280 × 1024 pixels.

 In uncooled detectors, temp differences at sensor pixels are


minute; a 1°C difference at scene induces just a 0.03°C difference
at sensor.

 Pixel response time is also fairly slow, at the range of tens of


milliseconds.
Cooled IR Detectors
 Typically contained in a vacuum-sealed case & cryogenically
cooled. Cooling is necessary for operation of the semiconductor
materials used. Typical operating temps range from 4 K to just
below room temp.

 Modern cooled detectors operate in 60 K to 100 K range (-213 to -


173°C).

 Without cooling, these sensors will be blinded or flooded by their


own radiation.

 Drawback of such cameras are that they are expensive both to


produce & to run. Cooling is both energy-intensive & time-
consuming. The camera may take several minutes to cool down
before it can begin working.

 Materials: Indium antimonide (3-5 μm), Indium arsenide, Mercury


cadmium telluride, lead sulfide, lead selenide.
Uncooled IR Detectors

 They use a sensor operating at ambient temp, or a sensor


stabilized at a temp close to ambient using small temp control
elements. Modern uncooled detectors all use sensors that work by
the change of resistance, voltage or current when heated by IR
radiation. These changes are then measured & compared to
values at the operating temp of sensor.

 They can be stabilized to an operating temp to reduce image


noise, but they are not cooled to low temps & do not require bulky,
expensive, energy consuming cryogenic coolers. This makes IR
cameras smaller & less costly. However, their resolution & image
quality tend to be lower than cooled detectors.

 Uncooled detectors are mostly based on pyroelectric &


ferroelectric materials or microbolometer technology. Materials
are used to form pixels with highly temp-dependent properties,
which are thermally insulated from environment & read
electronically.
Thermal Imager
Drone with IR camera
IR Night Vision Camera

Intensifier tubes absorb whatever light they can & amplify it.
PH 301
ENGINEERING OPTICS
Lecture_Display Devices_33
Cathode Ray Tube
 It is a high vacuum tube that contains one or more electron guns, in
which cathode rays produce image on a fluorescent screen.

 It modulates, accelerates, & deflects electron beam onto the screen


to create image.

 TV, Computer terminals,….


CRT Imaging Process
– Low-voltage emission of electrons
– High-voltage anode attracts electrons
– Electrons strike phosphors, causing them to glow brightly
– Color CRTs use three electron guns
– Projection CRTs use single-color phosphors
– Response of CRT is linear for wide grayscales
CRT Imaging Process
CRT performance
Advantages:
– CRTs can scan multiple resolutions
– Wide, linear grayscales are possible
– Precise color shading is achieved
– CRTs have no native pixel structure

Drawbacks:
– Brightness limited by tube size
– Resolution (spot size) linked to brightness
– Heavy, bulky displays for small screen sizes
CRT is getting old
– Technology is over 100 years old
– Monochrome CRTs used from 1910s
– Color CRTs developed in early 1950s
– Monochrome tubes were used in front projectors
in 1980s – 90s (7”, 8”, 9”)
– Manufacturing has largely moved to China
• High-volume, low-margin product
• Thomson TTE, TCL, & others make them
Liquid Crystal Display

Flat-panel displays based on


backlit arrays of LCDs.
Liquid Crystal Display
 LCD is a flat-panel display or other electronically modulated optical
device that uses light-modulating properties of liquid crystals.

 Liquid crystals do not emit light directly, instead using a backlight or


reflector to produce images in colour or monochrome.

 LCDs are available to display arbitrary images (as in a general-


purpose computer display) or fixed images with low information
content, which can be displayed or hidden, such as preset words,
digits, & 7-segment displays, as in a digital clock.

 Arbitrary images are made up of a large number of small pixels, while


other displays have larger elements.
Applications: Computer monitors, televisions, instrument panels,
aircraft cockpit displays, indoor & outdoor signage, digital cameras,
watches, calculators, mobile telephones.

LCD screens are also used on consumer electronic products such as


DVD players, video game devices & clocks.

LCD screens have replaced heavy, bulky CRTs in nearly all


applications.

LCD screens are available in a wider range of screen sizes than CRT &
plasma displays, with LCD screens available in sizes ranging from tiny
digital watches to huge, big-screen television sets.
 Since LCD screens do not use phosphors, they do not suffer image
burn-in when a static image is displayed on a screen for a long time
(e.g., the table frame for an aircraft schedule on an indoor sign).

 LCDs are, however, susceptible to image persistence.

 LCD screen is more energy-efficient & can be disposed of more


safely than a CRT can.

 Its low electrical power consumption enables it to be used in battery-


powered electronic equipment more efficiently than CRTs can be.

 By 2008, annual sales of televisions with LCD screens exceeded


sales of CRT units worldwide, & CRT became obsolete for most
purposes.
Different types of liquid crystal

Nematic LC Smectic LC Cholesteric LC


Molecular arrangements in a twisted nematic liquid crystal. Lines
indicate the polish direction of the alignment layers & the direction
of molecular alignment at various depths within the cell. Only a small
column of molecules is shown.
Structure of an electrically controlled liquid crystal cell.
LCD display technology

• Liquid-crystal displays are transmissive


• LC pixels act as light shutters
• Current LCD benchmarks:
– Sizes to 82” (prototypes)
– Resolution to 1920 × 1080 pixels
– Brightness > 500 nits
• Power draw < plasma in same size
• Weight < plasma in same size
LCD Imaging Process
LCD Imaging Process

Sharp Approach Samsung Approach

LG Philips Approach
LED TV
 Alignment of LCs is altered by applying an electric current to small,
specific areas of LC layer. This controls how the layer transmits
light that flows from TV’s backlighting.

 In this way, an LED TV can generate on-screen imagery. While a


traditional LCD TV relies on same technology, an LED TV utilizes a
more advanced form of backlighting.

 LEDs glow during exposure to electric current. Current flows between


LED anodes, which are +ve charged electrodes, & LED cathodes,
which are -ve charged electrodes.

 Traditional LCD TV utilizes fluorescent lamps for backlighting. These


lamps function by using mercury vapor to create UV rays, which in
turn cause the phosphor coating of the lamps to glow.

 LEDs have advantages over fluorescent lamps, including requiring


less energy & being able to produce brighter on-screen colors.
LED TV
 Not all LED TVs utilize LEDs in the same way.

 As of 2011, there are two primary forms of LED lighting technology;


full-array LED backlighting & edge-lit LED backlighting. Also known
as local-dimming technology, full-array technology employs arrays
or banks of LEDs that cover the entire back surfaces of LED TV
screens.

 In contrast, edge-lit technology employs LEDs only around edges


of LED TV screens. Unlike an edge-lit LED TV, an LED TV with full-
array technology can selectively dim specific groups of LEDs,
allowing for superior contrast ratio & superior overall picture
quality.
LED TV
 As with any TV, an LED TV needs energy in order for its
components to function. Specifically, an LED TV needs electric
current for stimulating LCs in its LCD panel & for activating its
LED backlighting.

 In comparison to standard LCD TVs, LED TVs consume less


energy, qualifying many of them for the EPA's Energy Star
energy-efficiency standard.

 An LED TV will typically consume between 20 & 30 percent less


energy than an LCD TV with the same screen size.

09.11.2021
Plasma Display Panel Technology

• Plasma displays are emissive


• Current PDP benchmarks:
– Sizes to 103”
– Resolution to 1920 × 1080
– Brightness > 100 nits (FW), 1000 nits peak
• Power draw 15%-20% > same size LCD
• Weight 20%-25% > same size LCD

Candela per square metre (cd/m2) is derived SI unit of Luminance. Nit (nt) is a
non-SI unit of Luminance (1 nt = 1 cd/m2).
Plasma Imaging Process

Three-step charge/discharge cycle


– Uses Neon-Xenon gas mixture
– 160-250V AC discharge in cell stimulates UV radiation
– UV stimulation causes color phosphors to glow & form picture
elements
– Considerable heat & EMI are released
Plasma Imaging Process
PDP Rib Structure (Simple)
Deep Cell Structure (Advanced)

• Waffle-like structure
• Higher light output
• Less light leakage between rib barriers
• Developed by Pioneer
Plasma Tube Structure (Future?)

• Phosphors, electrodes, & Ne/Xe


gas combined into long tubes
• Reduces cost of larger screens
• Flexible displays?
• Developed by Fujitsu
Real-World Plasma Benchmarks

• A review sample 50-inch plasma monitor measured from 93 nits (full


white) to 233 nits (small area), with ANSI (average) contrast measured
at 572:1 and peak contrast at 668:1
• Typical black level 0.21 nits (closer to CRT)
• Native Resolution – 1366 × 768
• Power consumption – 411.3 watts over a 6-hour interval (total of 2.089
kWh)
Real-World Plasma Benchmarks
Color rendering
– Gamut is smaller than REC 709
coordinates
– Green somewhat undersaturated
– Red, blue are very close to ideal
coordinates

Technology enhancements:
– Wider color gamuts (films,
phosphors)
– Improved lifetime (gas mixtures)
– Higher resolution (1920 × 1080 @
50”)
– Resistance to burn-in (change in
gas mixture)
Quantum Dots
 Quantum dots represent 3-D confinement (an electron confined in a
3D quantum box of dimensions from nanometers to tens of
nanometers).

 These dimensions are smaller than de Broglie wavelength.

 A 10 nm cube of GaAs would contain about 40,000 atoms.

 QD is often described as an artificial atom because electron is


dimensionally confined just like in an atom (where an electron is
confined near the nucleus) & similarly has only discrete energy
levels.

 Electrons in a QD represent a zero-dimensional electron gas.

 Recent efforts have focused on producing quantum dots in different


geometric shapes to control shapes of potential barrier confining
electrons (& holes).
 A simple case of a QD is a box of dimensions lx, ly, & lz. Energy levels
for an electron have only discrete values,

E_n = (h^2 / 8m_e) [ (n_x/l_x)^2 + (n_y/l_y)^2 + (n_z/l_z)^2 ]

where quantum nos. n_x, n_y, & n_z, each assuming integral values 1, 2, 3, …,
characterize quantization along x, y, & z axes respectively.
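The discrete levels can be evaluated numerically. A minimal sketch, assuming a 10 nm GaAs cube with an effective mass of 0.067 m_e (illustrative values, not taken from the slides):

```python
# Particle-in-a-3D-box levels: E_n = (h^2/8m)[(nx/lx)^2 + (ny/ly)^2 + (nz/lz)^2].
# Sketch assumes a 10 nm GaAs cube & effective mass 0.067*m_e (assumed values).
h = 6.62607015e-34       # Planck constant (J s)
m_e = 9.1093837015e-31   # free-electron mass (kg)
eV = 1.602176634e-19     # joules per electron-volt
m_eff = 0.067 * m_e      # GaAs conduction-band effective mass (assumption)
L = 10e-9                # box edge length (m)

def energy(nx, ny, nz, lx=L, ly=L, lz=L):
    """Discrete energy of state (nx, ny, nz) in joules."""
    return (h**2 / (8 * m_eff)) * ((nx/lx)**2 + (ny/ly)**2 + (nz/lz)**2)

ground = energy(1, 1, 1) / eV   # lowest confined level, ~0.17 eV for these values
first = energy(2, 1, 1) / eV    # first excited level (2x the ground state for a cube)
print(f"E(1,1,1) = {ground:.3f} eV, E(2,1,1) = {first:.3f} eV")
```

The ~0.1 eV-scale spacing, far above kT at room temperature, is why the δ-function density of states below yields sharp spectra even at room temp.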

Consequently, density of states for a zero-dimensional electron gas
(for a QD) is a series of δ functions at the allowed confinement-state
energies,

D(E) = Σ_{E_n} δ(E − E_n)

D(E) has discrete (nonzero) values only at discrete energies. Discrete


values of D(E) produce sharp absorption & emission spectra for QDs,
even at room temp.
 QDs have a large surface-to-volume ratio; the fraction of atoms at the
surface can be as much as 20%. Consequence: surface-related phenomena.

 Strong confinement regime: when size of QD (e.g., radius
R of a spherical dot) is smaller than exciton Bohr radius aB.

 Energy separation between sub-bands (various quantized levels of


electrons & holes) is much larger than exciton binding energy.

 Electrons & holes are largely represented by energy states of their


respective sub-bands.

 As QD size increases, energy separation between various sub-bands


becomes comparable to & eventually less than exciton binding
energy. Latter represents case of a weak confinement regime where
size of quantum dot is much larger than exciton Bohr radius.

 Electron-hole binding energy in this case is nearly same as in bulk


semiconductor.
Quantum Dots in Displays
P. Palomaki, Photonics Spectra 52 (May 2018) 42-47.

Composition & function of a QD


Spectral output of blue LED + YAG phosphor (top row) compared to blue LED +
QDs (bottom row). Spectra generated by the phosphor/QDs are then passed
through a color filter to generate the final LCD spectrum. QD down-conversion
approach produces a larger color gamut with less wasted light.
Comparison of CdSe & InP QD technologies
Summary of successful current implementation & next-generation implementation
methods of QDs in displays.
PH 301
ENGINEERING OPTICS
Lecture_Display Devices-2_34
Organic LED Display
 It is an LED, in which emissive electroluminescent layer is a film of
organic compound that emits light in response to an electric current.

 Organic layer is situated between two electrodes; at least one of these


electrodes is transparent.

 Uses: TV screens, computer monitors, smartphones, handheld game


consoles, etc.

 A major area of research is the development of white OLED devices for


use in solid state lighting applications.

 There are two main families of OLED:


- based on small molecules &
- employing polymers.

 Adding mobile ions to an OLED creates a light emitting


electrochemical cell which has a slightly different mode of operation.
 OLED display - driven with a passive-matrix (PMOLED) or active-
matrix (AMOLED) control scheme.

 PMOLED scheme: each row (& line) in display is controlled


sequentially, one by one.

 AMOLED scheme: uses a thin-film transistor backplane to directly
access & switch each individual pixel on or off (higher resolution &
larger display).

 It works without a backlight because it emits visible light.

 In low ambient light conditions (such as a dark room), an OLED


screen can achieve a higher contrast ratio than an LCD, regardless of
whether LCD uses cold cathode fluorescent lamps or an LED
backlight.

OLED lighting panels


TV Goes 8K
Better imaging: Medicine & Entertainment

 1950s: TV signal transmission with vacuum-tube technology

 2000s: First full high-definition television (HDTV) format with


resolutions of 1920 × 1080 pixels.

 Doubling resolution: 3840 × 2160 pixels (4K) & adding high dynamic
range (HDR)

 New generation: 7680 × 4320 pixels (8K)

How much of that enhanced resolution can the human eye see?

Measure optical acuity!

[Jeff Hecht, Opt. Phot. News 31 (May 2020) 40-47]


From CRT to Flat Panels

 TV became a mass medium after World War II (Black & White – until
mid 1960s).

 Color TV: US was leader with Federal Communications Commission


adopting National Television System Committee (NTSC) color
standards in 1953.

 NTSC (6 MHz) broadcast thirty 525-line frames every second,


although CRT showed only 486 lines.

 Countries with 50 Hz power grids adopted PAL (Phase Alternating


Line) & SECAM standards, which transmitted 625-line frames 25
times a second.
From CRT to Flat Panels
 Color broadcast equipment & sets were expensive, & color quality
was almost an oxymoron. Colors drifted so much that sets came
with color tuning knobs – not that they stopped mud-covered
football fields from appearing a cartoonish bright green on screen.

Critics dubbed NTSC “Never The Same Color” or “Not True


Skin Colors”.

Color quality improved after introduction of solid-state electronics in


1970s, but CRT technology remained a bottleneck.

Cost, bulk, & weight of CRT TVs scaled steeply with screen size,
making direct-view CRT screens larger than 35 to 40 inches impractical
even in 1990s.
Larger size required projecting bright image from a small screen CRT
or other display onto a large translucent or reflective screen. Rear-
projection sets became popular in early 2000s, & eventually reached
sizes up to 100 inches (2.5 m), but their screen brightness remained
limited.
Digital Transition
 1996: Screen formats: 1280 × 720 pixels (HD)
1920 × 1080 pixels (Full HD)

Both formats used 16:9 screen shape

Digital TV went on sale in August 1998 in US.


Price: US$ 5,000 (55 inch screen) to US$ 12,000 (72 inch screen)

 Doubled resolution: 3840 × 2160 pixels (Ultra HD) – 4K


4K + HDR (High Dynamic Range)

HDR increases the range of color brightness, from blacker blacks to


higher-intensity colors, so images seem much more vivid, especially
those chosen to highlight the effect in showroom.

 Next Gen: 7680 × 4320 pixels (8K) + HDR


YEAR

Seven-year cycle of TV resolution upgrades: from 525-line standard definition


(SD) to High-definition (HD), Full HD, Ultra-high definition (UHD) & 8K.
 Screen improvement: enhancing range of colors

 For LCD displays, this is done by changing backlight that illuminates


the arrays of tiny liquid crystal light modulators that create image.

 Early LCD displays were illuminated uniformly by lamps.

 White-light LEDs offered an improvement in backlight quality


because they can be driven individually to change the distribution of
light across screen, giving darker blacks & brighter colors.

 Next upgrade came from switching to arrays of red, green, & blue
LEDs, which produce purer colors that are closer to edges of
chromaticity diagram, increasing range of color gamut.

 Manufacturers describe LCDs with LED backlighting as “LED”


displays, although it’s the LCDs that are modulating light.

 Next Gen is coating blue LEDs with films containing quantum dots,
which produce very narrow slices of red & green spectrum.
Chromaticity diagram showing the difference between full HD, 4K, & 8K color
gamuts, with 8K (yellow triangle) offering broadest color range.
8K Generation

8K screens display a resolution of


7680 × 4320 pixels for a total of 33.18
million pixels, resulting in images
that are 16 times more detailed than
full HD.
8K Generation
 8K displays: latest generation, combines HDR & expanded color
gamut with screens displaying 7680 × 4320 pixels, four times the number on a
4K screen & 16 times the number on a full HDTV. Pixels are so close together
that images look smooth & lose the pixelated effect.

 8K electronics use artificial intelligence to classify parts of image as


hair, sky, faces or other things, & then apply appropriate upscaling
algorithms.

 HDR – natural documentary

 Conventional metric for human visual acuity would suggest no


difference should be visible between 4K & 8K.

 In 1862, Dutch ophthalmologist Herman Snellen designed the now-


standard eye chart so that, when viewed at proper distance, letters
would subtend an angle of five arcminutes, & lines in letters would be
one arcminute wide.

One arcminute remains standard definition of optical acuity.


8K Generation
 To compare eye’s resolution on chart to the display of pixels on a
screen, assume viewer sits at a distance where screen subtends a
45° angle in horizontal field.

 For a modern 16:9 display, that’s about 2.5 times the screen height,
or 6.6 feet from a 65-inch screen. The 45° field of view equals 2700
arcminutes, so if the eye’s resolution is one arcminute, it could
resolve 2700 pixels on that screen, between 1920 pixel width of full
HDTV & 3840 pixel width of 4K.
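The viewing-geometry arithmetic above can be checked directly. A sketch using the slide's own figures (65-inch 16:9 screen, 45° horizontal field, distance of 2.5 screen heights):

```python
import math

# Viewing-geometry check: 16:9, 65-inch screen viewed from ~2.5x screen height,
# so that the width subtends roughly a 45-degree horizontal field.
diagonal_in = 65.0
width_in = diagonal_in * 16 / math.hypot(16, 9)   # ~56.7 in
height_in = diagonal_in * 9 / math.hypot(16, 9)   # ~31.9 in

distance_ft = 2.5 * height_in / 12                # slide's rule of thumb
field_arcmin = 45 * 60                            # 45-degree field in arcminutes

print(f"viewer at ~{distance_ft:.1f} ft; 45 deg = {field_arcmin} arcmin")
# At 1-arcmin acuity that is ~2700 resolvable pixels across the screen,
# between full HD's 1920 & 4K's 3840 pixel widths, as the slide states.
```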

 So if visual acuity was that simple, our eyes could not tell the
difference between 4K & 8K screens.

 It is based on “a line of assumptions”. Eye-chart resolution “is all


based on seeing letters that are black & white targets, with sharp
edges & no gray scale”. TV shows continually changing color
images of various brightness.

TV screen is very different from an eye chart.


 Ocular acuity varies across field of view.

 Acuity has a sharp peak in central fovea, where photosensors are


packed most closely, & drops off on all sides.

 Vision in central fovea is limited by optics of eye because each


neuron carries input from one photosensor.

 However, each neuron on periphery of eye receives input from many


photosensors, so peripheral vision is limited by neural processing,
which throws away spatial information needed for high-resolution
vision by mixing multiple inputs.

 Hyperacuity: Sometimes our eyes can see anomalies in shape that


are beyond our normal visual acuity.
Examples of hyperacuity

 Vernier effect: we can spot a slight offset in straight lines that is
only a fraction of their width. This is how a Vernier scale works, letting us
measure distances that subtend angles smaller than the usual one
arcminute.

 Stair-step or “jaggie effect”: we can see pixels along diagonal


lines.

 Moire patterns: wave-like patterns appear when two window


screens or very fine meshes appear to cross at a slight angle.

 Our eyes need screens with at least four pixels per arcminute – as in
smartphone “retina” screens – to avoid seeing pixelation.
(Left) Tennis court image shows examples of hyperacuity: anomalies include
jagged white lines & wavelike moire patterns in the net. (Right) A higher-resolution
image showing smoother lines.
LG’s 88-inch 8K OLED set
PH 301
ENGINEERING OPTICS
Lecture_Consumer Devices_32
Nexon Computer Museum
Jeju Island, South Korea
Compact Discs & Digital Versatile Discs

 Data is stored digitally


A series of ones & zeros read by laser light reflected from
disc

 Strong reflections correspond to constructive interference


These reflections are chosen to represent zeros

 Weak reflections correspond to destructive interference


These reflections are chosen to represent ones
CDs & Thin Film Interference

 A CD has multiple tracks


Tracks consist of a sequence of pits of varying length formed in a
reflecting information layer

 Pits appear as bumps to laser beam


Laser beam shines on metallic layer through a clear plastic coating
Tracks of a CD act as a diffraction grating, producing a separation of colors
of white light. Nominal track separation on a CD is 1.6 μm, corresponding to
about 625 tracks per mm. For λ = 600 nm, this would give a first order
diffraction maximum at about 22º.
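The grating numbers quoted above follow from the grating equation d sinθ = mλ. A quick check:

```python
import math

# Grating check with the CD figures quoted above: d * sin(theta) = m * lambda.
d = 1.6e-6            # nominal track separation (m)
wavelength = 600e-9   # illustrative wavelength (m)

tracks_per_mm = 1e-3 / d                               # grooves per millimetre
theta_1 = math.degrees(math.asin(1 * wavelength / d))  # first-order maximum

print(f"{tracks_per_mm:.0f} tracks/mm; first-order maximum at {theta_1:.1f} deg")
```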
Reading a CD
 As disk rotates, laser reflects off sequence of bumps & lower areas
into a photodetector.
– Photodetector converts fluctuating reflected light intensity into an
electrical string of zeros & ones

 Pit depth is made equal to one-quarter of wavelength of light.


Reading a CD

 When laser beam hits a rising or falling bump edge, part of the beam
reflects from top of bump & part from lower adjacent area.
This ensures destructive interference & very low intensity when
reflected beams combine at the detector

 Bump edges are read as ones.

 Flat bump tops & intervening flat plains are read as zeros.
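The quarter-wave depth can be made concrete. A sketch assuming a 780 nm CD read laser and a polycarbonate refractive index of about 1.55 (the index is an assumption, not stated in the slides):

```python
# Quarter-wave pit depth: light reflected from a bump top & from the adjacent
# land acquires a lambda/2 round-trip path difference -> destructive interference.
# Assumes a 780 nm CD laser & substrate index 1.55 (assumed value).
wavelength = 780e-9   # read-laser wavelength in vacuum (m)
n = 1.55              # refractive index of the disc substrate (assumption)

depth = wavelength / (4 * n)   # quarter wavelength *inside* the plastic, ~126 nm
round_trip = 2 * depth * n     # optical path difference, vacuum-equivalent

print(f"pit depth ~{depth * 1e9:.0f} nm; round-trip path difference "
      f"{round_trip * 1e9:.0f} nm (= lambda/2)")
```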
DVDs
 DVD: developed by Panasonic, Philips, Sony, & Toshiba in 1995.

 Laser & Optics:


 DVDs use shorter-wavelength lasers

 Track separation, pit depth, & minimum pit length are all smaller.

 Therefore, DVD can store about 30 times more information than a CD.
Laser & Optics
 All three common optical disc media (CD, DVD, & Blu-ray) use light
from laser diodes, for its spectral purity & ability to be focused
precisely.

 DVD uses light of 650 nm wavelength (red), as opposed to 780 nm


for CD. This shorter wavelength allows a smaller pit on the media
surface compared to CDs (0.74 µm for DVD versus 1.6 µm for CD),
accounting in part for DVD's increased storage capacity.

 Blu-ray Disc, successor to DVD format, uses a wavelength of 405 nm


(violet), & one dual-layer disc has a 50 GB storage capacity.
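A back-of-envelope areal-density comparison from the geometry above. The track pitches are the slide's figures; the minimum pit lengths (0.83 µm for CD, 0.40 µm for DVD) are assumed values added for illustration:

```python
# Rough areal-density gain of DVD over CD from feature geometry.
# Track pitches from the text; minimum pit lengths are assumed values.
cd_pitch_um, dvd_pitch_um = 1.6, 0.74   # track separation (um)
cd_pit_um, dvd_pit_um = 0.83, 0.40      # minimum pit length (um; assumption)

per_layer = (cd_pitch_um / dvd_pitch_um) * (cd_pit_um / dvd_pit_um)
print(f"~{per_layer:.1f}x denser per layer")
```

Geometry alone gives a few-fold gain per layer; dual-layer and double-sided formats, plus more efficient channel coding, multiply this further.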
Laser Printer
Gary Starkweather, 1969

 Printers are classified into Impact & Non-impact printers.

 Laser printers use electrostatic printing process. It is a type of


printer that utilizes a laser beam to produce an image on drum.

 It uses a non-impact (keys don’t strike the paper), photocopier


technology.

 IBM introduced the 1st commercial laser printer in 1975, for use with


mainframe computers.

 In 1984, Hewlett-Packard (HP) revolutionized laser printing


technology with its 1st LaserJet, a compact, fast, & reliable printer.
Laser Printing Process

Cleaning
Before a new page is printed, any toner remaining
from the previous page is cleared away. The
drum is swept free with a rubber blade, & a
fluorescent lamp removes any electrical
charge remaining on the drum.

Conditioning
Entire drum is uniformly charged by primary
corona wire. This charge conditions the drum
for next step.
Laser Printing Process

1. Millions of bytes (characters) of data stream into the printer from the PC.

2. An electronic circuit in printer (effectively, a small computer in its own right)


figures out how to print this data so it looks correct on the page.
Laser Printing Process
3. Electronic circuit activates corona wire (a high-voltage wire that
gives a static electric charge to anything nearby).

4. Corona wire charges up photoreceptor drum so drum gains a + ve


charge spread uniformly across its surface.

5. Simultaneously, circuit activates laser to make it draw image of page


onto the drum. Laser beam doesn't actually move; it bounces off a
moving mirror that scans it over drum. Where laser beam hits drum,
it erases +ve charge & creates an area of -ve charge. Gradually, an
image of entire page builds up on drum: where page should be
white, there are areas with a +ve charge; where page should be
black, there are areas of -ve charge.

6. An ink roller touching photoreceptor drum coats it with tiny particles


of powdered ink (toner). Toner has been given a +ve charge, so it
sticks to parts of photoreceptor drum that have a –ve charge. No ink
is attracted to parts of drum that have a +ve charge. An inked image
of page builds up on drum.
Laser Printing Process

7. A sheet of paper from a hopper on the other side of printer feeds up


toward the drum. As it moves along, paper is given a strong -ve
charge by another corona wire.

8. When paper moves near drum, its -ve charge attracts the positively
charged toner particles. Image is transferred from drum onto paper
but, for the moment, toner particles are just resting lightly on
paper's surface.

9. Inked paper passes through two hot rollers (fuser unit). Heat &
pressure from rollers fuse the toner particles permanently into
fibers of paper.

10. Printout emerges from the side of the printer. Thanks to the fuser unit,
paper is still warm. It's literally hot off the press!
Quick Response codes

A QR code...

 is a matrix or a 2D barcode, first designed for the


automotive industry (1994) to hold information.

 can hold up to 7,089 characters whereas a typical


barcode can only hold a maximum of 20 digits.

 uses four standardized encoding modes (numeric,


alphanumeric, byte/binary, & kanji) to store data.
Working of a QR code

 Consists of black squares arranged in a square grid on a white


background.

 Read by an imaging device such as a digital camera.

 Processed using Reed-Solomon error correction until the image


can be appropriately interpreted.
Structure of a QR code
Preliminaries of a QR code
Capacity of a QR code depends on the version, error correction level, &
type of encoded data.

Versions
Depends on sizes of QR code:
 21 × 21 pixel size is version 1,
 25 × 25 pixel size is version 2, & so on
 177 × 177 size is version 40

Data modes
QR code can encode in four data modes:
 numeric (1 2 3 4 5 …)
 alphanumeric (8 9 a b c d e …)
 binary (0 1)
 Japanese (kanji)

Error correction
QR codes include error correction:
 L, allows the code to be read even if 7% of it is unreadable.
 M, provides 15% error correction,
 Q, provides 25%,
 H, provides 30%.
Number of symbol characters & input data
capacity for QR Code
Version  EC level  Data code words  Data bits  Numeric  Alphanumeric  Byte  Kanji

1 L 19 152 41 25 17 10
M 16 128 34 20 14 8
Q 13 104 27 16 11 7
H 9 72 17 10 7 4

2 L 34 272 77 47 32 20
M 28 224 63 38 26 16
Q 22 176 48 29 20 12
H 16 128 34 20 14 8

3 L 55 440 127 77 53 32
M 44 352 101 61 42 26
Q 34 272 77 47 32 20
H 26 208 58 35 24 15
Error correction characteristics for QR Code

Version  Total code words  EC level  EC code words  Value of p  EC blocks  EC code per block (c,k,r)

1 26 L 7 3 1 (26,19,2)
M 10 2 1 (26,16,4)
Q 13 1 1 (26,13,6)
H 17 1 1 (26,9,8)

2 44 L 10 2 1 (44,34,4)
M 16 0 1 (44,28,8)
Q 22 0 1 (44,22,11)
H 28 0 1 (44,16,14)

3 70 L 15 1 1 (70,55,7)
M 26 0 1 (70,44,13)
Q 36 0 2 (35,17,9)
H 44 0 2 (35,13,11)

c = total number of code words, k = number of data code words, r = error correction capacity
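The (c, k, r) entries are internally consistent: the number of error-correction code words is c − k, and the capacity satisfies r = (c − k − p)/2 once p code words are set aside for misdecode protection. A quick cross-check of the version-1 and version-2 rows:

```python
# Consistency check of the table above: EC code words = c - k, and
# r = (c - k - p) // 2, with p code words reserved for misdecode protection.
rows = [  # (c, k, r, p) from the version-1 & version-2 lines
    (26, 19, 2, 3), (26, 16, 4, 2), (26, 13, 6, 1), (26, 9, 8, 1),
    (44, 34, 4, 2), (44, 28, 8, 0), (44, 22, 11, 0), (44, 16, 14, 0),
]
for c, k, r, p in rows:
    assert r == (c - k - p) // 2, (c, k, r, p)
print("all rows consistent")
```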
Data bits required for a particular data mode
Version 1-9 10-26 27-40
Numeric (bits) 10 12 14

Alphanumeric 9 11 13
(bits)

Binary (bits) 8 16 16
Japanese (bits) 8 10 12

Bit strings for a particular data mode


Bit string Data mode

0001 Numeric mode

0010 Alphanumeric mode

0100 Binary mode

1000 Japanese mode
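How the mode indicator, character-count bits, and packed data combine can be sketched for numeric mode. This assumes versions 1-9, so a 10-bit count field per the table above, with digits packed in groups of 3/2/1 into 10/7/4 bits; the helper name is hypothetical:

```python
# Sketch of QR numeric-mode bit assembly (versions 1-9: 10-bit count field,
# per the tables above). Helper name is hypothetical, not from a library.
def numeric_segment(digits: str) -> str:
    bits = "0001"                                # numeric-mode indicator
    bits += format(len(digits), "010b")          # 10-bit character count
    for i in range(0, len(digits), 3):           # digits packed three at a time
        group = digits[i:i + 3]
        width = {3: 10, 2: 7, 1: 4}[len(group)]  # 10/7/4 bits per group
        bits += format(int(group), f"0{width}b")
    return bits

print(numeric_segment("12345"))
```

The result for "12345" is the 4-bit mode indicator, the 10-bit count (5), then "123" in 10 bits and "45" in 7 bits.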
