www.vision-systems.com
January 2018 | VOL. 23 | NO. 1
Vision and Automation Solutions for Engineers and Integrators Worldwide
A PennWell Publication

LED strobe controllers: When should you invest
Embedded camera: Brings 3D mapping to tunnel boring machines
High-speed cameras: Target machine vision applications
Deep learning: Enhances machine vision
Cover Story
A universal, pre-trained classifier based on deep learning algorithms enables identification of a wide range of OCR fonts. (See page 15.)

Features

15 INTEGRATION INSIGHTS
How deep learning is enhancing machine vision
Developers increasingly apply deep learning and artificial neural networks to improve object detection and classification.
Johannes Hiltner

18 INTEGRATION INSIGHTS
When to invest in LED strobe controllers
LED strobe controllers offer many benefits for applications involving low contrast, motion, mark quality assessment, OCR and ambient lighting issues.
Alan L. Lockard

21 INDUSTRY SOLUTIONS PROFILE
Robert Wenighofer
Andrew Wilson

Departments

3 My View
4 Online@vision-systems.com
Read the latest news from our website
5 Snapshots
9 Technology Trends
ELECTRONICS INSPECTION: Infrared imaging microscopes aid in detecting integrated-circuit defects
DEEP-SEA IMAGING: Imaging system monitors for pollution and coral health in the Bay of Biscay
3D IMAGING: 3D imaging enables high-speed precision bore hole inspection
29 Vision+Automation Products
32 Ad Index/Sales Offices
online @ www.vision-systems.com

3D Time of Flight company Odos Imaging acquired by Rockwell Automation
Odos Imaging, a developer of 3D Time of Flight cameras for industrial imaging applications, has been acquired by Rockwell Automation, Inc. http://bit.ly/VSD-1801-01

Euresys acquires OEM imaging IP and hardware company Sensor to Image
Euresys has announced the acquisition of Sensor to Image GmbH, a company specializing in OEM imaging IP and hardware vision products, including cameras and frame grabbers. http://bit.ly/VSD-1801-02

10 ways deep learning technology is being deployed today
As the demands for speed and accuracy in machine vision and image processing applications grow, so too must the software that is used in them. http://bit.ly/VSD-1801-03

Lucid Vision Labs machine vision camera company founded by industry veteran
Lucid Vision Labs, Inc., a company that designs and manufactures machine vision cameras and components, has been founded by Rod Barman, former Founder, President and VP of Engineering at Point Grey Research Inc. http://bit.ly/VSD-1801-04

Machine vision standard directs on evaluation of image processing systems
A new standard puts forward procedures for assessing classificatory performance during the acceptance of machine vision systems, while also providing examples from industrial inspection technology, which serve as guidance. http://bit.ly/VSD-1801-05

High-speed imaging sheds light on secrets of flight
Using high-speed imaging and computational fluid dynamics, the unique mechanisms involved in mosquito flight have been revealed. http://bit.ly/VSD-1801-06
Fully self-driving
cars are here,
says Waymo
Waymo’s (Mountain View, CA, USA; www.waymo.com) fully self-
driving vehicles are now test driving in fully autonomous mode on
public roads without a driver after having worked on the technology
for more than eight years.
A subset of vehicles from Waymo—formerly the Google self-driv-
ing car project—will operate in fully autonomous mode in the Phoe-
nix metro region. Eventually, the fleet will cover a region larger than
the size of Greater London, with more vehicles being added over time.
Waymo explains in a press release that it has prepared for this next phase by putting its vehicles through the world's longest and toughest ongoing driving test.

"Since we began as a Google project in 2009, we've driven more than 3.5 million autonomous miles on public roads across 20 U.S. cities. At our private test track, we've run more than 20,000 individual scenario tests, practicing rare and unusual cases," wrote Waymo via Medium. "We've multiplied all this real world experience in simulation, where our software drives more than 10 million miles every day. In short: we're building our vehicles to be the most experienced driver on the road."

Waymo's vehicles, according to the company, are equipped with the necessary features, including sensors and (continued on page 6)
The system automatically generates and presents the operator with a display of the four bores, along with an evaluation of crucial measurement criteria.

continued from page 9

Vuk Bartulovic, President of Novacam, says, "By the time they come to us, many of our clients have spent a long time looking for a way to measure, in a non-contact manner, internal diameters of bores, cylinders, and other such hard-to-reach spaces. These bore diameters range from a few millimeters to as large as truck engine cylinder bores. Typically, they want to acquire full internal diameter dimensions, detect defects, and measure roughness. It is a revelation to them that an instrument exists that can provide all this functionality … and that micron-precision data on their bore ID can be acquired in a matter of seconds and in a non-contact manner."

The system has been deployed for automated 3D metrology of valve-body assembly bores. Following a user-created inspection sequence, all four bores in the valve body are automatically scanned, and the resultant 3D surface point cloud analyzed and displayed using PolyWorks from InnovMetric (Québec, QC, Canada; www.innovmetric.com), third-party 3D data visualization and metrology software. According to user criteria, selected aspects of each bore are compared to actual measurement data and non-compliance visually highlighted.

Comprised of three main hardware components, BoreInspect includes a rotational scanner that spins a non-contact optical probe while a robot or motion stage advances it into the bore hole. The side-facing probe directs a low-coherence light beam at the surface and collects reflected light signals. The second main component, a profilometer, provides the light source to the rotational scanner via a fiber-optic cable and processes the optical signal received from the scanner. Finally, a PC provides the user interface for data acquisition control and data-analysis software. It also typically houses client-selected 3D-visualization and metrology software.

To help manufacturers gain insight into the process, BoreInspect can be configured to provide deviation maps highlighting the deviation of the measured bore dimensions from the CAD model, to reveal micron-level surface defects such as this 19.7-micrometer deep depression.

High-density point clouds generated from its scans enable rapid and detailed characterization of inner surfaces and features as well as defect detection. During operation, the scanner spins the probe at 1,800 rpm, acquiring 30,000 3D surface measurements per second. In just over 3 seconds, 100,000 surface measurements on the inside of the bore are collected, providing a high-fidelity 3D model of the bore ID.

For the automated valve body bore metrology application, the rotational scanner enters each of the four valve body bores in an automated sequence, and the system automatically generates and presents the operator with a display of the four bores, along with an evaluation of crucial measurement criteria. A deviation map, highlighting the deviation of the measured bore dimensions from the CAD model, helps reveal micron-level surface defects.

"Regardless of the bore diameter, the accuracy of the non-contact 3D measurements is the same," explains Bartulovic. "Defining scanning sequences is easy and each sequence may be saved for later recall and execution. While the user may manipulate and analyze the 3D point cloud data in an interactive manner, the BoreInspect system also generates automated reports."

The system is suitable for integration into automated continuous-flow manufacturing, according to Bartulovic. "Its compact, modular, and rugged design facilitates integration, even in high-throughput industrial metrology applications that would have the scanner mounted on a stage suitable for the application, such as a robot arm, gantry, or motor-controlled stage, and the optical fiber connection to the system's interferometer can be hundreds of meters long."

Currently, Novacam's rotational scanners feature probes with diameters between 1 and 18 mm. Standard probes inspect bores with diameters ranging between 6 and 28 mm; however, Novacam builds custom systems for bores with diameters less than 2 mm and from 30-222 mm.

Along with a 3D point cloud, 3D height data (top) and a light intensity image (bottom) can be obtained and used for defect detection. (Panel labels: a) Defects, Cross-holes, Bore hole circumference (15.5 mm); b) Defect, Cross-holes.)
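The quoted scan rates are easy to sanity-check. A short Python sketch (constant names are mine) reproduces the article's throughput figures:

```python
# Scan-throughput arithmetic for the rotational scanner described above:
# 1,800 rpm probe, 30,000 measurements per second, 100,000 points per bore.
RPM = 1800
MEAS_PER_SEC = 30_000
TOTAL_MEAS = 100_000

revs_per_sec = RPM / 60                     # 30 revolutions per second
meas_per_rev = MEAS_PER_SEC / revs_per_sec  # circumferential sampling density
scan_time_s = TOTAL_MEAS / MEAS_PER_SEC     # time to cover one bore

print(f"{meas_per_rev:.0f} measurements per revolution")
print(f"{scan_time_s:.2f} s per bore scan")
```

The result, 1,000 measurements per revolution and roughly 3.3 s per bore, matches the "just over 3 seconds" cited in the text.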
continued from page 9

…that are built in and over the thinly sliced silicon wafer, the wafer ultimately undergoes a multitude of microfabrication process steps such as masking, etching, doping, and metallization.

Delamination at 20× magnification and focus at the bottom of silicon

ICs, which have become the main component of nearly all electronic devices, are comprised of a microscopic array of electronic circuits and components that has been implanted onto the surface of a single crystal of semiconductor material such as silicon. Within the integrated circuit, components, circuits, and base materials are all made together out of a single piece of silicon wafer. Hundreds of integrated circuits are made at the same time on a single, thin slice of silicon wafer, and then they are cut apart into individual IC chips.

Delamination at 20× magnification and focus at the middle of silicon

Wafers accumulate residual stress during the growth, sawing, grinding, etching and polishing process, and cracks may be generated throughout all of these processes. Cracks may also occur when the integrated circuits themselves are cut into individual ICs. If undetected, such defects can render wafers unusable in subsequent manufacturing stages. As a result, it is important to inspect the raw material substrate for impurities before processing and to detect any defects during processing, in order to reduce waste and keep costs down.

By using indium gallium arsenide (InGaAs) cameras operating in the short-wave infrared (SWIR) wavelength band from 0.9 µm to 1.7 µm, it is possible to see through semiconductor silicon substrates. This ability to see through the material enables manufacturers to generate infrared images that will highlight defects such as cracks within the silicon wafer.

InGaAs cameras operating in the SWIR wavelength band from 0.9 µm to 1.7 µm acquire images that highlight defects within the silicon wafer.

Radiant Optronics—a company that has a strong focus in the semiconductor field, wafer fabrication, service laboratories, packaging, and PCB assembly in Asia—has developed an infrared imaging microscope that IC manufacturers can use to inspect for internal defects and cracks.

The microscope utilizes a Goldeye G-008 SWIR camera from Allied Vision (Stadtroda, Germany; www.alliedvision.com), which features a 320 x 256 InGaAs sensor with a 30 µm pixel size and is sensitive in the short-wave infrared spectrum from 0.9 µm to 1.7 µm at up to 344 fps. The SWIR camera—which is thermo-electrically cooled for low-noise images—has a GigE Vision interface, a compact form factor (55 mm x 55 mm x 78 mm), as well as Allied Vision's Vimba software development kit, which enables users and developers to program their own applications across Windows and Linux platforms.

"The Goldeye G-008 is a low-resolution model with an affordable price. The low resolution is sufficient to detect the defects, so it is the ideal choice for this cost-sensitive application. Our customers can benefit from the Goldeye's outstanding performance and we enjoy support from Allied Vision's Asia-Pacific office just next door in Singapore," explained Christopher Cheong, Director of Radiant Optronics.
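Silicon becomes transparent to photons whose energy is below its band gap, which is why SWIR light passes through the substrate while visible light does not. A quick check of where that cutoff falls (the band-gap value is standard textbook data, not from the article):

```python
# Wavelength corresponding to silicon's band-gap energy: photons with longer
# wavelengths lack the energy to excite electrons across the gap, so silicon
# transmits them. (E_GAP_SI is a textbook value, not taken from the article.)
HC_EV_NM = 1239.84   # Planck constant x speed of light, in eV*nm
E_GAP_SI = 1.12      # silicon band gap at room temperature, in eV

cutoff_nm = HC_EV_NM / E_GAP_SI
print(f"silicon transmission cutoff: {cutoff_nm:.0f} nm")
```

The cutoff lands near 1107 nm, so the 0.9-1.7 µm sensitivity band of an InGaAs sensor covers the region in which silicon is transparent.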
Johannes Hiltner
Up to 100,000 comparison images may be needed for each class in order to achieve adequate recognition rates. Even if the necessary sample data is available, the training process takes up an enormous amount of time. Usually, the programming work for identifying different defect classes during fault inspection is extremely complex, too. The reason is that highly skilled employees with suitable training are required for this purpose.

Modern machine vision solutions, which already include a large number of deep learning functions, can help. The new version 17.12 of the standard software MVTec HALCON enables companies to train convolutional neural networks (CNNs) themselves without a great deal of time and money. After all, the software is already equipped with two networks that are optimally pre-trained for industrial use: one is optimized for speed and the other for maximum recognition rates.

The training process therefore works with only a few sample images provided by the customer and is tailored to the customer's exact applications, resulting in neural networks that can be precisely matched to the customer's specific requirements.

User companies can significantly reduce the amount of programming work needed by easily and systematically classifying new image data, saving time and money. Normally, they do not need any in-depth AI expertise; companies can use their existing personnel to train the network.

Detect defects efficiently
Recognizing defects is a time-consuming process because the appearance of defects, such as tiny scratches on an electronic part, can never be accurately described in advance. Therefore, it is very difficult to manually develop suitable algorithms that can detect any conceivable fault based on sample images. An expert would have to manually view hundreds of thousands of images and program an algorithm that describes the error as precisely as possible based on his observations. This would simply take too long.

Deep learning technologies and CNNs, on the other hand, can independently learn certain characteristics of defects and precisely define the corresponding problem classes. So, only 500 sample images are needed for each class, based on which the technology trains, verifies, and thereby precisely detects the different types of defects.

This process takes only a few hours. Not only does it minimize the amount of time required, but the recognition rates are also much higher than with manually programmed defect classes. The self-learning algorithms therefore help to significantly reduce identification errors, while the error quotas for manual programming can be inefficiently high.

Many industries benefit
…and other components are reliably identified, which allows the removal of corresponding parts to be automated.

The food and beverage industry benefits from deep learning technologies, too. For example, poor-quality fruits and vegetables can be detected more precisely before they are packaged or further processed.

The processes are also used in automotive engineering. This industry, in particular, is characterized by an especially high degree of automation. Here, for example, self-learning algorithms identify tiny paint defects that are not visible to the naked eye.

Another important area of application is pharmaceuticals. Pills often look very similar on the outside but contain entirely different active substances. Through deep learning and CNNs, the drugs can be very reliably identified, inspected, and distinguished from each other so they are always placed in the correct blister packs.

Conclusion
Technologies based on artificial intelligence, such as deep learning and CNNs, are an important part of modern machine vision solutions today. Deep learning enables companies to train neural networks themselves without any in-depth expert knowledge and with minimal effort, especially when programming defect classes during error inspections. The result is that companies save money and benefit from much more robust recognition rates as well as better classification results.
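The reason a pre-trained network needs only a few hundred samples rather than 100,000 is transfer learning: the feature-extraction layers stay fixed and only a small classifier head is retrained. HALCON's internals are not shown here; the following is a minimal, self-contained Python sketch of that idea, with a frozen random projection standing in for the pre-trained convolutional layers and synthetic data standing in for defect images:

```python
import math
import random

random.seed(0)
DIM, FEAT = 8, 6

# "Pre-trained" feature extractor: a frozen projection standing in for the
# convolutional layers of a pre-trained network. Its weights are never updated.
W_frozen = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(FEAT)]

def extract(x):
    return [math.tanh(sum(wi * xi for wi, xi in zip(row, x))) for row in W_frozen]

# A modest set of synthetic labeled samples ("good" vs. "defect").
samples = []
for _ in range(200):
    x = [random.gauss(0, 1) for _ in range(DIM)]
    label = 1 if x[0] + 0.5 * x[1] > 0 else 0   # synthetic ground truth
    samples.append((extract(x), label))

# Train only the small classifier head (logistic regression) on frozen features.
w, b, lr = [0.0] * FEAT, 0.0, 0.5
for _ in range(300):
    gw, gb = [0.0] * FEAT, 0.0
    for f, label in samples:
        z = sum(wi * fi for wi, fi in zip(w, f)) + b
        z = max(-30.0, min(30.0, z))            # clamp to keep exp() stable
        p = 1 / (1 + math.exp(-z))
        for i in range(FEAT):
            gw[i] += (p - label) * f[i]
        gb += p - label
    for i in range(FEAT):
        w[i] -= lr * gw[i] / len(samples)
    b -= lr * gb / len(samples)

acc = sum(
    (sum(wi * fi for wi, fi in zip(w, f)) + b > 0) == (label == 1)
    for f, label in samples
) / len(samples)
print(f"training accuracy: {acc:.2f}")
```

Only the handful of head parameters are fitted, which is why a few hundred samples suffice; the heavy lifting was done when the frozen layers were originally trained.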
Alan L. Lockard

After many years as an end user designing and installing machine vision inspection systems without adding a strobe controller, it has become clear that the benefits of LED lighting controllers far outweigh the cost in applications that involve:
• Low contrast
• Inspection and motion
• Bar code grading and mark quality assessment
• OCR
• Ambient lighting

[Figure: LED forward current, IF = f(VF); TA = 25°C.]

…many new LEDs, as they heat up from 25°C to 90°C, the brightness of the LEDs can drop by as much as 40%.

This reduction in output intensity may begin manifesting itself by means of false rejects. Eventually, adjustments to either the lighting output or the camera system will be required to restore optimal performance. A quality LED lighting controller can pro…

…f-stop doubles the amount of light gathered by the lens; it also reduces the depth of field (DOF), which may be undesirable.

Continuous lighting is limited to what light output is available at 100% power. The only way to increase the intensity of continuous lighting is to increase the size and wattage of the LED array. This also increases initial expense and operating costs and produces undesirable heat.

Pulsing or strobing the LED array, especially overdriving it, provides high-intensity illumination at a short duty cycle. This complements the short-exposure imaging used for moving objects, and overdriving the array has no detrimental impact on the lifespan or performance of the array (Figure 2).

Figure 2: Converting a constant illumination system to pulse illumination is straightforward. The trigger for the camera is sent to a lighting controller. The controller provides precise pulse width timing, power and brightness control for the lighting pulse. This ensures that the lighting pulses during the camera exposure time and that the light energy is the same for every image.

Barcode grading (verifying) and OCR
Inspection of barcodes at the point of printing helps to ensure reliable readability in the field. Referred to as verifying or grading, there are a variety of recognized standards in use today. All compare numerous physical attributes, each scored individually and combined into an overall print quality grade.

Attributes such as symbol contrast and modulation are affected by illumination intensity, exposure time and system gain. Once set, exposure time and system gain remain relatively constant in the machine vision camera. However, because LED output degrades over time, only a high-quality LED lighting controller can ensure consistently uniform lighting for the life of the project in such barcode verification applications.

Mark quality assessment at the point of printing means the barcode is likely to be on a moving object or surface. Therefore, the steps to mitigate blurring of the moving image mentioned earlier must be considered. Stop-action imaging, combining short exposure time and high-intensity strobing, will provide consistent barcode verifying without the motion blur that could result in poorer barcode grades. Calibration of the machine vision system for barcode grading can be done using a Calibrated Conformance Standard Test Card.

One application involves a GS1 DataMatrix barcode being printed and verified on webbing moving at 750 inches per minute. The machine vision exposure was fixed at 250 µs to minimize motion blur. Because the GS1-specified minimum resolution is 10 pixels per cell, the working distance was set so that the resolution was approximately 10 pixels per cell.

The LED array was strobed for 1 ms at 500%, and the gain of the machine vision sensor was set to 1, then increased incrementally until the Symbol Contrast (SC) measured by the sensor matched the SC recorded on the test target. Making gain adjustments from the bottom up proved more reliable.

This methodology produced an in-line verifier that grades GS1 barcodes in harmony with the benchtop verifier used by the QC department to verify QC samples. Setting the acceptance criteria on the in-line verifier one letter grade higher than that of the QC benchtop verifier provides a safety margin for print quality degradation caused by subsequent handling and packaging.

The principles just discussed for barcode grading also apply to optical character recognition (OCR). Selecting a quality LED lighting controller and using OCR fonts when printing provide an extremely robust OCR reading application.

Many smart camera-based machine vision systems come with auto-tuned OCR capabilities. These read consistently printed and illuminated images of OCR fonts with a high degree of success. Using a quality LED strobe controller enables consistent reading of 2 mm high, low-contrast, laser-marked OCR font characters with a high degree of reliability.

Ambient lighting
For a machine vision system to work predictably and accurately on the plant floor, there should be no effects due to ambient lighting. If an inspection is influenced by the operator standing between the overhead lighting and the inspection, there will be problems.

One method of mitigating the effect of ambient lighting is to provide some sort of shrouding over the inspection area to block out…
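The GS1 DataMatrix application makes the case for short exposures concrete. A quick check in Python of how far the webbing travels during the fixed 250 µs exposure (the unit conversions are mine):

```python
# Web travel during one camera exposure in the in-line verification
# application above: 750 in/min web speed, 250 microsecond exposure.
web_speed_in_per_s = 750 / 60      # 12.5 inches per second
exposure_s = 250e-6

travel_in = web_speed_in_per_s * exposure_s
travel_um = travel_in * 25.4 * 1000
print(f"web travel during exposure: {travel_um:.0f} um")
```

The web moves only about 79 µm while the shutter is open, which is what keeps motion blur small enough for grading at roughly 10 pixels per cell.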
Embedded camera brings 3D mapping to tunnel boring machines

Mounting an embedded camera within the cutting head of a tunnel boring machine allows contractors to document rock face structures as boring progresses.

Robert Wenighofer
Figure 2: Cutter head of the TBM. To obtain a complete image of the rockface, the camera is placed at either four (shown in green) or five positions (shown in red) within disc housings of the cutting head, while the camera controller is constantly mounted in a double disc cutter near the rotation axis of the TBM. This results in between either 700 or 1100 images being captured, depending on whether four or five positions are employed.

After cutting, it is necessary to visualize any new geological features that may be present. To do so, the boring machine is retracted and the material between the cutter head and face cleaned. A GigE camera (shown in red) and control unit housed within one of the disc cutters (top, left) then images a circular 85° field of view (FOV) of the face as the TBM is manually rotated away from the rockface.

To expand this coverage, the same camera system is then re-positioned in a different disc housing, and the TBM manually rotated again. This process is then repeated at four or five different disc housing positions approximately four or five times (Figure 2). While the TBM head rotation takes approximately 3 mins, it takes 10 to 15 mins to then reposition the camera system at each different disc housing location. To expedite this process, it would at first appear that several embedded cameras could be used. However, the massive 25-ton force that a single disc cutter exerts and the tons of excavation material produced would impede permanently mounting a camera system in the cutter head.

Camera and controller
To image the rockface as the TBM head is manually rotated, a Prosilica GT2000 GigE camera from Allied Vision (Stadtroda, Germany; www.alliedvision.com) with a 2/3in CMOS 2048 × 1088 sensor from ams Sensors Belgium (Antwerp, Belgium; www.cmosis.com) was fitted with a 5mm focal length C-Mount lens to provide the 85° FOV of the face. The camera system is housed in a ruggedized housing incorporating a white LED ring light developed by Geodata Group (Leoben, Austria; www.geodata.at). While the camera is operated in a continuous mode, capturing two frames per second, the white LED is strobed over a period of 4ms to ensure a visible light output of 10,000 Lumens so that no motion blur occurs as the camera system is rotated. This level of illumination is sufficient to illuminate break-outs in the rockface of more than 1.7 m depth, as can be seen in Figure 4.

As images are captured by the GT2000 camera, they are transferred over its GigE interface to a fanless quad-core embedded PC from Vecow (New Taipei City, Taiwan; www.vecow.com) that acts as a camera control unit. Camera control software running on the PC, based on Allied Vision's Vimba SDK, triggers the camera at 2 fps, which provides sufficient overlap between images to perform 3D reconstruction and a surplus of images. To ensure that the position of each image is known as the TBM head rotates, the camera control unit incorporates a uniaxial inclination sensor from Posital Fraba (Heerlen, The Netherlands; www.posital.com) mounted inside the camera controller and interfaced to the embedded PC over an RS-232 interface.

In this way, as each image is captured by the camera, it is assigned an angular value from the inclination sensor. This inclination angle and the known relative position of the disc housing are then used to determine the absolute position of the camera in the 3D coordinate system of the cutter head. Knowing this information, both two-dimensional and 3D photogrammetric data of the rockface can then be generated.

It is necessary to visualize the quality of the images of the tunnel face before the TBM is rotated to ensure that the image quality is acceptable and that the protective glass of the…

Companies mentioned
Agisoft, St. Petersburg, Russia; www.agisoft.com
Allied Vision, Stadtroda, Germany; www.alliedvision.com
ams Sensors Belgium, Antwerp, Belgium; www.cmosis.com
Autodesk, San Rafael, CA, USA; www.autodesk.com
Geodata Group, Leoben, Austria; www.geodata.at
NVIDIA, Santa Clara, CA, USA; www.nvidia.com
Posital Fraba, Heerlen, The Netherlands; www.posital.com
Salini Impregilo S.p.A, Milan, Italy; www.salini-impregilo.com
Strabag AG, Cologne, Germany; www.strabag.com
The Institute for Subsurface Engineering at the Montanuniversität Leoben, Leoben, Austria; www.unileoben.ac.at
Vecow, New Taipei City, Taiwan; www.vecow.com
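The mapping from inclination-sensor reading to camera position described above can be sketched compactly. The disc-housing radius below is hypothetical (the article gives the method, not the dimensions), and the function name is mine:

```python
import math

def camera_position(radius_m, inclination_deg):
    """Camera (x, y) in the cutter-head plane for a disc housing at a given
    radial offset, using the angle reported by the uniaxial inclination sensor."""
    theta = math.radians(inclination_deg)
    return radius_m * math.cos(theta), radius_m * math.sin(theta)

# Tag each 2 fps frame with the sensor angle recorded at capture time
# (angles and the 3.0 m radius are illustrative values).
angles = [0.0, 7.5, 15.0, 22.5]
positions = [camera_position(3.0, a) for a in angles]
for a, (x, y) in zip(angles, positions):
    print(f"{a:5.1f} deg -> x = {x:.3f} m, y = {y:.3f} m")
```

Combining this per-image position with the known offset of each disc housing yields the absolute camera pose needed for the photogrammetric reconstruction.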
[Figure labels: Foliation, Out-break, Drive direction, Harnisch.]

…ically initiate recording after the camera control unit has powered up. Then, by incorporating a wireless network card in the camera controller, stationary images can be analyzed remotely using any Android-powered handheld device such as a smartphone or a personal digital assistant (PDA).
…ically described as B = (Vp × Te × Np)/FOV, where Vp is the velocity of the moving part, Te is the exposure time in seconds, Np is the number of pixels spanning the view and FOV is the field of view. Thus, for a part moving at 30 cm/s across a 1280-pixel imager over a camera FOV of 1000 cm, an exposure time of 33 ms (0.033 s) would result in a pixel blur of 1.26 pixels. Even a one-pixel blur such as this, Perry suggests, may become an issue in sub-pixel precision applications, but one that can be resolved by decreasing the exposure time of the camera. Unfortunately, for those choosing a high-speed camera, many manufacturers do not specify the exposure times that can be achieved, merely specifying the frame rate of the camera. Indeed, some even state that the exposure time achievable is the reciprocal of the frame rate (which it may be in some cases).

Frame rate relates to how many frames are captured each second, and shutter speed specifies how long each individual frame is exposed (which can vary). For a graphical explanation of frame rate and exposure, see "Shutter Speed vs Frame Rate," http://bit.ly/VSD-SSFR. Thus, a camera such as the CR-S3500 from Optronis (Kehl, Germany; www.optronis.com), a 1280 x 860 CMOS-based camera capable of running at 3,500 fps at full pixel count, is specified as having an adjustable exposure time from 2 μs to 1/frame rate, i.e. 2-286 μs (Figure 1).

Figure 3: Emergent Vision Technologies HR-2000 is a CMOS-based camera capable of running 2048 x 1088-pixel images at 338 fps. In windowing mode, this can be reduced to 320 x 240 to achieve a 1471 fps rate. With a bandwidth of 10 Gbits/s, images can be transferred to a PC without requiring host camera memory.

Regions of interest
Unlike CCD sensors that, in partial scan mode, may suppress entire lines in the vertical direction, CMOS sensors can be operated in region of interest (ROI) mode, where windowed ROIs in both the horizontal and vertical directions across the sensor can reduce the image size and thus increase the readout speed. For this reason, many manufacturers of high-speed cameras specify both the maxi…
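The blur relationship drops straight into code. A minimal Python sketch of the formula above (function and argument names are mine), including the reciprocal-of-frame-rate exposure limit quoted for the CR-S3500:

```python
def pixel_blur(v_part, t_exp, n_pixels, fov):
    """Motion blur in pixels: B = (Vp * Te * Np) / FOV.
    v_part and fov must use the same length unit (here cm/s and cm)."""
    return v_part * t_exp * n_pixels / fov

# Worked example from the text: 30 cm/s part, 33 ms exposure,
# 1280-pixel imager spanning a 1000 cm field of view.
blur = pixel_blur(v_part=30.0, t_exp=0.033, n_pixels=1280, fov=1000.0)

# Maximum exposure when it equals the reciprocal of the frame rate,
# e.g. the 3,500 fps full-resolution figure quoted for the CR-S3500.
max_exposure_us = 1e6 / 3500

print(f"blur: {blur:.2f} pixels")                 # ~1.27 px (1.26 as rounded in the text)
print(f"max exposure: {max_exposure_us:.0f} us")  # ~286 us, matching the 2-286 us range
```

Halving the exposure time halves the blur in pixels, which is why reducing Te, rather than increasing frame rate, is the lever for sub-pixel precision work.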
Vision + Automation PRODUCTS

SDK supports GigE Vision and USB3 Vision cameras
Medley is a new SDK that is designed for GigE and USB3 Vision cameras to enable users to control all the functions of ISG cameras with low CPU overhead and memory requirements. Features include a GUI with viewer and full camera control, sample code for various API usage models, and barcode decoding.
Imaging Solutions Group
Fairport, NY, USA
www.isgcameras.com

3D cameras feature IP65/67 housing
X30 FA and X36 FA cameras are specifically designed for use in harsh ambient conditions and feature IP65/67 protection class, a GigE switch and two GigE uEye FA cameras that can be mounted at different distances. Each equipped with a 1.3 MPixel CMOS image sensor, the 3D cameras feature working distances from 0.5 to 5 meters, a 100-watt projector unit on which the two GigE cameras can be mounted at different distances, and a software development kit for configuration. The X30 is designed for moving objects, while the X36—which features the FlexView 2 projector—is designed for stationary objects.
IDS Imaging Development Systems
Obersulm, Germany
www.ids-imaging.com

Sony releases CMOS cameras
A new series of SXGA cameras enable users to move from CCD to CMOS image sensors. The first camera available in the series is the XCG-CG160, which is based on the 1/3" IMX273 global shutter sensor. The IMX273 is a 1.6 MPixel sensor that can achieve 75 fps via GigE interface. The C-Mount camera's features include defect-pixel correction, shading correction with peak and average detection, and area gain to auto adjust for the target object.
Sony Image Sensing Solutions
The Heights, Brooklands, Surrey, UK
www.image-sensing-solutions.eu
First CMOS image sensor with Nyxel NIR technology introduced
The OS05A20 CMOS image sensor, the first sensor to implement Nyxel technology from OmniVision Technologies, Inc., leverages novel silicon semiconductor architectures and processes that address the inherent challenges in NIR detection in image sensors. Nyxel combines thick-silicon pixel architectures with careful management of wafer surface texture to improve QE, along with extended deep trench isolation (DTI) to help retain modulation transfer function (MTF) without affecting the sensor's dark current. The sensor itself is a 5 MPixel color CMOS image sensor with NIR sensitivities exceeding 850nm and a capture rate of 60 fps with 2688 x 1944-pixel images.
OmniVision Technologies, Inc.
Santa Clara, CA, USA
www.ovt.com

Scalable line of CMOS image sensors introduced
Designed for deployment in automotive applications, the new Hayabusa platform of CMOS image sensors features a backside-illuminated 3 µm pixel design that delivers a charge capacity of 100,000 electrons. Other key features include simultaneous on-chip high dynamic range (HDR) with LED flicker mitigation (LFM), plus real-time functional safety and automotive-grade qualification. The first product in the line is the AR0233, a 2.6 MPixel CMOS image sensor capable of running at 60 fps and featuring a multi-exposure mode for >140 dB high dynamic range, full-resolution LED flicker mitigation with 120 dB high dynamic range, and >95 dB dynamic range from one exposure.
ON Semiconductor
Phoenix, AZ, USA
www.onsemi.com

Multi-camera solution for NVIDIA Jetson introduced
Designed for use on the NVIDIA Jetson TX1/TX2 development kit, the e-CAM30_HEXCUTX2 is a multiple-camera solution that targets applications requiring multiple full HD cameras. NVIDIA's TX1 and TX2 can support up to six 2-lane MIPI CSI-2 cameras simultaneously, and as a result, the e-CAM30_HEXCUTX2 consists of six e-CAM30_CUMI0330_
…nector on the Jetson TX1/TX2.
e-con Systems
Tamil Nadu, India
www.e-consystems.com

…from the STARVIS line, some cameras feature the 6.4 MPixel IMX178 sensor with 2.4 µm pixel size and frame rates of 59 fps via USB 3.0 and 16 fps via GigE. Other cameras feature the 12.2 MPixel IMX226 sensor with a 1.85 µm pixel size and frame rates of 31 fps via USB 3.0 and 8 fps via GigE. The sensors feature low dark noise of three electrons, combined with a quantum efficiency greater than 80%. …highest resolution but not necessarily the fast…
Basler
Ahrensburg, Germany

Line scan polarization camera introduced
A 2017 Innovators Awards Platinum-level honoree, the Piranha4 line scan polarization camera is now in full production. Polar…