JANUARY 2018 | www.vision-systems.com

VISION AND AUTOMATION SOLUTIONS FOR ENGINEERS AND INTEGRATORS WORLDWIDE

LED strobe controllers: When should you invest
Embedded camera: Brings 3D mapping to tunnel boring machines
High-speed cameras: Target machine vision applications
Deep learning: Enhances machine vision


January 2018 | VOL. 23 | NO. 1
Vision and Automation Solutions for Engineers and Integrators Worldwide
A PENNWELL PUBLICATION

Cover Story
A universal, pre-trained classifier based on deep learning algorithms enables identification of a wide range of OCR fonts (see page 15).

Features

15 INTEGRATION INSIGHTS
How deep learning is enhancing machine vision
Developers increasingly apply deep learning and artificial neural networks to improve object detection and classification.
Johannes Hiltner

18 INTEGRATION INSIGHTS
When to invest in LED strobe controllers
LED strobe controllers offer many benefits for applications involving low contrast, motion, mark quality assessment, OCR and ambient lighting issues.
Alan L. Lockard

21 INDUSTRY SOLUTIONS PROFILE
Embedded camera brings 3D mapping to tunnel boring machines
Mounting an embedded camera within the cutting head of a tunnel boring machine allows contractors to document rock face structures as boring progresses.
Robert Wenighofer

25 PRODUCT FOCUS
High-speed cameras target machine vision applications
Faced with imaging high-speed events, developers can choose from cameras that transfer data over industry-standard camera-to-computer interfaces and stand-alone cameras.
Andrew Wilson

Departments

3 My View
4 Online@vision-systems.com: Read the latest news from our website
5 Snapshots
9 Technology Trends
   ELECTRONICS INSPECTION: Infrared imaging microscopes aid in detecting integrated-circuit defects
   DEEP-SEA IMAGING: Imaging system monitors for pollution and coral health in the Bay of Biscay
   3D IMAGING: 3D imaging enables high-speed precision bore hole inspection
29 Vision+Automation Products
32 Ad Index/Sales Offices

Online at www.vision-systems.com: Complete Archives • White Papers • Industry News • Feedback Forum • Buyers Guide • Free e-newsletter • Webcasts • Video Library



My View

Alan Bergstein, Group Publisher, (603) 891-9447, alanb@pennwell.com
John Lewis, Editor in Chief, (603) 891-9130, johnml@pennwell.com
James Carroll Jr., Senior Web Editor, (603) 891-9320, jamesc@pennwell.com
Andrew Wilson, European Editor, +44 7462 476477, andycharleswilson@gmail.com
Robert Tait, Contributing Editor, 518-269-9410, tait@opticalmetrologysolutions.com
Kelli Mylchreest, Art Director
Mari Rodriguez, Production Director
Dan Rodd, Senior Illustrator
Debbie Bouley, Audience Development Manager
Alison Boyer Murray, Ad Services Manager
Joni Montemagno, Marketing Manager

www.pennwell.com

EDITORIAL OFFICES: Vision Systems Design, 61 Spit Brook Road, Suite 401, Nashua, NH 03060. Tel: (603) 891-0123. Fax: (603) 891-9328. www.vision-systems.com

CORPORATE OFFICERS: Robert F. Biolchini, 1939-2017, Chairman; Frank T. Lauinger, Vice Chairman; Mark C. Wilmoth, President and Chief Executive Officer; Jayne A. Gilsinger, Executive Vice President, Corporate Development and Strategy; Brian Conway, Senior Vice President, Finance and Chief Financial Officer

TECHNOLOGY GROUP: Christine A. Shaw, Senior Vice President and Group Publishing Director

FOR SUBSCRIPTION INQUIRIES: Tel: (800) 869-6882. Fax: (866) 658-6156. Email: VSD@kmpsgroup.com. Web: www.vsd-subscribe.com

Vision Systems Design® (ISSN 1089-3709), Volume 23, No. 1. Vision Systems Design is published 11 times a year, in January, February, March, April, May, June, July/August, September, October, November and December, by PennWell® Corporation, 1421 S. Sheridan, Tulsa, OK 74112. Periodicals postage paid at Tulsa, OK 74112 and at additional mailing offices. SUBSCRIPTION PRICES: USA $130 1 yr., $190 2 yr., $244 3 yr.; Canada $148 1 yr., $217 2 yr., $280 3 yr.; International $160 1 yr., $235 2 yr., $305 3 yr. POSTMASTER: Send address corrections to Vision Systems Design, P.O. Box 47570, Plymouth, MN 55447. Vision Systems Design is a registered trademark. © PennWell Corporation 2017. All rights reserved. Reproduction in whole or in part without permission is prohibited. Permission, however, is granted for employees of corporations licensed under the Annual Authorization Service offered by the Copyright Clearance Center Inc. (CCC), 222 Rosewood Drive, Danvers, Mass. 01923, or by calling CCC's Customer Relations Department at 978-750-8400 prior to copying. We make portions of our subscriber list available to carefully screened companies that offer products and services that may be important for your work. If you do not want to receive those offers and/or information via direct mail, please let us know by contacting us at List Services, Vision Systems Design, 61 Spit Brook Road, Suite 401, Nashua, NH 03060. Printed in the USA. GST No. 126813153. Publications Mail Agreement no. 1421727.

What is deep learning?

While sometimes seemingly used interchangeably, the subtle differences in meanings between artificial intelligence (AI), machine learning and deep learning may cause some confusion. In contrast to natural intelligence displayed by humans and other animals, AI refers to machines mimicking human cognitive functions such as problem solving or learning. So when a machine understands human speech or can compete with humans in a game of chess, AI applies.

Machine learning is the current state-of-the-art application of AI and is largely responsible for its recent rapid growth. Based on the idea of giving machines access to data so that they can learn for themselves, machine learning has been enabled by the internet and the associated rise in digital information being generated, stored and made available for analysis. Building on AI concepts, machine learning focuses on solving real-world problems through architectures such as artificial neural networks designed to imitate human decision making.

Deep learning concentrates on a subset of machine-learning techniques, with the term "deep" generally referring to the number of hidden layers in the deep neural network. While a conventional neural network may contain a few hidden layers, a deep network may have tens or hundreds of layers. In deep learning, a computer model learns to perform classification tasks directly from text, sound or image data. In the case of images, deep learning requires substantial computing power and involves feeding large amounts of labeled data through a multi-layer neural network architecture to create a model that can classify the objects contained within the image.

Major technology companies such as Google, Facebook, IBM, Intel and Microsoft have invested in deep learning for some time, but more recently, machine-vision software companies have begun to apply deep learning within their products, or base their entire product on it. Machine vision has predominantly relied on rules-based machine-vision algorithms that excel in applications where you know exactly what you're looking for. Classic edge detection, blob, object- and feature-location algorithms generally excel in tasks requiring sub-pixel accuracy for precision measurements or robot guidance.

The value of deep learning in machine-vision applications stems from its ability to make human-like judgments of part quality and other example-based decisions. Verifying the presence of bolts, brackets, foam pads and straps on car seat assemblies, for example, can challenge traditional machine vision systems if subcomponents come from a variety of suppliers with variations in color and texture. In such applications, deep learning helps machine-vision systems cope with the range of acceptable part appearances. Likewise, our cover story this month on page 15 discusses how a single, universal, pre-trained classifier based on deep-learning algorithms enables identification of a wide range of typefaces in optical character recognition applications. Also, on page 7 we cover development of a neural network designed to identify six different defect classes on reflective metallic surface images. As always, I hope you enjoy this issue.

John Lewis, EDITOR IN CHIEF
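The distinction the column draws, a few hidden layers versus tens or hundreds, is easy to see in code. The following minimal sketch (Python with the Keras API, an assumed framework since the column names none, and with purely illustrative layer counts and sizes) builds a conventional shallow classifier and a deeper convolutional one that differ only in how many layers are stacked:

# Minimal sketch: a "conventional" network with a couple of hidden
# layers versus a "deep" network with tens of layers. Assumes
# TensorFlow/Keras is installed; all sizes are illustrative.
import tensorflow as tf
from tensorflow.keras import layers, models

def shallow_net(num_classes):
    # A conventional neural network: two hidden layers.
    return models.Sequential([
        tf.keras.Input(shape=(64, 64, 1)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),  # hidden layer 1
        layers.Dense(64, activation="relu"),   # hidden layer 2
        layers.Dense(num_classes, activation="softmax"),
    ])

def deep_net(num_classes, depth=50):
    # A deep network: tens of stacked convolutional hidden layers.
    net = models.Sequential()
    net.add(tf.keras.Input(shape=(64, 64, 1)))
    for _ in range(depth):
        net.add(layers.Conv2D(32, 3, padding="same", activation="relu"))
    net.add(layers.GlobalAveragePooling2D())
    net.add(layers.Dense(num_classes, activation="softmax"))
    return net

Training either model means feeding it large amounts of labeled data, exactly the process the column describes; the deep variant simply has far more layers in which to build up features.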


Online @ www.vision-systems.com

3D Time of Flight company Odos Imaging acquired by Rockwell Automation
Odos Imaging, a developer of 3D Time of Flight cameras for industrial imaging applications, has been acquired by Rockwell Automation, Inc. http://bit.ly/VSD-1801-01

Euresys acquires OEM imaging IP and hardware company Sensor to Image
Euresys has announced the acquisition of Sensor to Image GmbH, a company specializing in OEM imaging IP and hardware for industrial vision products, including cameras and frame grabbers. http://bit.ly/VSD-1801-02

10 ways deep learning technology is being deployed today
As the demands for speed and accuracy in machine vision and image processing applications grow, so too must the software that is used in them. http://bit.ly/VSD-1801-03

Lucid Vision Labs machine vision camera company founded by industry veteran
Lucid Vision Labs, Inc., a company that designs and manufactures machine vision cameras and components, has been founded by Rod Barman, former founder, president and VP of engineering at Point Grey Research Inc. http://bit.ly/VSD-1801-04

Machine vision standard directs on evaluation of image processing systems
A new standard puts forward procedures for assessing classificatory performance during the acceptance of machine vision systems, while also providing examples from industrial inspection technology that serve as guidance. http://bit.ly/VSD-1801-05

High-speed imaging sheds light on secrets of flight
Using high-speed imaging and computational fluid dynamics, the unique mechanisms involved in mosquito flight have been revealed. http://bit.ly/VSD-1801-06



Snapshots
Short takes on the leading edge

Fully self-driving cars are here, says Waymo

Waymo's (Mountain View, CA, USA; www.waymo.com) fully self-driving vehicles are now test driving in fully autonomous mode on public roads without a driver, after the company worked on the technology for more than eight years.

A subset of vehicles from Waymo (formerly the Google self-driving car project) will operate in fully autonomous mode in the Phoenix metro region. Eventually, the fleet will cover a region larger than the size of Greater London, with more vehicles being added over time.

Waymo explains in a press release that it has prepared for this next phase by putting its vehicles through the world's longest and toughest ongoing driving test.

"Since we began as a Google project in 2009, we've driven more than 3.5 million autonomous miles on public roads across 20 U.S. cities. At our private test track, we've run more than 20,000 individual scenario tests, practicing rare and unusual cases," wrote Waymo via Medium. "We've multiplied all this real world experience in simulation, where our software drives more than 10 million miles every day. In short: we're building our vehicles to be the most experienced driver on the road."

Waymo's vehicles, according to the company, are equipped with the necessary features, including sensors and (continued on page 6)

Standalone vision system targets small parts inspection

Machine vision solution provider and systems integrator Artemis Vision (Denver, CO, USA; www.artemisvision.com) has developed the visionStation standalone vision system, which is designed to perform the inspection of small parts.

When it comes to small parts inspection, there are a number of challenges that must be overcome, starting with space constraints. Some small parts manufacturers have limited space for inspection equipment, so it is important to understand the manufacturer's needs and space requirements. Accuracy and repeatability must also be considered. Inspecting small parts by eyesight alone can be a daunting task, and maintaining a consistent method or process of inspection can change from person to person. Small parts inspection applications must be consistent for them to be successfully accurate and repeatable. Lastly, for small parts inspection to be successful, the manufacturer needs to know what defects it is looking for, as some defects can be so small that they go unnoticed. Additionally, manufacturers need to catalog known defects, which becomes easy when machine vision is used, as opposed to human inspection.

Artemis Vision's visionStation is precision machined from aluminum and fitted with a digital machine vision camera and lighting that is tailored to suit the type of inspection performed. The system is an operator-loaded unit that can be used as a standalone inspection station, or integrated into an existing operation.

"The visionStation is ideal for customers who need to run approximately 20,000 parts per day or less," stated Tom Brennan, President at Artemis Vision. "At 3 seconds per cycle, an operator can load about 10,000 parts per shift. Customers who need flexible automation and repeatability between operators and production batches (continued on page 7)


Snapshots

continued from page 5

software, to provide full autonomy, including backup steering, braking, computer, and power that can bring the vehicle to a safe stop, if needed. The vehicles' sensor suite includes the following:

LiDAR (light detection and ranging): This sensor beams out millions of laser pulses per second in 360° to measure how long they take to reflect off a surface and return to the vehicle. Waymo's system includes three types of LiDAR developed in-house: a short-range LiDAR that provides an uninterrupted view directly around the vehicle, a high-resolution mid-range LiDAR, and a next-generation long-range LiDAR that can see almost three football fields away.

Vision systems: The vision system includes 360° field of view cameras that detect color in order to spot things like traffic lights, construction zones, school buses, and the flashing lights of emergency vehicles. The system is comprised of several sets of high-resolution cameras that operate at long range in daylight and low-light conditions.

In addition to LiDAR and a vision system, the vehicles feature a radar system with a continuous 360° view and supplemental sensors including audio detection and GPS.

"With Waymo in the driver's seat," according to a statement, "we can reimagine many different types of transportation, from ride-hailing and logistics, to public transport and personal vehicles, too."

"We've been exploring each of these areas, with a focus on shared mobility. By giving people access to a fleet of vehicles, rather than starting with a personal ownership model, more people will be able to experience this technology, sooner. A fully self-driving fleet can offer new and improved forms of sharing: it'll be safer, more accessible, more flexible, and you can use your time and space in the vehicle doing what you want."

The first application of Waymo's fully self-driving technology will be a Waymo driverless service. Over the next few months, Waymo will be inviting members of the public to take trips in its fully self-driving vehicles.

View a video of the driverless car in action here: http://bit.ly/VSD-WAYMO

New specification from MIPI Alliance streamlines integration of image sensors in mobile devices

The MIPI Alliance (www.mipi.org), a global organization that develops interface specifications for mobile and mobile-influenced industries, has released a new specification that provides a standardized way to integrate image sensors in mobile-connected devices.

Named MIPI Camera Command Set v1.0 (MIPI CCS v1.0), the specification defines a standard set of functionalities for implementing and controlling image sensors. The specification is offered for use with MIPI Camera Serial Interface 2 v2.0 (MIPI CSI-2 v2.0) and is now available for download. Additionally, in an effort to help standardize use of MIPI CSI-2, MIPI Alliance membership is not required to access the specification.

MIPI CSI-2, according to the alliance, is the industry's most widely-used hardware interface for deploying camera and imaging components in mobile devices, including drones. The introduction of MIPI CCS to MIPI CSI-2 provides further interoperability and reduces integration time and costs for complex imaging and vision systems, according to the alliance.

With imaging applications becoming more sophisticated and manufacturers deploying multiple image sensors in their products, implementation becomes more complex and time consuming. MIPI CCS aims to address these issues by making it possible to craft a common software driver to configure the basic functionalities of any off-the-shelf image sensor that is compliant with MIPI CCS and MIPI CSI-2 v2.0, according to the MIPI Alliance. The specification provides a complete command set that can be used to integrate basic image sensor features, including resolution, frame rate and exposure time, as well as advanced features like phase detection auto focus, single frame HDR or fast bracketing.

"MIPI Alliance is building on its success in the mobile camera and imaging ecosystem with MIPI CCS, a new specification that will enhance the market-enabling conveniences MIPI CSI-2 already provides," said Joel Huloux, chairman of MIPI Alliance. "The availability of MIPI CCS will help image sensor vendors promote greater adoption of their technologies and it will help developers accelerate time-to-market with innovative designs targeting the mobile industry, connected cars, the Internet of Things, AR/VR and other areas."

"The overall advantage of MIPI CCS is that it will enable rapid integration of basic camera functionalities in plug-and-play fashion without requiring any device-specific drivers, which has been a significant pain point for developers," said Mikko Muukki, technical lead for MIPI CCS. "MIPI CCS will also give developers flexibility to customize their implementations for more advanced camera and imaging systems."

The new specification was developed for use with MIPI CSI-2 v2.0 and is backward compatible with earlier versions of the MIPI CSI-2 interface. It is implemented on either of two physical layers from MIPI Alliance: MIPI C-PHY or MIPI D-PHY.

View more information on the specification: http://bit.ly/VSD-MIPI
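LiDAR ranging of the kind described above is a time-of-flight measurement: range is half the round-trip travel time of the laser pulse multiplied by the speed of light. A minimal sketch in Python (a generic illustration of the principle, not Waymo's implementation):

# Time-of-flight ranging: the pulse travels out and back, so the
# range is half the round trip at the speed of light.
C = 299_792_458.0  # speed of light, m/s

def tof_range_m(round_trip_s):
    # Range in meters from a measured round-trip time in seconds.
    return 0.5 * C * round_trip_s

# Example: a return after ~1.83 microseconds corresponds to ~274 m,
# roughly the "three football fields" quoted for long-range LiDAR.
print(f"{tof_range_m(1.83e-6):.1f} m")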


Snapshots

continued from page 5

…without necessarily wanting or needing full inline automation where parts are conveyed on a belt."

Within the visionStation is a Prosilica GT4905 machine vision camera from Allied Vision (Stadtroda, Germany; www.alliedvision.com). This camera features the KAI-16050 CCD image sensor from ON Semiconductor (Phoenix, AZ, USA; www.onsemi.com), a 16 MPixel sensor that achieves a frame rate of 7.5 fps through a GigE interface. The camera was selected, according to Brennan, as a result of its "high resolution and excellent dynamic range."

"The dynamic range plays an important part when imaging the shiny areas of a part, created by the lighting source. Furthermore, the Prosilica GT4905 can handle the rigorous conditions the visionStation may be used in," he said.

Software for the system includes Omron Microscan's (Renton, WA, USA; www.microscan.com) Visionscape and Artemis Vision's proprietary visionWrangler software. "Our software is built from our extensive library of components: reading a data matrix, doing measurements, absence-presence and counts," explained Brennan.

The visionStation is offered in two sizes, the small and larger systems measuring 1 and 1.5 cubic ft, respectively. The small system can accommodate parts up to 2 in. (5.08 cm), while the large system can hold parts up to 6 in. (15.24 cm). The minimum defect size for each system is 0.003 in. (76.2 µm) for the small station and 0.01 in. (254 µm) for the larger station.

Artemis Vision builds visionStation systems based on the parts a manufacturer supplies and intends to have inspected. In the next-generation visionStation, Artemis Vision reportedly plans to upgrade its sensor. "We will likely upgrade to a newer CMOS camera and use C-mount optics," Brennan commented, "which will provide more lensing options and reduce the overall size of the unit."

Deep learning and convolutional neural networks

Deep learning with neural networks provides significant image processing benefits regarding object classification, image analysis and image quality. Because small neural networks suffice for many typical machine vision applications, processors such as FPGAs (field programmable gate arrays) can be used effectively for convolutional neural networks (CNNs), resulting in application expansion beyond current classification tasks and efficient use within embedded vision systems.

Deformed test objects, irregular shapes, object variations, irregular lighting and lens distortions push the limits of classic image processing when framework conditions for image acquisition cannot be controlled. Even individual algorithms for feature description are often barely possible.

CNNs, on the other hand, define characteristics through their training method, without using mathematical models, which makes it possible to capture and analyze images in difficult situations such as reflecting surfaces, moving objects, face detection and robotics, especially when straightforward classification of image data directly from preprocessing to the classification result is required.

Nevertheless, CNNs cannot cover all areas of classic image processing, such as precise location determination of objects. Here, new and advanced CNNs must be developed.

Practical experience with CNNs from prior years led to mathematical assumptions and simplifications (pooling, ReLU and overfitting avoidance, to name a few) that reduced computational expense, which in turn enabled implementation of deeper networks. By reducing image depth at the same rate of detection and by optimizing the algorithm, CNNs can be significantly accelerated and are now well suited for image processing. Because CNNs are shift and partially scale invariant, they allow use of the same network structures for different image resolutions, and smaller neural networks are often sufficient for many image processing tasks.

Due to the high degree of parallel processing, neural networks are particularly well suited to FPGAs, upon which CNNs can analyze and classify high-resolution image data in real time. In machine vision, FPGAs function as massive accelerators of image processing tasks and guarantee real-time processes with deterministic latencies.

Until now, the high programming effort and the relatively low resources available in an FPGA have hindered efficient use. Algorithmic simplifications now make it possible to construct efficient networks with high throughput rates in an FPGA.

To implement CNNs on FPGA hardware platforms, the VisualApplets graphical environment from Silicon Software (Mannheim, Germany; www.silicon.software) can be used. The CNN operators in VisualApplets allow users to create and synthesize diverse FPGA application designs without hardware programming experience in a short time. (continued on page 8)

[Figure: Six defect classes on metallic surfaces: rolled-in scale, patches, crazing, pitted surface, inclusion, scratches.]
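The simplifications the article names, convolution, pooling and ReLU activations, appear in even the smallest CNN. Below is a minimal sketch of such a network in Python with the Keras API (a generic software illustration, not Silicon Software's FPGA implementation; the 128 x 128 input and six output classes are assumptions matching the defect example later in the article):

# A small CNN of the kind described above: convolutions extract
# shift-invariant features, pooling steps reduce resolution (and
# compute), and ReLU keeps the arithmetic cheap.
import tensorflow as tf
from tensorflow.keras import layers, models

def small_cnn(num_classes=6):
    return models.Sequential([
        tf.keras.Input(shape=(128, 128, 1)),
        layers.Conv2D(16, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),            # pooling: 128 -> 64
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),            # pooling: 64 -> 32
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.GlobalAveragePooling2D(),  # tolerant of input resolution
        layers.Dense(num_classes, activation="softmax"),
    ])

Because every convolution slides the same small filter across the whole image, the same structure works at different image resolutions, which is the shift and partial scale invariance the article mentions.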


Snapshots

continued from page 7

By transferring the weight and gradient parameters determined in the training process to the CNN operators, the FPGA design is configured for the application-specific task. Operators can be combined into a VisualApplets flow diagram design with digital camera sources as image input, and additional operators to optimize image preprocessing.

For larger neural networks and more complex CNN applications, a new programmable frame grabber in the microEnable marathon series is being released that has 2.5 times more FPGA resources than the current marathon series. The new frame grabber is expected to be suitable for neural networks with more than 1 GB/sec CNN bandwidth.

The CNNs are capable of running not only on the frame grabbers' FPGAs but also on VisualApplets-compatible cameras and vision sensors. Since FPGAs are up to ten times more energy-efficient than GPUs, CNN-based applications can be implemented particularly well on embedded machine vision systems or mobile robots with the required low heat output.

Application diversity and complexity with neural networks based on newly-developed processors is expected to increase. A cooperation and exchange of research results with Professor Michael Heizmann from the Institute for Industrial Information Technology (IIIT) at the Karlsruhe Institute of Technology (KIT; Karlsruhe, Germany; https://www.kit.edu/english), part of the FPGA-based "Machine Learning for Industrial Applications" project, will generate future hardware and software advances.

To determine the percentage of correctly-detected defects in difficult environmental conditions, a neural network was trained with 1,800 reflective metallic surface images on which six different defect classes were defined. Large differences in scratches coupled with small differences in crazing, paired with different surface grey tones from lighting and material alterations, made analysis of the surface almost impossible for conventional image processing systems.

The test results demonstrated that the different defects were positively classified by the neural network an average of 97.4% of the time, a higher value than classic methods achieve. The data throughput in this application configuration was 400 MB/sec. By comparison, a CPU-based software solution achieved an average of 20 MB/sec.

The implementation of deep learning on FPGA processor technology, which is well suited for image processing, is an important step. Requirements that the method be deterministic and algorithmically verifiable, however, will make entry into all areas of image processing more difficult. Likewise, options to document which areas were identified as errors, as well as segmentation and storage, have not yet been implemented.

Thus far, training and operational use of CNNs are two separate processes. In the future, new generations of FPGAs with more resources, or the use of powerful ARM/CPU and GPU cores, will enable on-the-fly training on newly acquired image material, which will further increase the detection rate and simplify the learning process.

This article was written by Martin Cassel, Silicon Software (Mannheim, Germany; www.silicon.software).
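The throughput figures quoted above translate directly into sustainable frame rates: divide the data bandwidth by the bytes per image. A quick back-of-the-envelope check in plain Python (the 1 MPixel, 8-bit image size is an assumed example, not from the article):

# Sustainable frame rate = bandwidth / bytes per image.
def max_fps(bandwidth_bytes_per_s, width, height, bytes_per_pixel=1):
    return bandwidth_bytes_per_s / (width * height * bytes_per_pixel)

print(max_fps(400e6, 1024, 1024))  # FPGA path at 400 MB/s: ~381 fps
print(max_fps(20e6, 1024, 1024))   # CPU path at 20 MB/s: ~19 fps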


Technology Trends
John Lewis, Editor, johnml@pennwell.com

DEEP-SEA IMAGING

Imaging systems monitor for pollution and coral health

Scientists from Ifremer (the French Research Institute for the Exploitation of the Sea; Issy-les-Moulineaux, France; wwz.ifremer.fr/en) have used FCB block cameras from Sony Corp. (Tokyo, Japan; http://pro.sony.com) to monitor for pollution and coral health in the vulnerable marine ecosystems of the Bay of Biscay.

After surveying 24 of the Bay's canyons and three locations between adjacent canyons, the images enabled identification of 11 coral habitats, containing 62 coral species, at depths between 50 and 1,000 meters. Surveying 15 canyons and three sites on the edge of the continental shelf, the team analyzed 6,255 images from the camera and recorded 198 items of unique litter.

According to the team's published findings, litter appeared to accumulate at water depths of 801-1,100 m and 1,401-1,700 m and was found in all canyons and continental shelf sites surveyed. More importantly, the researchers pointed out that 15 to (continued on page 12)

ELECTRONICS INSPECTION

Infrared imaging microscopes aid in detecting integrated circuit defects

Radiant Optronics (Singapore; www.radiantoptronics.com) has developed an infrared (IR) microscope that is used to inspect integrated circuits (ICs) for internal defects or cracks that may occur during the manufacturing process.

[Figure: Radiant Optronics has developed an IR microscope fitted with a SWIR camera that inspects ICs for internal defects or cracks.]

A wafer is a thin substrate of semiconductor material that is used in electronics for IC fabrication. There are many types of semiconductor materials, and one of the most commonly used in electronics is silicon (Si).

The silicon wafer, a key component in ICs, is formed from highly pure, nearly defect-free, monocrystalline silicon sliced from a silicon boule. Serving as the substrate for microelectronic devices (continued on page 13)

3D IMAGING

3D imaging enables high-speed bore inspection

New inline-inspection applications are being developed for industry thanks to advances in high-speed, low-coherence interferometry, a high-resolution three-dimensional (3D) imaging technology. Precision bore holes, for example, must adhere to strict geometric dimensioning and tolerancing (GD&T) requirements, including straightness, cylindricity, circularity, taper, distortion, runout, roughness characteristics, semi-transparent coating thickness, or defect characterization. Also, features on the insides of bores, such as threads, O-ring grooves, undercuts, chambers, and cross-holes, must often be characterized with high precision to ensure that components ultimately interlock, seal, fit, or otherwise function as intended.

Understanding this, Novacam Technologies (Pointe-Claire, QC, Canada; www.novacam.com) has developed a system, dubbed BoreInspect, that provides fast, micron-precision, non-contact measurements of bore internal diameter (ID) surfaces.

[Figure: A 3D imaging system based on high-speed low-coherence interferometry is being used for the automated inspection of bore holes.]

(continued on page 10)


Technology Trends

continued from page 9

[Figure: The system automatically generates and presents the operator with a display of the four bores, along with an evaluation of crucial measurement criteria.]

Vuk Bartulovic, President of Novacam, says, "By the time they come to us, many of our clients have spent a long time looking for a way to measure, in a non-contact manner, internal diameters of bores, cylinders, and other such hard-to-reach spaces. These bore diameters range from a few millimeters to as large as truck engine cylinder bores. Typically, they want to acquire full internal diameter dimensions, detect defects, and measure roughness. It is a revelation to them that an instrument exists that can provide all this functionality … and that micron-precision data on their bore ID can be acquired in a matter of seconds and in a non-contact manner."

The system has been deployed for automated 3D metrology of valve-body assembly bores. Following a user-created inspection sequence, all four bores in the valve body are automatically scanned, and the resultant 3D surface point cloud is analyzed and displayed using PolyWorks from InnovMetric (Québec, QC, Canada; www.innovmetric.com), a third-party 3D data visualization and metrology software. According to user criteria, selected aspects of each bore are compared to actual measurement data and non-compliance is visually highlighted.

Comprised of three main hardware components, BoreInspect includes a rotational scanner that spins a non-contact optical probe while a robot or motion stage advances it into the bore hole. The side-facing probe directs a low-coherence light beam at the surface and collects reflected light signals. The second main component, a profilometer, provides the light source to the rotational scanner via a fiber-optic cable and processes the optical signal received from the scanner. Finally, a PC provides the user interface for data acquisition control and data-analysis software. It also typically houses client-selected 3D-visualization and metrology software.

High-density point clouds generated from its scans enable rapid and detailed characterization of inner surfaces and features as well as defect detection. During operation, the scanner spins the probe at 1,800 rpm, acquiring 30,000 3D surface measurements per second. In just over 3 seconds, 100,000 surface measurements on the inside of the bore are collected, providing a high-fidelity 3D model of the bore ID.

For the automated valve body bore metrology application, the rotational scanner enters each of the four valve body bores in an automated sequence, and the system automatically generates and presents the operator with a display of the four bores, along with an evaluation of crucial measurement criteria. A deviation map, highlighting the deviation of the measured bore dimensions from the CAD model, helps reveal micron-level surface defects.

[Figure: To help manufacturers gain insight into the process, BoreInspect can be configured to provide deviation maps highlighting the deviation of the measured bore dimensions from the CAD model, to reveal micron-level surface defects such as this 19.7-micrometer-deep depression.]

"Regardless of the bore diameter, the accuracy of the non-contact 3D measurements is the same," explains Bartulovic. "Defining scanning sequences is easy and each sequence may be saved for later recall and execution. While the user may manipulate and analyze the 3D point cloud data in an interactive manner, the BoreInspect system also generates automated reports."

The system is suitable for integration into automated continuous flow manufacturing, according to Bartulovic. "Its compact, modular, and rugged design facilitates integration, even in high-throughput industrial metrology applications that would have the scanner mounted on a stage suitable for the application, such as a robot arm, gantry, or motor-controlled stage, and the optical fiber connection to the system's interferometer can be hundreds of meters long."

Currently, Novacam's rotational scanners feature probes with diameters between 1 and 18 mm. Standard probes inspect bores with diameters ranging between 6 and 28 mm; however, Novacam builds custom systems for bores with diameters less than 2 mm and from 30-222 mm.

[Figure: Along with a 3D point cloud, 3D height data (top) and a light intensity image (bottom) can be obtained and used for defect detection. Annotations show defects and cross-holes along the bore hole circumference (15.5 mm).]
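The quoted acquisition figures are internally consistent and easy to sanity-check: at 1,800 rpm the probe makes 30 revolutions per second, so 30,000 measurements per second works out to 1,000 points per revolution, and 100,000 points take just over 3 seconds. In plain Python:

# Rotational-scan arithmetic from the figures quoted in the article.
rpm = 1_800        # probe rotation speed
rate = 30_000      # 3D surface measurements per second

revs_per_s = rpm / 60                # 30 revolutions per second
points_per_rev = rate / revs_per_s   # 1,000 points per revolution
seconds_for_100k = 100_000 / rate    # ~3.3 s for 100,000 points

print(points_per_rev, seconds_for_100k)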




Technology Trends

continued from page 9

…20% of the marine litter found was related to fishing activities.

"Deep-sea environments pose several imaging challenges: capturing images in low light, silt reflecting light back and causing glare, and the reduced field of view from being under water," explains Marco Boldrini, Product Manager at Sony Europe Image Sensing Solutions (Weybridge, Surrey, United Kingdom; www.image-sensing-solutions.eu). "Plus, the challenge of transmitting data to the surface from depths of greater than 2,000 meters."

[Figure: Bandwidth (Gbit/s) versus maximum theoretical cable length (m) for camera-to-computer interface standards, including CoaXPress (CXP-6, CXP-12), Camera Link HS, Camera Link (Deca), Thunderbolt 1/2/3, USB 2.0/3.0/3.1, IEEE 1394b, NBASE-T, and Gigabit, 10, 40 and 100 Gigabit Ethernet. Fiber-optic cable was required for data transmission due to distance limitations of traditional machine vision camera interface standards.]

Water, with a refractive index of 1.333, shrinks the field of view by 25%, so any camera chosen needs to take this into consideration. The research team acquired images using a towed stills camera moving at 0.9 m/s and with a remotely operated vehicle (ROV) running a 2008 Sony FCB-H11 color block camera. Equipped with a 10X zoom (f1.8-2.1), the camera is capable of capturing images at HD resolutions with a 50-degree horizontal viewing angle.

"Light deteriorates quickly under water," explains Boldrini, "and while cameras such as the FCB-H11 can capture down to 1 lx (and several cameras, such as the FCB-EV7520A, down to 0.35 lx), there is little significant light found below 200 meters, and no sunlight beyond the twilight zone (200 m - 1,000 m)."

Consequently, lighting needs to be carefully controlled and designed to illuminate the sea floor while avoiding blinding reflections from silt. Indeed, the research team highlighted that several images were discarded due to silt creating "poor quality images due to sediment clouds obscuring the image," notes Boldrini. "This makes it very difficult to automate lighting control, and even in a more controlled system such as the ROV, an operator would ideally have the ability to calibrate lighting and exposure times to maximize image quality."

Another challenge is the corrosive nature of salt water, and the pressures the systems will be exposed to (1 additional atmosphere of pressure for every 10 m in depth) mean the housing used needs to be exceptionally well designed. In this application, the camera systems must reside inside housings capable of surviving more than 250 atmospheres of pressure to operate at such depths.

"Here, systems such as MacCartney Benelux's Luxus range can be used," Boldrini notes. "These couple Sony cameras with adaptable lighting control and a housing capable of reaching depths below 9,000 meters."

Corals were found at depths of between 50 m and 1,000 m, with the team searching for marine litter at depths of almost 2,400 m. While an ROV could be automated to travel at a fixed depth and avoid canyons and objects via sonar, the challenge of how to transmit acquired images back to the surface for analysis in a timely manner, and without compromising image quality, remains.

Traditional machine vision standards simply aren't capable of transmitting data over these distances, Boldrini explains. "Therefore, fiber-optic cable was needed to control both the lighting and the camera systems, but this adds significantly to the cost of the build."

[Figure: Sony FCB cameras help researchers monitor for pollution and coral health in the Bay of Biscay.]
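Both constraints described above follow from simple physics: refraction at a flat housing port narrows the field of view (Snell's law, with water's refractive index of 1.333), and ambient pressure grows by roughly one atmosphere per 10 m of depth. A sketch of both calculations in plain Python:

import math

N_WATER = 1.333  # approximate refractive index of sea water

def underwater_fov_deg(fov_air_deg):
    # In-water field of view behind a flat port, via Snell's law.
    half = math.radians(fov_air_deg / 2)
    return 2 * math.degrees(math.asin(math.sin(half) / N_WATER))

def pressure_atm(depth_m):
    # ~1 atm of water pressure per 10 m, plus 1 atm at the surface.
    return 1 + depth_m / 10

print(underwater_fov_deg(50))  # ~37 deg: roughly the 25% shrink cited
print(pressure_atm(2400))      # ~241 atm at the deepest litter surveys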


Technology Trends

continued from page 9

…that are built in and over the thinly sliced silicon wafer; the wafer ultimately undergoes a multitude of microfabrication process steps such as masking, etching, doping, and metallization.

[Figures: Delamination at 20× magnification, with focus at the bottom of the silicon (top) and at the middle of the silicon (bottom).]

ICs, which have become the main component of nearly all electronic devices, are comprised of a microscopic array of electronic circuits and components implanted onto the surface of a single crystal of semiconductor material such as silicon. Within the integrated circuit, components, circuits, and base materials are all made together out of a single piece of silicon wafer. Hundreds of integrated circuits are made at the same time on a single, thin slice of silicon wafer, and then they are cut apart into individual IC chips.

Wafers accumulate residual stress during the growth, sawing, grinding, etching and polishing processes, and cracks may be generated throughout all of these processes. Cracks may also occur when the integrated circuits themselves are cut into individual ICs. If undetected, such cracks can render wafers unusable in subsequent manufacturing stages. As a result, it is important to inspect the raw material substrate for impurities before processing and to detect any defects during processing, in order to reduce waste and keep costs down.

By using indium gallium arsenide (InGaAs) cameras operating in the short-wave infrared (SWIR) wavelength band from 0.9 µm to 1.7 µm, it is possible to see through semiconductor silicon substrates. This ability to see through the material enables manufacturers to generate infrared images that highlight defects such as cracks within the silicon wafer.

[Figure: InGaAs cameras operating in the SWIR wavelength band from 0.9 µm to 1.7 µm acquire images that highlight defects within the silicon wafer.]

Radiant Optronics, a company with a strong focus on the semiconductor field, wafer fabrication, service laboratories, packaging, and PCB assembly in Asia, has developed an infrared imaging microscope that IC manufacturers can use to inspect for internal defects and cracks.

The microscope utilizes a Goldeye G-008 SWIR camera from Allied Vision (Stadtroda, Germany; www.alliedvision.com), which features a 320 x 256 InGaAs sensor with a 30 µm pixel size and is sensitive in the short-wave infrared spectrum from 0.9 µm to 1.7 µm at up to 344 fps. The SWIR camera, which is thermo-electrically cooled for low-noise images, has a GigE Vision interface, a compact form factor (55 mm x 55 mm x 78 mm), and Allied Vision's Vimba software development kit, which enables users and developers to program their own applications across Windows and Linux platforms.

"The Goldeye G-008 is a low-resolution model with an affordable price. The low resolution is sufficient to detect the defects, so it is the ideal choice for this cost-sensitive application. Our customers can benefit from the Goldeye's outstanding performance and we enjoy support from Allied Vision's Asia-Pacific office just next door in Singapore," explained Christopher Cheong, Director of Radiant Optronics.
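Silicon's SWIR transparency follows from its bandgap: photons with less energy than the roughly 1.12 eV gap (wavelengths longer than about 1.1 µm) pass through the wafer rather than being absorbed, which is why a camera sensitive out to 1.7 µm can image defects inside the material. A quick check of the cutoff wavelength in plain Python:

# Cutoff wavelength above which silicon becomes transparent:
# lambda = h * c / E_gap
H = 6.626e-34    # Planck constant, J*s
C = 2.998e8      # speed of light, m/s
EV = 1.602e-19   # joules per electron-volt
E_GAP_SI = 1.12  # silicon bandgap at room temperature, eV

cutoff_um = H * C / (E_GAP_SI * EV) * 1e6
print(f"{cutoff_um:.2f} um")  # ~1.11 um, inside the 0.9-1.7 um SWIR band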


Integration Insights

How deep learning is enhancing machine vision

Developers increasingly apply deep learning and artificial neural networks to improve object detection and classification.

Johannes Hiltner

[Figure: Deep learning technologies and convolutional neural networks (CNNs) can learn and distinguish between defects. Good samples ("OK") and defective samples ("NOK") feed a deep CNN that turns simple and complex features into a classifier.]

Digitalization has a firm grip on industrial production, with processes increasingly automated as part of the Industrial Internet of Things (IIoT). In the IIoT, which is also known as Industry 4.0, various machines and robots take on more everyday production tasks. In assembly, for example, new, compact and mobile robots, such as collaborative robots (cobots), often work hand in hand with their human colleagues.

The IIoT's highly automated and universally networked production flows, characterized by machine-to-machine interaction, depend on machine vision to reliably identify a wide range of objects in the flow of goods within factories and the rest of the process chain. Machine vision increases the efficiency and safety of these workflows, and has become an indispensable tool for engineers seeking to automate and speed up production.

Today, innovative machine learning and deep learning processes can ensure even more robust recognition rates. Thanks to such advances in artificial intelligence, companies can benefit from a higher degree of automation, much greater productivity, and more reliable identification, allocation, and handling of a wider range of objects throughout the entire value chain.

As the "eye of production," machine vision software has become an essential element of the technology, processing unstructured data such as digital images and video generated by cameras to identify objects by their external optical features alone. Such software works very fast and achieves extremely high and reliable identification rates and, consequently, is used across industries for a wide range of tasks such as fault inspection, workpiece positioning and the automatic handling of objects in robotics.

(Johannes Hiltner is a product manager at MVTec Software GmbH (Munich, Germany; www.mvtec.com) and responsible for the HALCON software, the company's flagship product.)

Analyze and evaluate large data sets

In an effort to make the identification process even more robust and adaptable to the requirements of flexible and networked IIoT processes, machine vision software developers increasingly rely on methods from the field of artificial intelligence (AI). Deep learning is an area of machine learning that enables computers to be trained and learn through architectures such as convolutional neural networks (CNNs).

The special attribute of AI, machine-learning and deep-learning technologies is that they comprehensively analyze and evaluate large amounts of data (big data) in order to train many different classes and thereby more effectively distinguish between objects. Increasingly, this data is generated within the IIoT. This can be digital image information as well as data from sensors, scanners, and other process components.

In order to use deep learning, CNNs must first be trained. This training process relates to certain external features that are typical of the object, such as color, shape, texture, and surface structure. The objects are divided into different classes based on these properties to allocate them more precisely later.

In conventional machine vision methods, a developer must laboriously define and verify the individual features manually. With deep learning, however, self-learning algorithms are used to automatically find and extract the unique patterns in order to differentiate between the particular classes.


Integration Insights

[Figure: Methods such as deep-learning technologies and convolutional neural networks (CNNs) from the field of artificial intelligence (AI) are entering machine vision to help image-processing systems learn and distinguish between defects and make identification processes even more precise.]

Train objects through classification

How does the training process work exactly? The user first supplies image data that has already been provided with labels. Each label corresponds to a tag that indicates the identity of the particular object. The system analyzes this data and, on this basis, creates or "trains" corresponding models of the objects to be identified.

Due to these self-learned object models, the deep learning network is now able to assign newly-added image data to the appropriate classes, such that their data content or objects are also classified. Thanks to this allocation to certain classes, the items can then continue to be identified automatically.

A sample image for direct comparison is therefore no longer necessary for each individual object. After all, deep learning processes are able to learn new things independently. By taking the features of all image data into account, conclusions can then be drawn about the properties of a certain class, which significantly improves the identification rates. This process is called "inference."

Therefore, deep learning algorithms are also very suitable for optical character recognition (OCR) applications, that is, for precisely identifying letter or number combinations. Due to the extensive training process, the typical features of the individual characters are precisely identified based on the defined classes. However, since there are many different fonts, some with deviating features such as serifs, problems may arise in allocating them with certainty.

Advanced machine vision software can solve this problem. MERLIC and HALCON from MVTec (Munich, Germany; www.mvtec.com), for example, contain an OCR classifier based on deep learning algorithms, which can be accessed via many pre-trained fonts. As a result, a wide range of typefaces, such as dot-print, SEMI, industrial, and document-based ones, can be identified with certainty thanks to a single, universal, pre-trained classifier.

Avoid excessive training time

However, companies often shy away from using AI-based technologies such as deep learning, since, due to their complexity, they require developers to have extensive expertise. The training process generally requires many sample images to recognize objects.

[Figure: Modern machine-vision solutions enable companies to train neural networks themselves. Photo credit: MVTec Software GmbH]
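The label, train, infer workflow described above maps onto a few lines in any modern framework. A minimal sketch in Python with the Keras API (a generic illustration of the workflow; MVTec's HALCON and MERLIC expose their own operators for this, and the folder layout and image sizes here are assumptions):

# Labeled images in, trained classifier out, then inference on new
# images. Assumed folder layout: data/train/<class_name>/*.png,
# where the label is the folder name.
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=(128, 128), batch_size=32)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(len(train_ds.class_names), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)  # training: build the class models

# Inference: assign a new image to one of the learned classes.
img = tf.keras.utils.load_img("new_part.png", target_size=(128, 128))
probs = model(tf.expand_dims(tf.keras.utils.img_to_array(img), 0))
print(train_ds.class_names[int(tf.argmax(probs, axis=1))])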


Integration Insights

Up to 100,000 comparison images may be needed for each class in order to achieve adequate recognition rates. Even if the necessary sample data is available, the training process takes an enormous amount of time. Usually, the programming work for identifying different defect classes during fault inspection is extremely complex, too. The reason is that highly skilled employees with suitable training are required for this purpose.

Modern machine vision solutions, which already include a large number of deep learning functions, can help. The new version 17.12 of the standard software MVTec HALCON enables companies to train convolutional neural networks (CNNs) themselves without a great deal of time and money. After all, the software is already equipped with two networks that are optimally pre-trained for industrial use: one is optimized for speed and the other for maximum recognition rates.

The training process, therefore, works with only a few sample images provided by the customer, and thus is tailored to the customer's exact applications, resulting in neural networks that can be precisely matched to the customer's specific requirements. User companies can significantly reduce the amount of programming work needed by easily and systematically classifying new image data, saving time and money. Normally, they do not need any in-depth AI expertise; companies can use their existing personnel without problems to train the network.

Detect defects efficiently

Recognizing defects is a time-consuming process because the appearance of defects, such as tiny scratches on an electronic part, can never be accurately described in advance. Therefore, it is very difficult to manually develop suitable algorithms that can detect any conceivable fault based on sample images. An expert would have to manually view hundreds of thousands of images and program an algorithm that describes the error as precisely as possible based on his observations. This would simply take too long.

Deep learning technologies and CNNs, on the other hand, can independently learn certain characteristics of defects, and precisely define the corresponding problem classes. So, only 500 sample images are needed for each class, based on which the technology trains, verifies, and thereby precisely detects the different types of defects.

This process takes only a few hours. Not only does it minimize the amount of time required, but the recognition rates are also much higher than with manually programmed defect classes. The self-learning algorithms, therefore, help to significantly reduce identification errors, while the error quotas for manual programming can be inefficiently high.

Many industries benefit

Machine vision technologies based on deep learning and CNNs can be used profitably in many different branches of industry and applications. In the electronics industry, the inspection process can be automated and accelerated. With the help of self-learning methods, all conceivable product defects can be effectively detected, as described above. Even the tiniest scratches or cracks in circuit boards, semiconductors, and other components are reliably identified, which allows the removal of corresponding parts to be automated.

The food and beverage industry benefits from deep learning technologies, too. For example, poor-quality fruits and vegetables can be detected more precisely before they are packaged or further processed.

The processes are also used in automotive engineering. This industry, in particular, is characterized by an especially high degree of automation. Here, for example, self-learning algorithms reliably identify tiny paint defects that are not visible to the naked eye.

Another important area of application is pharmaceuticals. Pills often look very similar on the outside, but contain entirely different active substances. Through deep learning and CNNs, the drugs can be very reliably identified, inspected, and distinguished from each other so they are always placed in the correct blister packs.

Conclusion

Technologies based on artificial intelligence, such as deep learning and CNNs, are an important part of modern machine vision solutions today. Deep learning enables companies to train neural networks themselves without any in-depth expert knowledge and with minimal effort, especially when programming defect classes during error inspections. The result is that companies can save money and benefit from much more robust recognition rates as well as better classification results.
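In generic terms, the pre-trained-network approach the article describes is transfer learning: start from a network already trained on a large image corpus and retrain only a small classification head on a few hundred samples per defect class. A hedged sketch in Python with the Keras API (illustrative only; HALCON 17.12 ships its own industrially pre-trained networks rather than the ImageNet backbone used here):

# Transfer learning: reuse a pre-trained backbone, train a new head
# on a small defect dataset (~500 images per class, per the article).
import tensorflow as tf

backbone = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
backbone.trainable = False  # keep the pre-trained features fixed

num_defect_classes = 6      # assumed example: six defect types
model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(num_defect_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# With only the small head left to train, a few hundred labeled
# images per class and a few epochs are typically enough.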


Integration Insights

When to invest in LED


strobe controllers
LED strobe controllers offer many benefits for applications involving low
contrast, motion, mark quality assessment, OCR and ambient lighting issues.

Alan L. Lockard

After many years as an end user designing and installing machine vision inspection systems without adding a strobe controller, it has become clear that the benefits of LED lighting controllers far outweigh the cost in applications that involve:
• Low contrast
• Inspection and motion
• Barcode grading and mark quality assessment
• Optical character recognition
• Ambient lighting issues

Low contrast
Machine vision applications involving image contrasts below 20% present specific challenges. In such low contrast applications, even the slightest variation in LED output intensity can have a substantial impact on the performance of the inspection system.

An LED array operating at 100% power generates heat that shortens the life of the LED array, gradually diminishing the output intensity over time. Even with many new LEDs, as they heat up from 25°C to 90°C, the brightness of the LEDs can drop by as much as 40%.

This reduction in output intensity may begin manifesting itself by means of false rejects. Eventually, adjustments to either the lighting output or the camera system will be required to restore optimal performance.

A quality LED lighting controller can provide consistently uniform lighting over the life of the project. For continuous operation, the life of the LED array can be extended by operating at less than 100% power. For strobe control, even when overdriving at lower duty cycle operation, the LED array will last longer.

Operating the LED array at reduced output or in strobed mode reduces heat generation and substantially slows the degradation of the LED array output. LED controllers that regulate the current to the LED array rather than the voltage also provide more consistent illumination intensity (Figure 1).

Figure 1: While LED lights are specified as either 12V or 24V, the actual light output from the LED results from current through the semiconductor device. Consequently, all LED device manufacturers specify current control for the most efficient use.

Alan L. Lockard is an Advanced Certified Vision Professional and Principal Engineer in Reagent Engineering at bioMérieux, Inc. (Hazelwood, MO, USA; www.biomerieux.com).

Inspection and motion
In-line machine vision systems tend to be installed on machines where the object being inspected is moving on a conveyor.

Depending on the speed of the conveyor, the image will be blurred due to the amount of movement of the object during the exposure period. Factors involved are time (exposure), area (aperture) and intensity (light).

If time is reduced by use of a shorter exposure time to mitigate blur, then either the area or the intensity must be increased for sufficient exposure. While opening the aperture by one f-stop doubles the amount of light gathered by the lens, it also reduces the depth of field (DOF), which may be undesirable.

Continuous lighting is limited to what light output is available at 100% power. The only way to increase intensity of continuous lighting is to increase the size and wattage of the LED array. This also increases initial expense and operating costs, and produces undesirable heat.

Pulsing or strobing the LED array, especially overdriving it, provides high intensity illumination at a short duty cycle. This complements short exposure imaging used for moving objects, and overdriving the array has no detrimental impact on the lifespan or performance of the array (Figure 2).

Figure 2: Converting a constant illumination system to pulse illumination is straightforward. The trigger for the camera is sent to a lighting controller. The controller provides precise pulse width timing, power and brightness control for the lighting pulse. This ensures that the lighting pulses during the camera exposure time and that the light energy is the same for every image.
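The pulse-within-exposure relationship in Figure 2 is simple to reason about numerically. Below is a minimal sketch of that timing arithmetic in Python; the exposure, pulse width, frame rate and overdrive values are illustrative, not taken from any particular controller's datasheet.

# Minimal sketch of the strobe-timing arithmetic behind Figure 2.
# All numbers are illustrative assumptions.

def strobe_check(exposure_s, pulse_s, frame_rate_hz, overdrive_pct):
    """Verify the light pulse fits inside the camera exposure window and
    report duty cycle and average drive level relative to rated power."""
    if pulse_s > exposure_s:
        raise ValueError("pulse must fit within the exposure window")
    duty_cycle = pulse_s * frame_rate_hz        # fraction of time the LED is on
    avg_power_pct = overdrive_pct * duty_cycle  # mean thermal load on the array
    return duty_cycle, avg_power_pct

# 250 us exposure, 100 us pulse, 30 fps, LED overdriven to 500%:
duty, avg = strobe_check(250e-6, 100e-6, 30, 500)
print(f"duty cycle = {duty:.2%}, average drive = {avg:.2f}% of rated power")

Even at 500% overdrive, the average drive in this example is about 1.5% of rated power, which is why strobed operation can extend rather than shorten LED array life.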
Barcode grading (verifying) and OCR
Inspection of barcodes at the point of printing helps to ensure reliable readability in the field. Referred to as verifying or grading, there are a variety of recognized standards in use today. All compare numerous physical attributes, each scored individually and combined into an overall print quality grade.

Attributes such as symbol contrast and modulation are affected by illumination intensity, exposure time and system gain. Once set, exposure time and system gain remain relatively constant in the machine vision camera. However, because LED output degrades over time, only a high-quality LED lighting controller can ensure consistently uniform lighting for the life of the project in such barcode verification applications.

Mark quality assessment at the point of printing the barcode is likely to be on a moving object or surface. Therefore, the steps to mitigate blurring of the moving image mentioned earlier must be considered. Stop-action imaging, by combining short exposure time and high intensity strobing, will provide consistent barcode verifying without motion blur that could result in poorer barcode grades. Calibration of the machine vision system for barcode grading can be done using a Calibrated Conformance Standard Test Card.

One application involves a GS1 DataMatrix barcode being printed and verified on webbing moving at 750 inches per minute. The machine vision exposure was fixed at 250µsec to minimize motion blur. Because the GS1-specified minimum resolution is 10 pixels per cell, the working distance was set so that the resolution was approximately 10 pixels per cell. The LED array was strobed for 1msec at 500%, and the gain of the machine vision sensor was set to 1, then increased incrementally until the Symbol Contrast (SC) measured by the sensor matched the SC recorded on the test target. Making gain adjustments from the bottom up proved more reliable.

This methodology produced an in-line verifier that grades GS1 barcodes in harmony with the benchtop verifier used by the QC department to verify QC samples. Setting the acceptance criteria on the in-line verifier one letter grade higher than that of the QC benchtop verifier provides a safety margin for print quality degradation caused by subsequent handling and packaging.
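That bottom-up gain adjustment can be expressed as a short calibration loop. The sketch below is hypothetical: the camera object and its set_gain()/read_symbol_contrast() methods stand in for whatever API a given smart camera actually exposes.

# Minimal sketch of the bottom-up gain calibration described above:
# start at gain 1 and raise it until the in-line sensor's measured
# Symbol Contrast matches the value recorded on the calibrated test card.
# The camera object and its methods are hypothetical placeholders.

def calibrate_gain(camera, target_sc, gain_step=0.1, max_gain=16.0):
    """Increase gain from 1 until measured Symbol Contrast reaches target_sc."""
    gain = 1.0
    while gain <= max_gain:
        camera.set_gain(gain)
        if camera.read_symbol_contrast() >= target_sc:
            return gain      # in-line verifier now matches the benchtop card
        gain += gain_step
    raise RuntimeError("target Symbol Contrast not reachable; check lighting")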
The principles just discussed for barcode grading also apply to optical character recognition (OCR). Selecting a quality LED lighting controller and using OCR fonts when printing provide an extremely robust OCR reading application.

Many smart camera-based machine vision systems come with auto-tuned OCR capabilities. These read consistently printed and illuminated images of OCR fonts with a high degree of success. Using a quality LED strobe controller enables consistent reading of 2mm-high, low-contrast, laser-marked OCR font characters with a high degree of reliability.

Ambient lighting
For a machine vision system to work predictably and accurately on the plant floor, there should be no effects due to ambient lighting. If an inspection is influenced by the operator standing between the overhead lighting and the inspection, there will be problems.

One method of mitigating the effect of ambient lighting is to provide some sort of shrouding over the inspection area to block out ambient light. However, such an approach also obstructs visual inspection of the object.



In pharma and other quality-critical environments, shrouding sections of the production line poses issues with line-clearance procedures. By using high intensity strobing, the influence of ambient lighting on the inspection can be substantially reduced.

With an overdriven LED strobe, the ambient light is literally overpowered. This theory can be easily tested by unplugging the LED array and triggering an inspection. If the inspection image is completely dark, the ambient light is very unlikely to have an impact on the inspection.

When using monochromatic machine vision cameras, use of monochromatic LED lighting can also help mitigate the effects of ambient lighting. Choosing the appropriate monochrome LED color is really a matter of what works best with your inspection application. However, once a color is chosen, use of a matching bandpass color filter on the machine vision camera lens limits the impact of the ambient lighting to only the range of the bandpass filter.

Use of a monochromatic LED light source provides a narrow but repeatable imaging scenario for the machine vision system. Reducing as many extrinsic variables of a machine vision system as possible adds to system performance and reliability.

Choice of LED controller
Because these applications were all validated systems, RT220-20 two-channel and RC-120 single-channel controllers with Ethernet communications from Gardasoft Vision (Cambridge, UK; www.gardasoft.com) were specified (Figure 3). These controllers regulate the current to the LED light array, provide overdriving capability and are only configurable over Ethernet via embedded web pages, which keeps the validated settings in their validated state by limiting access.

Figure 3: The Gardasoft RT220-20 two-channel (left) and RC-120 single-channel (right) controllers regulate current to LED lights, provide overdriving capability and are only configurable over Ethernet via embedded web pages to help keep validated settings in their validated state by limiting access.

Snippets of program code included with the machine vision smart cameras used provide communication with both the RT and RC controllers. For new "greenfield" applications, it would be recommended to couple Gardasoft controllers with machine vision cameras that utilize triniti technology. This technology offers tight integration of the machine vision camera and the LED controller.
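As a rough illustration of such controller communication, the sketch below sends ASCII commands to an Ethernet-connected lighting controller over a raw TCP socket. The IP address, port number and command strings are placeholder assumptions, not Gardasoft's actual protocol; the vendor's own documentation defines the real command set.

# Minimal sketch of recipe-style configuration of an Ethernet LED
# controller from a host program. Host address, port and the ASCII
# command strings below are hypothetical placeholders.
import socket

def send_command(host, command, port=5000, timeout=2.0):
    """Open a TCP connection, send one ASCII command line, return the reply."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall((command + "\r\n").encode("ascii"))
        return sock.recv(256).decode("ascii").strip()

# Hypothetical recipe: set channel 1 pulse width and current, then arm it.
controller = "192.168.1.100"
for cmd in ["SET CH1 PULSE 1000", "SET CH1 CURRENT 500", "ARM CH1"]:
    print(cmd, "->", send_command(controller, cmd))

Driving configuration from a program rather than a web page is what makes the recipe-based, multi-product setups described below practical.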
Whereas the applications mentioned in this article are fixed and unlikely to require reconfiguration over the life of the system, many machine vision applications become part of more flexible automation requiring a recipe-based solution for many different product families. Having such an integrated machine vision camera and lighting configuration will help to meet such demanding applications with greater ease than previous platforms.

This article does not reflect the views or position of the Company but those of the author.



Industry Solutions Profile

Embedded camera brings 3D mapping to tunnel boring machines
Mounting an embedded camera within the cutting head of a tunnel boring machine
allows contractors to document rock face structures as boring progresses.

Robert Wenighofer

In the past, drilling and blasting methods were used to excavate tunnels, a process that was labor intensive and time consuming. Today, more efficient tunnel boring machines (TBMs) are used as an alternative, producing smooth tunnel walls while limiting any disturbances to the surrounding soil or rock. Although expensive, such TBMs have the advantage over drilling and blasting methods since, when tunnel lengths are long, they promise to be more efficient and can perform such tasks in a relatively short period of time.

One of the advantages of drilling and blasting methods, however, is that open access is provided to the tunnel face, making it relatively simple to observe, analyze and record geological features. Such geological mapping is more difficult when using mechanized tunneling methods, since the cutting head obstructs the view of the tunnel face. To overcome this, the Institute for Subsurface Engineering at the Montanuniversität Leoben (Leoben, Austria; www.subsurface.at, www.unileoben.ac.at) has developed an embedded camera system to provide 3D reconstructions of the tunnel face. In operation, the system visualizes the spatial position of stratifications, the depth of any cavities and any discontinuities of the rock.

Developed in collaboration with Geodata Group (Leoben, Austria; www.geodata.at), the system comprises an off-the-shelf camera mounted in a ruggedized housing and a control unit mounted in the disc cutter housing of the TBM (Figure 1). Currently, the TBM is building the exploratory tunnel of the Brenner Base Tunnel, the world's longest tunnel now under construction. The camera system was deployed at the Tulfes-Pfons section of the tunnel at Innsbruck, Western Austria, by a joint venture between Strabag AG (Cologne, Germany; www.strabag.com) and Salini Impregilo S.p.A (Milan, Italy; www.salini-impregilo.com).

Figure 1: Multiple disc cutters are used in a tunnel boring machine to cut the rockface. In a single disc housing, the disc cutter (middle right), shown in this front elevation, rotates as it cuts the rockface. By housing a GigE camera and control unit (shown in red) within this disc cutter (top, left), a circular 85° field of view (FOV) of the face can be imaged as the TBM rotates.

Robert Wenighofer, The Institute for Subsurface Engineering at the Montanuniversität Leoben (Leoben, Austria; www.subsurface.at).


After cutting, it is necessary to visualize any new geological features that may be present. To do so, the boring machine is retracted and the material between the cutter head and face cleaned. A GigE camera (shown in red) and control unit housed within one of the disc cutters (top, left) then images a circular 85° field of view (FOV) of the face as the TBM is manually rotated away from the rockface.

To expand this coverage, the same camera system is then re-positioned in a different disc housing, and the TBM manually rotated again. This process is then repeated at four or five different disc housing positions (Figure 2). While the TBM head rotation takes approximately 3 mins, it takes 10 to 15 mins to reposition the camera system at each different disc housing location. To expedite this process, it would at first appear that several embedded cameras could be used. However, the massive 25-ton force that a single disc cutter exerts and the tons of excavation material produced would impede permanently mounting a camera system in the cutter head.

Figure 2: Cutter head of the TBM. To obtain a complete image of the rockface, the camera is placed at either four (shown in green) or five positions (shown in red) within disc housings of the cutting head, while the camera controller is constantly mounted in a double disc cutter near the rotation axis of the TBM. This results in between either 700 or 1100 images being captured, depending on whether four or five positions are employed.

Camera and controller
To image the rockface as the TBM head is manually rotated, a Prosilica GT2000 GigE camera from Allied Vision (Stadtroda, Germany; www.alliedvision.com) with a 2/3in CMOS 2048 × 1088 sensor from ams Sensors Belgium (Antwerp, Belgium; www.cmosis.com) was fitted with a 5mm focal length C-Mount lens to provide the 85° FOV of the face. The camera system is housed in a ruggedized housing incorporating a white LED ring light developed by Geodata Group (Leoben, Austria; www.geodata.at). While the camera is operated in a continuous mode, capturing two frames per second, the white LED is strobed over a period of 4ms to ensure a visible light output of 10,000 Lumens so that no motion blur occurs as the camera system is rotated. This level of illumination is sufficient to illuminate break-outs in the rockface of more than 1.7 m depth, as can be seen in Figure 4.

As images are captured by the GT2000 camera, they are transferred over its GigE interface to a fanless quad-core embedded PC from Vecow (New Taipei City, Taiwan; www.vecow.com) that acts as a camera control unit. Camera control software running on the PC, based on Allied Vision's Vimba SDK, triggers the camera at 2 fps, which provides sufficient overlap between images to perform 3D reconstruction and a surplus of images. To ensure that the position of each image is known as the TBM head rotates, the camera control unit incorporates a uniaxial inclination sensor from Posital Fraba (Heerlen, The Netherlands; www.posital.com) mounted inside the camera controller and interfaced to the embedded PC over an RS-232 interface.

In this way, as each image is captured by the camera, it is assigned an angular value from the inclination sensor. This inclination angle and the known relative position of the disc housing are then used to determine the absolute position of the camera in the 3D coordinate system of the cutter head. Knowing this information, both two-dimensional and 3D photogrammetric data of the rockface can then be generated.
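The geometry behind that tagging step is straightforward and can be sketched in a few lines of Python. The housing radius, angles and file names below are illustrative assumptions; the actual system derives them from the cutter-head drawings and the inclinometer readout.

# Minimal sketch of pairing each captured frame with an inclinometer angle
# and deriving the camera position in the cutter-head coordinate system
# from the disc housing's known mounting radius. Values are illustrative.
import math

def camera_position(housing_radius_m, inclination_deg):
    """Camera (x, y) in the cutter-head plane for a housing mounted at a
    fixed radius, rotated to the angle reported by the inclination sensor."""
    a = math.radians(inclination_deg)
    return housing_radius_m * math.cos(a), housing_radius_m * math.sin(a)

# Tag a short sequence of frames captured at 2 fps during one head rotation:
housing_radius = 3.2                 # meters, illustrative
frames = [("img_0001.png", 0.0), ("img_0002.png", 4.1), ("img_0003.png", 8.3)]
for name, angle in frames:
    x, y = camera_position(housing_radius, angle)
    print(f"{name}: angle={angle:5.1f} deg -> x={x:+.3f} m, y={y:+.3f} m")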


It is necessary to visualize the quality of the images of the tunnel face before the TBM is rotated, to ensure that the image quality is acceptable and that the protective glass of the camera unit is not fogged up or soiled. Because of this, the camera controller is set to automatically initiate recording after the camera control unit has powered up. Then, by incorporating a wireless network card in the camera controller, stationary images can be analyzed remotely using any Android-powered handheld device such as a smartphone or a personal digital assistant (PDA).

Mapping the face
To obtain a complete image of the rockface, only the camera unit is placed at either four (shown in green) or five positions (shown in red) within the disc housings of the cutting head. This results in between either 700 or 1100 images being captured, depending on whether four or five positions are employed. After each image is captured, it is stored along with its associated unique angular value from the inclination sensor. After these images are captured, the camera and camera controller are removed and the data transferred over a USB interface to an Intel i7-based desktop PC with a GTX970 PCI Express graphics card from NVIDIA (Santa Clara, CA, USA; www.nvidia.com).

To generate 2D and 3D maps of the rockface and allow image measurements to be made, captured images are then processed using the Photoscan software package from Agisoft (St. Petersburg, Russia; www.agisoft.com). Using this software, the camera can first be calibrated before use to rectify any possible distortion from the camera's lens, as well as computing the parameters of the camera matrix such as the focal length and projection center of the lens. This information, along with each 2D image and camera position data from the inclination sensor, is then processed by Photoscan in a process known as bundle block adjustment. In this process, a set of 2D images are taken from different viewpoints, and the relative motion of the camera and its optical characteristics are used to generate an optimal 3D reconstruction of the scene.
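Photoscan's calibration routine is proprietary, but the step is conceptually the standard pinhole calibration available in open-source tools. The following sketch shows the equivalent with OpenCV and a chessboard target; the file names and the 9 x 6 corner pattern are assumptions, not part of the system described here.

# Conceptual sketch of lens calibration with OpenCV: recover the camera
# matrix (focal length, projection center) and distortion coefficients
# from images of a chessboard target. File names and board size assumed.
import glob
import cv2
import numpy as np

pattern = (9, 6)                                  # inner corners of the board
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points, size = [], [], None
for path in glob.glob("calib/*.png"):             # calibration captures
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)
        size = gray.shape[::-1]

rms, K, dist, _, _ = cv2.calibrateCamera(obj_points, img_points, size, None, None)
print("reprojection error:", rms)
print("camera matrix:\n", K)

The recovered matrix K and distortion coefficients are exactly the quantities the article says must be known before bundle block adjustment can produce an accurate 3D reconstruction.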
Using the 2D image sequences generated as tunneling progresses, geologists can characterize and measure such features as spatial discontinuities and visualize the behavior of the excavation process (Figure 3). In this example, three images of the rock face are shown as tunneling progresses (from left to right). In the monochrome image on the left, there is a failure of the tunnel face which may be due to structural weakness in the rock. To further analyze this structural weakness, geological features like schistosity can be displayed (graphic, far left) as a stereographic projection. This is accomplished using a software plug-in based on the .NET application programming interface (API) of AutoCAD software from Autodesk (San Rafael, CA, USA; www.autodesk.com).

Figure 3: Using the 2D image sequences generated as tunneling progresses, geologists can characterize and measure such features as spatial discontinuities and visualize the tunnel face and the behavior of the excavation process as it continues.

Color-coded elevation images that represent the distance between the cutting head and the tunnel face can also be used to detail the structure of the surface of the face (Figure 4). In this color-coded image of the rockface (left), discontinuities can be clearly visualized. This is important since, when tunneling, the cutting disks will be subject to different forces at different points on the surface, possibly requiring later repair or replacement of the cutting discs. For contractors, objectively documenting this is vital since any such repairs may be expensive and need to be passed on to their customers.

Figure 4: Color-coded elevation images that represent the distance between the cutting head and the tunnel face can also be used to detail the structure of the surface of the face. In this color-coded image of the rockface (left), a huge break-out can be clearly visualized.

Field-tested in the construction of the Brenner Base Tunnel in the Austrian Alps, the camera system has been used for more than a year, where it has proven to be an appropriate means of documenting and visualizing the rockface even in the harsh environments created by TBMs. Patent protection for the system is currently being sought in Central Europe.

Companies mentioned
Agisoft, St. Petersburg, Russia; www.agisoft.com
Allied Vision, Stadtroda, Germany; www.alliedvision.com
ams Sensors Belgium, Antwerp, Belgium; www.cmosis.com
Autodesk, San Rafael, CA, USA; www.autodesk.com
Geodata Group, Leoben, Austria; www.geodata.at
NVIDIA, Santa Clara, CA, USA; www.nvidia.com
Posital Fraba, Heerlen, The Netherlands; www.posital.com
Salini Impregilo S.p.A, Milan, Italy; www.salini-impregilo.com
Strabag AG, Cologne, Germany; www.strabag.com
The Institute for Subsurface Engineering at the Montanuniversität Leoben, Leoben, Austria; www.unileoben.ac.at
Vecow, New Taipei City, Taiwan; www.vecow.com

Reference:
Wenighofer, R., Galler, R., Six, G., Chmelina, K.: Cameras for the digitized 3D geological documentation of the tunnel face from cutter heads of TBMs, Geomechanics and Tunnelling, 10 (2017) No. 6, 760-766.

product focus on High-speed imaging

High-speed cameras target machine vision applications
Faced with imaging high-speed events, developers can choose from those that transfer
data over industry-standard camera-to-computer interfaces and stand-alone cameras.

Andrew Wilson, European Editor

Traditionally, high-speed cameras were expensive and relegated to applications such as ballistics and automotive testing, where the cost of such systems was secondary. Today, however, with the advent of high-speed CMOS imagers, low-cost memory, high-speed camera interfaces and low-cost image capture and analysis software, high-speed cameras have found wider market acceptance in fields such as scientific research and film and video production.

CCD vs CMOS
In the late 1970s, the emergence of CCD cameras began to challenge film-based products, and numerous CCD architectures including full-frame, frame-transfer and interline-transfer devices were devised to meet the demands of different applications. For high-speed imaging, the disadvantages of using CCDs included the need to employ mechanical shutters with full-frame devices, charge smearing associated with frame-transfer devices and the reduction in sensitivity of interline-transfer devices due to the devices' interline mask (see "Architectures commonly used for high performance cameras," http://bit.ly/VSD-CCD, on the website of Andor Technology (Belfast, Ireland; www.andor.com)).

In the mid-1990s, CMOS-based imagers began to challenge the stronghold once commanded by CCD-based devices. These devices have several advantages over CCD-based sensors when used in high-speed camera systems, namely faster speeds and lower power consumption. They have gained acceptance in high-speed camera designs because, by individually digitizing the captured charge at each photosite, blooming and smearing are eliminated and, when operated in global shutter mode, each pixel is exposed simultaneously, eliminating spatial aberrations from extremely fast-moving objects. Such global shutter modes are, however, achieved at the cost of a significantly reduced frame rate over that of operation in rolling shutter mode (see "Rolling Shutter vs. Global Shutter," http://bit.ly/VSD-SHUT, on the website of QImaging (Surrey, BC, Canada; qimaging.com)).

Figure 1: Optronis' CR-S3500 is a 1280 x 860 CMOS-based camera capable of running at 3,500 fps at full pixel count, specified as having an adjustable exposure time from 2μs to 1/frame rate, i.e. 2-286μs.

Figure 2: The IL5, a 2560 x 2048 CMOS-based camera from Fastec Imaging with a maximum frame rate of 253 fps at full pixel count, can be operated in a number of windowing modes. The fastest of these windows the imager to 64 x 32 pixels, allowing a frame rate of 29,090 fps to be achieved.

Image motion
Knowing the exposure times of a high-speed camera is very important since this, along with the velocity of the moving part, the number of pixels in the image sensor and the camera's field of view (FOV), can have a dramatic impact on the pixel blur that occurs when an image is captured. Needless to say, to make accurate image measurements, the less this pixel blur is, the more accurate the data analysis.

As Perry West, President of Automated Vision Systems (San Jose, CA, USA; www.autovis.com), points out in his excellent paper "High Speed Real-Time Machine Vision," http://bit.ly/VSD-HS, the magnitude of the pixel blur can be mathematically described as:


B = (Vp × Te × Np) / FOV, where Vp is the velocity of the moving part, Te is the exposure time in seconds, Np is the number of pixels spanning the view and FOV is the field of view. Thus, for a part moving at 30cm/s across a 1280-pixel imager over a camera FOV of 1000cm, an exposure time of 33ms (0.033s) would result in a pixel blur of 1.26 pixels. Even a one-pixel blur such as this, West suggests, may become an issue in sub-pixel precision applications, but one that can be resolved by decreasing the exposure time of the camera. Unfortunately, for those choosing a high-speed camera, many manufacturers do not specify the exposure times that can be achieved, merely specifying the frame rate of the camera. Indeed, some even state that the exposure time achievable is the reciprocal of the frame rate (which it may be in some cases).

Frame rate relates to how many frames are captured each second, and shutter speed specifies how long each individual frame is exposed (which can vary). For a graphical explanation of frame rate and exposure, see "Shutter Speed vs Frame Rate," http://bit.ly/VSD-SSFR. Thus, the CR-S3500 from Optronis (Kehl, Germany; www.optronis.com), a 1280 x 860 CMOS-based camera capable of running at 3,500 fps at full pixel count, is specified as having an adjustable exposure time from 2μs to 1/frame rate, i.e. 2-286μs (Figure 1).

Figure 3: Emergent Vision Technologies' HR-2000 is a CMOS-based camera capable of running 2048 x 1088-pixel images at 338 fps. In windowing mode, this can be reduced to 320 x 240 to achieve a 1471 fps rate. With a bandwidth of 10Gbits/s, images can be transferred to a PC without requiring host camera memory.
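West's formula is easy to put to work when selecting an exposure time. The short sketch below implements it directly and inverts it to find the longest exposure that holds blur to a given pixel budget, using the article's own example numbers.

# Direct implementation of West's blur formula, B = (Vp x Te x Np) / FOV,
# plus its inversion to find the longest exposure that keeps blur below
# a chosen limit. Units: part velocity and FOV in the same units (cm here).

def pixel_blur(vp, te, np_pixels, fov):
    """Blur in pixels for part velocity vp, exposure te (s), Np pixels, FOV."""
    return (vp * te * np_pixels) / fov

def max_exposure(vp, np_pixels, fov, blur_limit=1.0):
    """Longest exposure (s) that keeps blur at or below blur_limit pixels."""
    return blur_limit * fov / (vp * np_pixels)

# The article's example: 30 cm/s part, 1280-pixel imager, 1000 cm FOV, 33 ms:
print(pixel_blur(30, 0.033, 1280, 1000))   # ~1.27 pixels, as quoted
print(max_exposure(30, 1280, 1000))        # ~0.026 s for <= 1 pixel of blur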
Regions of interest
Unlike CCD sensors that, in partial scan mode, may suppress entire lines in the vertical direction, CMOS sensors can be operated in region of interest (ROI) mode, where windowed ROIs in both the horizontal and vertical directions across the sensor can reduce the image size and thus increase the readout speed. For this reason, many manufacturers of high-speed cameras specify both the maximum frame rate and the ROI used to achieve these rates.

Understanding ISO in a digital world

Many of those responsible for specifying high-speed cameras will realize the importance of sensitivity. Those in the machine vision industry realize that the specific sensitivity of any given camera is device specific and depends on a number of factors including the quantum efficiency, pixel size, and the shot noise and temporal dark noise associated with the CCD or CMOS imager used in the camera (see "How to Evaluate Camera Sensitivity," FLIR White Paper; http://bit.ly/VSD-CAMSEN).

For those involved in high-speed imaging, the ISO standard is often used to describe sensitivity. ISO Sensitivity (or ISO speed) is a measure of how strongly an image sensor or camera responds to light; the higher the sensitivity, the less light is required to capture a good quality image. However, the ideas and measurements behind it are less well known. If two cameras are rated at ISO 1200, will they produce the same images for the same amount of light? Regrettably, the answer is "not necessarily."

Film sensitivity measurements began in the late 1800s and, since then, many organizations have vied to produce the dominant standard. DIN and ASA ratings were the de-facto standards for many years and, in 1974, the International Standards Organisation (ISO) started collecting these together, eventually creating ISO 6, 2240 and 5800.

In 1998, as digital cameras became ubiquitous, ISO created a new standard specifically for digital still cameras. The latest version, ISO 12232-2:2006, has become the de-facto standard for digital still and video cameras. The rigorous method in the standard requires an illuminated scene, a camera to collect images and a measurement or assessment of the images. Unfortunately, there are options in each of these steps.

For example, the standard allows the use of either daylight or tungsten lighting. For a monochrome camera, especially one without an IR-cut filter, tungsten illumination is advantageous. This is supposed to be declared with a "T", such as ISO 1200T. However, the use of a "D" for daylight is optional, so the compulsory "T" could be lost without being noticed.

Also, illumination in the test can be measured at the scene as a "scene luminance method" or at the sensor as a "focal plane method." The mathematics in the standard should yield the same value for both techniques, but there is a temptation to try them both to see if one gives better results.

The biggest discrepancies come from the choice of image rating technique. There are two noise-based speed measurements, Snoise10 and Snoise40, which are related to film standards. There is also the saturation-based speed measurement, Ssat, but this method does not prevent manufacturers from using an undisclosed amount of gain. The concept of recommended Exposure Index (EI) correctly allows for gain in the camera, but this is closely related to another measurement, the Standard Output Sensitivity, whose result does not mention gain. In truth, gain is not necessarily bad, but it will increase noise in an image, which may prove more undesirable than lower sensitivity.

The differences between Saturation-based and Standard Output Sensitivity are documented in "ISO Sensitivity and Exposure Index," Technical Documentation from Imatest (Boulder, CO, USA; www.imatest.com), which can be found at http://bit.ly/VSD-IMA.

Choosing a camera would be easier if all manufacturers were to use the same measurement; perhaps an update to the standard would be beneficial. In the meantime, the saturation method, with true disclosure of gain and light source technology, would seem to be a good baseline.

Chris Robinson, Director of Technology, iX Cameras (Woburn, MA, USA; www.ix-cameras.com)

26 Januar y 2018 VISION SYSTEMS DESIGN www.vision-s y stems .com

1801vsd_26 26 1/4/18 8:00 AM


Fastec Imaging (San Diego, CA, USA; www.fastecimaging.com), for example, in the design of its IL5, has developed a 2560 x 2048 CMOS-based camera (Figure 2) with a maximum frame rate of 253 fps at full pixel count that can be operated in a number of windowing modes. The fastest of these windows the imager to 64 x 32 pixels, allowing a frame rate of 29,090 fps to be achieved.

With the large amount of data being captured by such cameras, systems developers need to consider whether current camera-to-computer interfaces are enough to sustain high-speed image data transfer to a host computer. If so, then a number of high-speed interfaces are available with which to implement such systems, including 10GigE, USB, CoaXPress and less-commercially popular camera-to-computer interfaces such as Thunderbolt and PCI Express. If not, then such cameras must be equipped with on-board image memory to store sequences of high-speed images that can then be later transferred to host computers for further analysis.

Two examples highlight the image transfer speeds that can be achieved with the latest camera-to-computer interfaces. Using the 10GigE interface standard, Emergent Vision Technologies (EVT; Maple Ridge, BC, Canada; www.emergentvisiontec.com), for example, offers its HR-2000, a CMOS-based camera capable of running 2048 x 1088-pixel images at 338 fps (Figure 3). In windowing mode, this can be reduced to 320 x 240 to achieve a 1471 fps rate. With a bandwidth of 10Gbits/s, this is ample to transfer such images to a PC without requiring host camera memory.

Similarly, in its EoSens 25CXP+ 5120 x 5120 CMOS-based camera, Mikrotron (Unterschleissheim, Germany; www.mikrotron.de) has implemented a four-channel CoaXPress implementation capable of transferring data at 25Gbits/s. In windowing mode, the 80 fps rate of the camera's full 5120 x 5120 pixel imager can be reduced to 1024 x 768, increasing the frame rate to 423 fps.

Other high-speed interfaces such as the 10Gbit/s Thunderbolt 2 and the 64Gbits/s PCIe Gen3 interfaces have been employed by Ximea (Münster, Germany; www.ximea.com) in the MT023MG-5Y and xiB-64 series of cameras respectively. Forthcoming standards such as USB 3.2 Gen 2 also promise faster data transfer rates for developers of high-speed cameras. Although companies such as FLIR Integrated Imaging Solutions (Richmond, BC, Canada; www.flir.com/mv) and others have not, at the time of writing, offered any products based on this standard, the company does offer a number of products based on the 5Gbits/s USB 3.1 Gen 1 standard and is likely to offer 10Gbit/s USB 3.2 Gen 2 cameras soon.

Figure 4: The Phantom V2512 camera from Vision Research is based around a custom 1280 x 800 CMOS imager that is capable of capturing 12-bit pixels at speeds of up to 25,000 fps. This equates to an image capture data rate of approximately 307Gbit/s.

Going faster
While high-speed interfaces are allowing relatively high-speed data transfer between cameras and host computers, even the fastest implementations cannot be used for some of the most demanding high-speed applications. For example, the Phantom V2512 camera from Vision Research (Wayne, NJ, USA; www.phantomhighspeed.com) is based around a custom 1280 x 800 CMOS imager that is capable of capturing 12-bit pixels at speeds of up to 25,000 fps (Figure 4). This equates to an image capture data rate of approximately 307Gbit/s – faster than can be transferred from camera to host computer using any popular commercially-available camera-to-computer interface. For this reason, the camera can be equipped
with up to 288GBytes of memory, so that at speeds of 10,000 fps, an approximately 20s image sequence can be captured. At data rates of 25Gpixels/s, over 7.6s of recording time can be achieved. Once captured, this data can be saved in the camera's 2TByte CineMag IV non-volatile memory and/or downloaded to a host computer over a 10Gbit Ethernet interface.
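The recording-time figures quoted above follow from simple arithmetic on frame size, frame rate and memory depth. The sketch below reproduces them, assuming raw 12-bit pixels with no packing overhead.

# Back-of-envelope arithmetic behind the recording-time figures above;
# a sketch, assuming raw 12-bit pixels with no packing overhead.

def record_seconds(width, height, bits, fps, memory_gbytes):
    """Seconds of footage that fit in on-board memory at a given frame rate."""
    bytes_per_frame = width * height * bits / 8
    return memory_gbytes * 1e9 / (bytes_per_frame * fps)

# Phantom V2512-style numbers: 1280 x 800, 12-bit, 288 GBytes of memory.
print(record_seconds(1280, 800, 12, 10_000, 288))   # ~18.8 s (the "~20 s" quoted)
print(record_seconds(1280, 800, 12, 25_000, 288))   # ~7.5 s at maximum frame rate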
Similarly, the Fastcam SA-Z from Photron (San Diego, CA, USA; www.photron.com) also employs a proprietary CMOS imager with 1024 x 1024 pixels capable of capturing 12-bit image data. Running at 20,000 fps, this equates to a data capture rate of approximately 250Gbit/s. This data is captured in the camera's 128GBytes of internal memory, which can be transferred to an optional FASTDrive 2TByte removable SSD drive or downloaded to a host computer over a dual Gigabit Ethernet interface. Other companies, such as iX Cameras (Woburn, MA, USA; www.ix-cameras.com), also produce such high-speed cameras, all of which can run at variable frame rates based on the ROI chosen.

Responding to light
When specifying which model to choose, systems integrators must, according to Chris Robinson, Director of Technology with iX Cameras, be aware of more than just frame rates, shutter speeds, camera-to-computer interfaces and on-board image memory capability. One of these is sensitivity, or how well the camera responds to light (see sidebar "Understanding ISO in a digital world").

For its part, Vision Research specifies this ISO using the ISO 12232 SAT method, for both tungsten and daylight illumination, as ISO 100,000T, 32,000D when the camera is operated in monochrome mode. "iX Cameras specify this sensitivity as 40,000 for its i-SPEED 726 since the 'D' is optional, but the 'T' is not, and because iX Cameras are tested in daylight, not quoting 'D' after the ISO reading is correct, but sub-optimal," says Robinson.

Because camera shutter speeds are often of the order of microseconds in high-speed imaging applications, ensuring the correct amount of lighting is present is important. To increase the amount of light, many applications use strobed LED illumination. However, when deploying such lighting, systems integrators must ensure that camera, lighting and triggering are tightly coupled to reduce system latency.

To study ballistics, for example, Dr. Chris Yates and his colleagues at Odos Imaging (Edinburgh, Scotland; www.odos-imaging.com) developed a system based on the company's SE-1000 camera. With an input latency of less than 1μs, equivalent to a bullet traveling less than 1mm across the camera's FOV, the time between trigger and exposure is short enough that the bullet will not enter the FOV of the camera until after recording is initiated (see "High-speed vision system targets ballistics analysis," Vision Systems Design, December 2015; http://bit.ly/VSD-BALS).

Interestingly, while digital high-speed imaging stems from film-based origins, many historically film-based standards such as the ISO sensitivity standard have migrated (albeit somewhat unsuccessfully) to the digital domain. Likewise, some concepts such as film-based rotating mirror cameras have been adapted using solid-state imagers, an example of which is the Model 560 high-speed rotating mirror camera from Cordin (Salt Lake City, UT, USA; www.cordin.com) that allows twenty, forty or seventy-eight 2Mpixel CCD images to be captured at frame rates of 4 million fps (Figure 5).

Figure 5: Cordin's rotating mirror-based camera allows twenty, forty or seventy-eight 2Mpixel CCD images to be captured at frame rates of 4 million fps.

In the future, while faster camera-to-computer interfaces will lower the cost of high-speed imaging for many applications, the use of cameras with on-board memory and novel opto-mechanical architectures will remain for those requiring even faster image capture.

Companies mentioned
Andor Technology, Belfast, Ireland; www.andor.com
Automated Vision Systems, San Jose, CA, USA; www.autovis.com
Cordin, Salt Lake City, UT, USA; www.cordin.com
Emergent Vision Technologies (EVT), Maple Ridge, BC, Canada; www.emergentvisiontec.com
Fastec Imaging, San Diego, CA, USA; www.fastecimaging.com
FLIR Integrated Imaging Solutions, Richmond, BC, Canada; www.flir.com/mv
iX Cameras, Woburn, MA, USA; www.ix-cameras.com
Mikrotron, Unterschleissheim, Germany; www.mikrotron.de
Odos Imaging, Edinburgh, Scotland; www.odos-imaging.com
Optronis, Kehl, Germany; www.optronis.com
Photron, San Diego, CA, USA; www.photron.com
QImaging, Surrey, BC, Canada; www.qimaging.com
Vision Research, Wayne, NJ, USA; www.phantomhighspeed.com
Ximea, Münster, Germany; www.ximea.com

For more information about high-speed camera companies and products, visit Vision Systems Design's Buyer's Guide: buyersguide.vision-systems.com
» E-mail your product announcements, with photo if available, to vsdproducts@pennwell.com | Compiled by James Carroll

Vision + Automation

PRODUCTS
SDK supports GigE Vision and USB3 Vision cameras
Medley is a new SDK that is designed for GigE and USB3 Vision cameras to enable users to control all the functions of ISG cameras with low CPU overhead and memory requirements. Features include a GUI with viewer and full camera control, sample code for various API usage models, and barcode decoding.
Imaging Solutions Group
Fairport, NY, USA
www.isgcameras.com

3D cameras feature IP65/67 housing
X30 FA and X36 FA cameras are specifically designed for use in harsh ambient conditions and feature IP65/67 protection class, a GigE switch and two GigE uEye FA cameras that can be mounted at different distances. Each equipped with a 1.3 MPixel CMOS image sensor, the 3D cameras feature working distances from 0.5 to 5 meters, a 100-watt projector unit on which the two GigE cameras can be mounted at different distances, and a software development kit for configuration. The X30 is designed for moving objects, while the X36 (which features the FlexView 2 projector) is designed for stationary objects.
IDS Imaging Development Systems
Obersulm, Germany
www.ids-imaging.com

Sony releases CMOS cameras
A new series of SXGA cameras enables users to move from CCD to CMOS image sensors. The first camera available in the series is the XCG-CG160, which is based on the 1/3" IMX273 global shutter sensor. The IMX273 is a 1.6 MPixel sensor that can achieve 75 fps via GigE interface. The C-Mount camera's features include defect-pixel correction, shading correction with peak and average detection, and area gain to auto-adjust for the target object.
Sony Image Sensing Solutions
The Heights, Brooklands, Surrey, UK
www.image-sensing-solutions.eu
VIS-NIR coated achromatic lens line expanded


Optimized to correct on-axis spherical and chromatic aberrations, the TECHSPEC VIS-NIR coated achromatic lens line has been expanded with nine new models. The lenses' VIS-NIR coating features a transmission range from 400 to 1000nm. Nine models with diameters of 12.7mm, 18mm, 30mm, and 75mm, in ground or Inked Edge versions, are now available.
Edmund Optics
Barrington, NJ, USA
www.edmundoptics.com




First CMOS image sensor with Nyxel NIR technology introduced
The OS05A20 CMOS image sensor, the first sensor to implement Nyxel technology from OmniVision Technologies, Inc., leverages novel silicon semiconductor architectures and processes that address the inherent challenges of NIR detection in image sensors. Nyxel combines thick-silicon pixel architectures with careful management of wafer surface texture to improve QE, along with extended deep trench isolation (DTI) to help retain modulation transfer function (MTF) without affecting the sensor's dark current. The sensor itself is a 5 MPixel color CMOS image sensor with NIR sensitivities exceeding 850nm and a capture rate of 60 fps with 2688 x 1944-pixel images.
OmniVision Technologies, Inc.
Santa Clara, CA, USA
www.ovt.com

Scalable line of CMOS image sensors introduced
Designed for deployment in automotive applications, the new Hayabusa platform of CMOS image sensors features a backside-illuminated 3 µm pixel design that delivers a charge capacity of 100,000 electrons. Other key features include simultaneous on-chip high dynamic range (HDR) with LED flicker mitigation (LFM), plus real-time functional safety and automotive-grade qualification. The first product in the line is the AR0233, a 2.6 MPixel CMOS image sensor capable of running at 60 fps and featuring a multi-exposure mode for >140 dB high dynamic range, full-resolution LED flicker mitigation with 120 dB high dynamic range, and >95 dB dynamic range from one exposure.
ON Semiconductor
Phoenix, AZ, USA
www.onsemi.com

Multi-camera solution for NVIDIA Jetson introduced
Designed for use on the NVIDIA Jetson TX1/TX2 development kit, the e-CAM30_HEXCUTX2 is a multiple-camera solution that targets applications requiring multiple full HD cameras. NVIDIA's TX1 and TX2 can support up to six 2-lane MIPI CSI-2 cameras simultaneously, and as a result, the e-CAM30_HEXCUTX2 consists of six e-CAM30_CUMI0330_MOD cameras. These board-level cameras feature the 3.4 MPixel AR0330 color CMOS image sensor from ON Semiconductor and an integrated advanced image signal processor and M12 lens, and are connected to the e-CAMHEX_TX2ADAP adapter board using customized micro-coaxial cables. From there, the adapter board interfaces with the J22 connector on the Jetson TX1/TX2.
e-con Systems
Tamil Nadu, India
www.e-consystems.com

Line scan polarization camera introduced
A 2017 Innovators Awards Platinum-level honoree, the Piranha4 line scan polarization camera is now in full production. Polarization imaging, according to the company, significantly enhances detection capability in many demanding applications. The Piranha4-2k polarization camera uses Teledyne DALSA's quadlinear 2048 x 4 CMOS image sensor with nanowire micro-polarizer filters and captures multiple native polarization state data without any interpolation. With a maximum line rate of 70 kHz, the camera outputs independent images of 0°(s), 90°(p), and 135° polarization states as well as an unfiltered channel. Additionally, the camera is available in Camera Link Base, Medium, Full, or Deca formats and is 8-, 10-, or 12-bit depth selectable.
Teledyne DALSA
Waterloo, ON, Canada
www.teledynedalsa.com

STARVIS sensors featured in eight new cameras
New ace U cameras feature back-illuminated, rolling shutter CMOS image sensors from Sony's STARVIS line. Based on two sensors from the STARVIS line, some cameras feature the 6.4 MPixel IMX178 sensor with 2.4 µm pixel size and frame rates of 59 fps via USB 3.0 and 16 fps via GigE. Other cameras feature the 12.2 MPixel IMX226 sensor with a 1.85 µm pixel size and frame rates of 31 fps via USB 3.0 and 8 fps via GigE. The sensors feature low dark noise of three electrons, combined with a quantum efficiency greater than 80%.
Basler
Ahrensburg, Germany
www.baslerweb.com

10GigE camera series introduced
The first model in the QX series of 10GigE cameras features the 12 MPixel CMV12000 CMOS image sensor from ams (CMOSIS), a global shutter sensor with a 5.5 µm pixel size that can achieve 335 fps in burst mode and 92 fps in regular mode via the 10GigE interface. Available in both color and monochrome models, the camera also has an internal image memory of 2 GB, and up to 169 images can be buffered at full resolution. At maximum speed, this corresponds to a recording time of 0.5 seconds.
Baumer
Radeberg, Germany
www.baumer.com


Entry-level high-speed camera reaches 500,000 fps
The tablet-controlled i-SPEED 713 is designed for applications including drop testing, airbag testing and manufacturing that demand the highest resolution but not necessarily the fastest speed available. The entry-level i-SPEED 713 features a custom 2048 x 1538 global shutter CMOS image sensor that reaches a frame rate of 500,000 fps (4,260 fps at full resolution), combining for 13 GP/s throughput. The camera also features iX Cameras' i-SPEED Software Suite 2.0 PC software for control, analysis, editing, and playback. The high-speed camera is available in an instrumentation version or an optional rugged, high-G rated version designed to resist shock.
iX Cameras
Rochford, Essex, UK
www.ix-cameras.com

3D line scan camera features wide field of view
Featuring a 1400mm (55in.) field of view, the 3DPIXA dual 200 µm HR 3D line scan camera offers an optical resolution of 200 µm/pixel. Designed for machine vision inspection, including complete scans of large, complex and irregularly-shaped objects or textures, the stereo camera features two RGB tri-linear CCD line scan image sensors, which feature a 10 µm pixel size and line scan frequencies of up to 30 kHz at full resolution. The 3DPIXA dual 200 µm HR features a Camera Link Base/Medium interface and can simultaneously acquire 2D color images along with either a height map or 3D point cloud.
Chromasens
Konstanz, Germany
www.chromasens.de/en

Machine vision camera line launched by Industrial Vision Systems
Available in five color and five monochrome models, IVS-NCGi cameras are equipped with either a GigE or USB interface. The cameras feature sensors ranging from 1/3" to 1/1.2" in size and 2.3 MPixel to 5 MPixel in resolution, with frame rates reaching 181 fps in the USB models and 60 fps in the GigE models. IVS-NCGi cameras come ready to be mounted with standard LED lighting options, plus a range of field-changeable C-Mount lenses and industrial autofocus lens options. Additionally, the cameras come with IVS software platforms, which enable setup and integration for inspection in machine vision applications.
Industrial Vision Systems
Oxfordshire, UK
www.industrialvision.co.uk

