
LabVIEW™ Machine Vision
and Image Processing
Course Manual
Course Software Version 8.5
April 2008 Edition
Part Number 321998F-01
LabVIEW Machine Vision and Image Processing
Copyright
© 1998-2008 National Instruments Corporation. All rights reserved.
Under the copyright laws, this publication may not be reproduced or transmitted in any form, electronic or mechanical, including
photocopying, recording, storing in an information retrieval system, or translating, in whole or in part, without the prior written consent
of National Instruments Corporation.
National Instruments respects the intellectual property of others, and we ask our users to do the same. NI software is protected by
copyright and other intellectual property laws. Where NI software may be used to reproduce software or other materials belonging to
others, you may use NI software only to reproduce materials that you may reproduce in accordance with the terms of any applicable
license or other legal restriction.
Trademarks
National Instruments, NI, ni.com, and LabVIEW are trademarks of National Instruments Corporation. Refer to the Terms of Use section
on ni.com/legal for more information about National Instruments trademarks.
Other product and company names mentioned herein are trademarks or trade names of their respective companies.
Members of the National Instruments Alliance Partner Program are business entities independent from National Instruments and have
no agency, partnership, or joint-venture relationship with National Instruments.
Patents
For patents covering National Instruments products, refer to the appropriate location: Help»Patents in your software,
the patents.txt file on your CD, or ni.com/legal/patents.
Worldwide Technical Support and Product Information
ni.com
National Instruments Corporate Headquarters
11500 North Mopac Expressway Austin, Texas 78759-3504 USA Tel: 512 683 0100
Worldwide Offices
Australia 1800 300 800, Austria 43 662 457990-0, Belgium 32 (0) 2 757 0020, Brazil 55 11 3262 3599, Canada 800 433 3488,
China 86 21 5050 9800, Czech Republic 420 224 235 774, Denmark 45 45 76 26 00, Finland 358 (0) 9 725 72511,
France 01 57 66 24 24, Germany 49 89 7413130, India 91 80 41190000, Israel 972 3 6393737, Italy 39 02 41309277,
Japan 0120-527196, Korea 82 02 3451 3400, Lebanon 961 (0) 1 33 28 28, Malaysia 1800 887710, Mexico 01 800 010 0793,
Netherlands 31 (0) 348 433 466, New Zealand 0800 553 322, Norway 47 (0) 66 90 76 60, Poland 48 22 3390150,
Portugal 351 210 311 210, Russia 7 495 783 6851, Singapore 1800 226 5886, Slovenia 386 3 425 42 00,
South Africa 27 0 11 805 8197, Spain 34 91 640 0085, Sweden 46 (0) 8 587 895 00, Switzerland 41 56 2005151,
Taiwan 886 02 2377 2222, Thailand 662 278 6777, Turkey 90 212 279 3031, United Kingdom 44 (0) 1635 523545
For further support information, refer to the Additional Information and Resources appendix. To comment on National Instruments
documentation, refer to the National Instruments Web site at ni.com/info and enter the info code feedback.
National Instruments Corporation iii LabVIEW Machine Vision and Image Processing
Contents
Student Guide
A. Course Description ...............................................................................................v
B. What You Need to Get Started .............................................................................vi
C. Installing the Course Software..............................................................................vii
D. Course Goals.........................................................................................................vii
E. Course Conventions..............................................................................................viii
Lesson 1
Introduction to Machine Vision
A. National Instruments Machine Vision ..................................................................1-2
B. NI Vision Products................................................................................................1-2
C. Measurement & Automation Explorer .................................................................1-6
Lesson 2
Preparing Your Imaging Environment
A. Preparing Your Imaging Environment .................................................................2-2
B. Selecting a Camera ...............................................................................................2-11
Lesson 3
Acquiring and Displaying Images
A. Acquisition Modes................................................................................................3-2
B. Property Nodes .....................................................................................................3-35
C. Triggering .............................................................................................................3-45
Lesson 4
Processing Images
A. NI Vision VIs........................................................................................................4-2
B. Prototyping Applications with NI Vision Assistant .............................................4-3
Lesson 5
Enhancing Acquired Images
A. Using Spatial Calibration......................................................................................5-2
B. Calibrating Images with NI Vision.......................................................................5-3
C. Calibrating Your Imaging Setup...........................................................................5-4
D. Using Spatial Filters..............................................................................................5-13
Lesson 6
Measuring Features
A. NI Vision Machine Vision VIs .............................................................................6-2
B. Regions of Interest ................................................................................................6-2
C. Nondestructive Overlays.......................................................................................6-8
D. Edge Detection......................................................................................................6-9
Lesson 7
Using Machine Vision Techniques
A. Pattern Matching...................................................................................................7-2
B. Geometric Matching .............................................................................................7-6
C. Coordinate Systems ..............................................................................................7-20
Lesson 8
Processing Binary Images
A. Collecting Image Information with Histograms ...................................................8-2
B. Thresholds.............................................................................................................8-4
C. Morphology ..........................................................................................................8-11
D. Making Particle Measurements ............................................................................8-15
E. Using the Golden Template ..................................................................................8-25
Lesson 9
Identifying Images
A. Binary Particle Classification ...............................................................................9-2
B. Optical Character Recognition..............................................................................9-5
C. 2D Barcode Functions ..........................................................................................9-17
Appendix A
Using Color Tools
A. Introduction to Color ............................................................................................A-2
B. Using Color Tools.................................................................................................A-3
Appendix B
Additional Information and Resources
Glossary
Course Evaluation
Student Guide
Thank you for purchasing the LabVIEW Machine Vision and Image
Processing course kit. This course manual and the accompanying software
are used in the two-day, hands-on LabVIEW Machine Vision and Image
Processing course.
You can apply the full purchase price of this course kit toward the
corresponding course registration fee if you register within 90 days of
purchasing the kit. Visit ni.com/training to register for a course and to
access course schedules, syllabi, and training center location information.
A. Course Description
The LabVIEW Machine Vision and Image Processing course teaches you to
use National Instruments Vision products to build your machine vision
application. By the end of this course, you should understand the
fundamentals of machine vision, the components that make up a machine
vision system, and the various resources for locating appropriate cameras,
lenses, and lighting equipment.
This course assumes that you have a basic knowledge of LabVIEW,
including the concepts of front panels, block diagrams, While Loops,
controls, and indicators. The course does not cover LabVIEW basics, so
National Instruments encourages prior LabVIEW experience to fully
understand the content of the course. If you are new to LabVIEW, you may
want to consider taking the LabVIEW Basics I: Introduction course from
National Instruments before starting this course.
The course is divided into lessons, each covering a topic or a set of topics.
Each lesson consists of the following parts:
An introduction that describes what you will learn.
A discussion of the topics.
A set of exercises to reinforce the topics presented in the discussion.
In this course you will use an analog image acquisition device, commonly
called a frame grabber, and the NI Vision Development Module to complete
the exercises.
B. What You Need to Get Started
Before you use this course manual, make sure you have the following items:
Hardware Option A
NI PCI-8254R
IIDC-compliant IEEE 1394 Camera
FireWire Cable (1394a or 1394b)
Lens
Hardware Option B
NI PCI-1407, NI PCI-1409, NI PCI-1410, or NI PCI-1411 image
acquisition device
Monochrome analog video camera
BNC cable to connect your camera to your image acquisition device
Power supply
Lens
Software Requirements
LabVIEW 8.5 or later
NI-IMAQ 3.8 or later
NI Vision 8.5 Development Module
LabVIEW Machine Vision and Image Processing CD, which contains
the following files:
Exercises: Folder for saving VIs and scripts created during the course
Solutions: Folder containing the solutions to all of the course exercises
C. Installing the Course Software
Complete the following steps to install the course software.
1. Insert the course CD into your computer. The LabVIEW Machine
Vision Course Setup dialog box opens.
2. Click Next.
3. Click Next to begin the installation.
4. Click Finish to exit the Setup Wizard.
5. The installer places the Exercises and Solutions folders at the top
level of the C:\ directory.
Exercise files are located in the C:\Exercises\LabVIEW Machine
Vision directory.
Removing Course Material
You can remove the course material using the Add or Remove Programs
feature on the Windows Control Panel. Remove the course material if you
no longer need the files on your computer.
D. Course Goals
The goals of this course are:
To teach the fundamentals of building a complete machine vision system
To introduce the basics of National Instruments image acquisition
hardware and NI Vision software
To accelerate the machine vision learning curve
This course does not describe any of the following topics:
Basic principles of LabVIEW covered in the LabVIEW Basics I:
Introduction course.
Every machine vision function or the algorithms behind the NI Vision
VIs
How to build a complete machine vision application
The NI-IMAQ and NI Vision libraries contain many VIs that this course
does not cover due to time constraints. This course covers the VIs that are
the most commonly used by NI customers. If you have questions about other
NI-IMAQ and NI Vision VIs, refer to the NI-IMAQ VI Reference Help and
the NI Vision for LabVIEW VI Reference Help.
E. Course Conventions
The following conventions are used in this course manual:
The » symbol leads you through nested menu items and dialog box options
to a final action. The sequence File»Page Setup»Options directs you to pull
down the File menu, select the Page Setup item, and select Options from
the last dialog box.
This icon denotes a tip, which alerts you to advisory information.
This icon denotes a note, which alerts you to important information.
bold Bold text denotes items that you must select or click in the software, such as
menu items and dialog box options. Bold text also denotes parameter names.
italic Italic text denotes variables, emphasis, a cross-reference, or an introduction
to a key concept. Italic text also denotes text that is a placeholder for a word
or value that you must supply.
monospace Text in this font denotes text or characters that you enter from the keyboard,
sections of code, programming examples, and syntax examples. This font
also is used for the proper names of disk drives, paths, directories, programs,
subprograms, subroutines, device names, functions, operations, variables,
filenames, and extensions.
Lesson 1
Introduction to Machine Vision
Your computer provides the platform you need to make your measurement
and automation systems dependable and efficient. Its extensive processing
capabilities empower you to create flexible solutions based on industry
standards. With this flexibility, you can adjust your application
specifications more easily than with traditional tools.
National Instruments hardware and software connect your computer to your
application to offer the widest range of solutions for practically any
measurement or automation application. PC-based machine vision systems
have the flexibility to address the needs of research, test and measurement,
and industrial automation vision applications.
This lesson describes National Instruments Machine Vision products and
explains how to set up your camera and acquire your first image.
Topics
A. National Instruments Machine Vision
B. NI Vision Products
C. Measurement & Automation Explorer
A. National Instruments Machine Vision
In test and measurement applications, such as movement measurement,
event recording, and result verification, you can easily integrate and
correlate images with transducer-based data acquired with data acquisition
hardware. National Instruments image acquisition devices can generate
timing signals to time sequences of images during the test so that the
resulting images correlate precisely to temperature, strain, and other signals.
In industrial applications, vision analysis routines test incoming images for
part quality. Overall, computer vision systems are more reliable and
cost-effective than human workers in the high-speed, detailed, repetitive
manufacturing processes required for making semiconductor, electronic,
medical, pharmaceutical, and computer products.
B. NI Vision Products
Image Acquisition Hardware
The National Instruments line of image acquisition devices features
compact machine vision systems that connect to cameras, and image
acquisition plug-in boards that connect to parallel digital, analog, Camera
Link, IEEE 1394, and GigE Vision cameras. These devices include
advanced triggering and digital I/O features that you can use to trigger an
acquisition based on a digital signal from photocells or proximity switches.
You also can use digital I/O signals to strobe lights or relay devices.
National Instruments high-speed image acquisition devices provide up to
128 MB of onboard memory. With onboard memory, you can acquire at
high rates while sustaining high-speed throughput and greater overall
system performance. Image acquisition devices also feature state-of-the-art
digital technology to maximize throughput over PCI, PCIe, and PXI buses.
This technology allows you to acquire images from high-speed digital
cameras with low latency and no loss of data.
Most image acquisition devices work with motion control and data
acquisition hardware using the real-time system integration (RTSI) bus. On
National Instruments PCI boards, the RTSI bus connector sits on the top of
the board. You can use a ribbon cable to connect RTSI connectors on
adjacent boards and send triggering and timing information between your
device and up to four National Instruments data acquisition (DAQ), motion
control, or other image acquisition devices in your computer. On National
Instruments PXI boards, a PXI Trigger Bus on the PXI backplane replaces
the RTSI cable.
If you require more advanced triggering or additional I/O lines (either
digital or analog), you can use your frame grabber and NI-IMAQ with the
National Instruments DAQ product line.
Image acquisition devices also support preprocessing, which can improve
the performance of your application. Image acquisition devices can perform
such tasks as pixel and line scaling (decimation) and region-of-interest
acquisition.
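The two preprocessing tasks named above can be pictured on a toy image. The sketch below is plain Python for illustration only, not the NI driver API: decimation keeps every Nth pixel on every Nth line, and region-of-interest acquisition crops to a rectangle, so both reduce the data transferred to the host.

```python
# Conceptual sketch (not the NI-IMAQ API) of two on-device
# preprocessing operations: decimation and ROI acquisition.

def decimate(image, factor):
    """Keep every `factor`-th pixel on every `factor`-th line."""
    return [row[::factor] for row in image[::factor]]

def region_of_interest(image, top, left, height, width):
    """Crop the image to a rectangular region."""
    return [row[left:left + width] for row in image[top:top + height]]

# A toy 4x6 "image" whose pixel values encode (row, column).
image = [[10 * r + c for c in range(6)] for r in range(4)]

half = decimate(image, 2)                     # 2x2 decimation -> 2x3 image
roi = region_of_interest(image, 1, 2, 2, 3)   # 2x3 crop at row 1, col 2

print(half)  # [[0, 2, 4], [20, 22, 24]]
print(roi)   # [[12, 13, 14], [22, 23, 24]]
```

Either operation cuts the pixel count, which is why performing it on the acquisition device rather than on the host improves application performance.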
National Instruments offers the following types of image acquisition
hardware:
Table 1-1. NI Image Acquisition Hardware

Analog Devices: NI analog frame grabbers are ideal for cost-sensitive
machine vision and scientific imaging applications. Analog frame grabbers
have simple cabling and support a diverse range of cameras. Analog frame
grabbers also support the color and monochrome standards NTSC, PAL,
RS-170, and CCIR.

Parallel Digital Devices: NI parallel digital frame grabbers are designed
for image acquisition from TTL, LVDS, and RS-422 digital cameras. Dozens
of custom cables are available.

Camera Link Devices: NI frame grabbers for Camera Link cameras are
designed for machine vision and scientific applications that require
high-performance image acquisition with simple cabling. NI frame grabbers
for Camera Link can acquire images from any base, medium, or full
configuration Camera Link camera.

IEEE 1394 Devices: NI hardware devices for IEEE 1394 cameras provide a
low-cost and simple way to acquire images from any IIDC-compliant
IEEE 1394 camera. With IEEE 1394 cameras, there is no frame grabber
requirement.

GigE Vision Devices: NI hardware devices for GigE Vision cameras provide
a low-cost and highly optimized way to acquire images from any GigE Vision
camera. With GigE Vision cameras, there is no frame grabber requirement.
Detailed specifications for NI image acquisition devices are available online
at ni.com/manuals.
NI-IMAQ
NI-IMAQ driver software is a complete, robust application programming
interface (API) for image acquisition that ships with your image acquisition
hardware. Whether you are using LabVIEW, LabWindows/CVI, or
Measurement Studio, NI-IMAQ gives you high-level control over National
Instruments image acquisition devices. NI-IMAQ performs all of the
computer- and board-specific tasks, allowing straightforward image
acquisition without register level programming. NI-IMAQ is compatible
with NI-DAQ and all other National Instruments driver software for easy
integration of imaging into any National Instruments solution.
NI-IMAQ features an extensive library of functions that you can call from
your application programming environment. These functions include
routines for video configuration, image acquisition (continuous and
single-shot), memory buffer allocation, trigger control, and board
configuration. NI-IMAQ provides all of the functionality you need to
acquire and save images. Refer to Lesson 3, Acquiring and Displaying
Images, for more information about acquiring and displaying images using
NI-IMAQ.
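The continuous and single-shot acquisition routines mentioned above follow two control-flow patterns that can be sketched abstractly. The `Camera` class below is a hypothetical stand-in, not the NI-IMAQ API; it only illustrates the shape of a single-shot "snap" versus a continuous "grab" loop, the same two patterns Lesson 3 covers with the actual NI-IMAQ VIs.

```python
# Hedged sketch of the two basic acquisition patterns.
# `Camera` is a stub standing in for real acquisition hardware;
# it is NOT the NI-IMAQ API.

class Camera:
    """Stub camera that yields numbered frames."""
    def __init__(self):
        self.frame_count = 0

    def acquire_frame(self):
        self.frame_count += 1
        return f"frame-{self.frame_count}"

def snap(camera):
    """Single-shot acquisition: acquire exactly one frame."""
    return camera.acquire_frame()

def grab(camera, n_frames):
    """Continuous acquisition: keep pulling frames until stopped
    (here, after a fixed count for demonstration)."""
    return [camera.acquire_frame() for _ in range(n_frames)]

cam = Camera()
print(snap(cam))      # frame-1
print(grab(cam, 3))   # ['frame-2', 'frame-3', 'frame-4']
```

In a real application the grab loop would run until an external stop condition, and buffer management would be handled by the driver rather than by a Python list.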
NI-IMAQdx
NI-IMAQdx is a separate driver software API for GigE Vision cameras and
IEEE 1394 industrial digital video cameras. With functionality similar to
NI-IMAQ, NI-IMAQdx gives you high-level control in LabVIEW,
LabWindows/CVI, and Measurement Studio. NI-IMAQdx features a library
of functions, including image acquisition (continuous and single-shot),
trigger configuration, camera attribute control, and register-level reading
and writing. NI-IMAQdx provides all of the functionality you need to
acquire and configure your camera.

Table 1-1. NI Image Acquisition Hardware (continued)

Compact Vision Systems: The NI CVS-1450 Series compact vision systems
are stand-alone devices which you program using a network connection,
unlike typical NI image acquisition devices, which are plugged into a PCI,
PXI, or PCIe bus. CVS-1450 Series devices provide three IEEE 1394 ports
to connect up to 15 cameras with the use of external hubs.

NI Smart Camera: The NI Smart Camera is a combination of an image sensor
and a high-performance processor that returns inspection results instead of
images. While a typical industrial camera acquires and transmits images
through a standard camera bus, such as Camera Link or IEEE 1394, to a host
PC or vision system that processes the images, a smart camera performs all
of these operations directly on the camera.
NI Vision Development Module
The Vision Development Module is a software package for engineers
and scientists who are developing machine vision and scientific
imaging applications. The development module includes NI Vision
Assistant, an interactive environment for developers who need to quickly
prototype vision applications without programming, and NI Vision, a
library of powerful functions for image processing. The development
module also includes NI-IMAQ and NI-IMAQdx.
NI Vision
NI Vision is the image processing toolkit, or library, that adds high-level
machine vision and image processing to your programming environment.
You must have LabVIEW, LabWindows/CVI, Measurement Studio, or
another programming environment to use NI Vision.
NI Vision includes an extensive set of MMX-optimized functions for the
following machine vision tasks:
Grayscale, color, and binary image display
Image processing, including statistics, filtering, and geometric
transforms
Pattern matching and geometric matching
Particle analysis
Gauging
Measurement
Object classification
Optical character recognition
Use NI Vision to accelerate the development of industrial machine vision
and scientific imaging applications. Refer to Lesson 4, Processing Images,
for more information about using NI Vision.
NI Vision Assistant
NI Vision Assistant is a tool for prototyping and testing image processing
applications. Create custom algorithms with the Vision Assistant scripting
feature, which records every step of your processing algorithm. After
completing the algorithm, you can test it on other images to check its
reliability.
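The scripting model described above, in which every processing step is recorded and can be replayed on other images, can be pictured as an ordered list of operations. The sketch below is a loose Python analogy; the function names and the list-of-steps representation are illustrative, not Vision Assistant's actual script format.

```python
# Conceptual analogy for a recorded processing "script":
# an ordered list of steps applied, in sequence, to any input image.
# This is NOT Vision Assistant's file format.

def invert(img):
    """Step 1: invert 8-bit intensities."""
    return [[255 - p for p in row] for row in img]

def threshold(img, level=128):
    """Step 2: binarize at a fixed level."""
    return [[255 if p >= level else 0 for p in row] for row in img]

# The "script": each recorded step, in order.
script = [invert, threshold]

def run_script(script, image):
    """Replay the recorded steps on a new image to test reliability."""
    for step in script:
        image = step(image)
    return image

image = [[10, 200], [130, 60]]
print(run_script(script, image))  # [[255, 0], [0, 255]]
```

Replaying the same step list on many test images is exactly how you check an algorithm's reliability before generating LabVIEW code from it.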
Vision Assistant uses the NI Vision library but can be used independently of
other development environments. In addition to being a tool for prototyping
vision systems, you can use Vision Assistant to learn how different image
processing functions perform. It can also produce a working LabVIEW VI
based on the script created or generate a builder file that contains the calls
needed to execute the script in C or Visual Basic. Additionally, there is a
Vision Assistant Express VI in LabVIEW that can be used to generate
LabVIEW code.
The Vision Assistant interface makes prototyping your application easy and
efficient with features such as a reference window that displays your
original image, a script window that stores your image processing steps, and
a processing window that reflects changes to your images as you apply new
parameters. Refer to Lesson 4, Processing Images, for more information
about using Vision Assistant.
NI Vision Builder for Automated Inspection
Note NI Vision Builder for Automated Inspection is not covered in this course.
NI Vision Builder for Automated Inspection (Vision Builder AI) is a
stand-alone prototyping and testing program, much like Vision Assistant.
However, you can run your final inspection application from within Vision
Builder AI. Vision Builder AI requires no programming experience, which
enables you to develop projects in a shorter amount of time.
The software includes functions for setting up complex pass/fail decisions,
controlling digital I/O devices, and communicating with serial devices such
as PLCs. After prototyping your application with the NI Vision library,
Vision Builder AI builds a script from the functions you selected that you
can run in Vision Builder AI as your inspection application.
C. Measurement & Automation Explorer
You can easily configure your image acquisition system with Measurement
& Automation Explorer (MAX), which is included with NI-IMAQ. MAX is
an interactive tool for configuring National Instruments hardware devices.
Use MAX to select the type of camera (RS-170, CCIR, NTSC, PAL, and
nonstandard) you are using. You also can set parameters for the region of
interest, black and white levels, antichrominance filter, asynchronous
acquisition, gain, and exposure time. Additionally, you can use MAX to set
up acquisitions from noninterlaced progressive scan cameras.
Exercise 1-1 Image Acquisition with MAX
Goal
Configure a camera and acquire an image using MAX.
Description
Complete the following steps to examine the configuration for the camera
attached to the computer and to acquire video.
Instructor's Note This exercise is designed for use with either an image acquisition
device and an analog camera or an IEEE 1394 camera. If an IEEE 1394 camera is
attached to the computer, complete Part A: Acquiring from an IEEE 1394 Camera. If an
image acquisition device is installed in the computer, complete Part B: Acquiring from a
Frame Grabber.
Launch MAX by double-clicking the icon on the desktop or by selecting
Tools»Measurement & Automation Explorer in LabVIEW. MAX
searches the computer for installed National Instruments hardware and
displays the information.
Part A: Acquiring from an IEEE 1394 Camera
1. Acquire VGA video at 30 fps from your camera.
Expand Devices and Interfaces. A list of all NI devices in your
system opens.
Expand NI-IMAQdx Devices to view the attached cameras that are
detected by the operating system.
Note When attached to a PC running Windows XP, an IEEE 1394 camera may be
assigned the Generic 1394 Desktop Camera driver. If your camera is listed as a Generic
1394 Desktop Camera instead of its brand and model, right-click the camera name and
select Drivers»NI-IMAQdx IIDC Digital Camera. MAX will use the National
Instruments driver instead of the Windows default driver.
Select the name of your camera, such as cam0 : Basler
scA640-70fm.
Click the Snap button on the toolbar to acquire your first image.
Figure 1-1. Acquiring Your First Image with MAX
Click the Grab button on the MAX toolbar to acquire continuous
images.
Click the Grab button again to stop the acquisition.
2. Examine the adjustable attributes available on your camera.
Select the Camera Attributes tab to view the list of attributes
provided with your camera.
Figure 1-2. Viewing Camera Attributes in MAX
Note Many attributes are defined in the IIDC specification for IEEE 1394 cameras.
However, each attribute is an optional feature that the camera manufacturer may or may
not have included. If an attribute is available, NI-IMAQdx will detect the
attribute and display it in the Camera Attributes list.
Experiment with different values for each attribute. Perform a Snap
or Grab for each value to view the effect on the image. When
finished, click Revert to return to the prior settings and click the
Save button on the MAX toolbar.
Part B: Acquiring from a Frame Grabber
1. Acquire live video from your camera.
Expand Devices and Interfaces. A list of all NI devices in your
system opens.
Expand NI-IMAQ Devices to view the image acquisition devices
installed in your system.
Expand the device to which your camera is attached, such as
img0: NI PCI-1411. (The specific hardware may vary in your
class.) The available channels on that device become visible.
Select the channel your camera is attached to, such as
Channel 0: rs170.
Click the Snap button on the MAX toolbar to acquire your first
image.
Note To zoom in on an image, right-click the image and select Viewer Tools»Zoom,
and then click the image with the zoom tool. To zoom out on an image, right-click the
image, select Viewer Tools»Zoom, hold the <Shift> key, and click the image.
Figure 1-3. Acquiring Your First Image with MAX
Click the Grab button on the MAX toolbar to acquire continuous
images.
Click the Grab button again to stop the acquisition.
2. Examine the acquisition parameters of your device.
Click the Acquisition Parameters tab.
Adjust the Width and Height parameters. Click Snap to view the
effect on the acquired image.
Figure 1-4. Effect of Adjusting Width and Height in MAX
3. Examine the lookup tables (LUT) applied to your acquisition.
Click the LUT tab to display the lookup tables.
Experiment with the different values in the Lookup Table listbox.
For each value, perform a snap or a grab to view the effects on the
image.
Figure 1-5. Effect of Using the Lookup Table in MAX
4. When finished, click Revert to return to the prior settings.
End of Exercise 1-1
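The LUT step above maps every pixel intensity through a table of replacement values. Because LabVIEW is graphical, the following is a conceptual Python/NumPy sketch of what a lookup table does to an 8-bit image, not NI code; the "inverse" table mirrors the inversion option selectable in MAX.

```python
import numpy as np

def apply_lut(image, lut):
    """Map each 8-bit pixel through a 256-entry lookup table."""
    assert lut.shape == (256,)
    return lut[image]  # NumPy integer-array indexing does the per-pixel mapping

# An "inverse" LUT: 0 -> 255, 255 -> 0, everything in between reversed
inverse_lut = np.arange(255, -1, -1, dtype=np.uint8)

image = np.array([[0, 64], [128, 255]], dtype=np.uint8)
out = apply_lut(image, inverse_lut)
```

Any of the other LUT choices in MAX (logarithmic, exponential, and so on) are just different 256-entry tables applied the same way.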
Lesson 2 Preparing Your Imaging Environment
This lesson describes how to set up an imaging system and what factors you
should consider when choosing a camera.
Setting up the imaging environment is a critical first step to any imaging
application. If you set up your system properly, you can focus your
development energy on the application rather than problems from the
environment, and you can save processing time at run time.
Selecting a camera is an important step in preparing your imaging
environment. Consider the requirements of your application to determine
the frame rate, resolution, and camera type you need.
Topics
A. Preparing Your Imaging Environment
B. Selecting a Camera
A. Preparing Your Imaging Environment
To properly prepare the imaging environment for your application, you first
must examine the imaging task to determine the machine vision requirements.
During system setup, you decide the type of lighting and lens you need, and
you determine some of the basic specifications of the imaging tasks.
Consider the following three elements when setting up your system:
imaging system parameters, lighting, and motion.
Fundamental Parameters of an Imaging System
Before you acquire images, you must set up your imaging system.
Figure 2-1 illustrates the eight fundamental parameters that comprise an
imaging system.
Figure 2-1. Fundamental Parameters of an Imaging System
1 Resolution
2 Field of View
3 Working Distance
4 Sensor Size
5 Depth of Field
6 Image
7 Pixel
8 Pixel Resolution
The manner in which you set up your system depends on the type of
analysis, processing, and inspection you need to do. Your imaging system
should produce images with high enough quality to extract the information
you need from the images. Five factors contribute to overall image quality:
resolution, contrast, depth of field, perspective, and distortion.
Resolution
Resolution indicates the amount of object detail that the imaging system can
reproduce. You can determine the required resolution of your imaging
system by measuring in real-world units the size of the smallest feature you
need to detect in the image.
Figure 2-2 depicts a barcode. To read a barcode, you need to detect the
narrowest bar in the image. The resolution of your imaging system in this
case is equal to the width of the narrowest bar (w).
Figure 2-2. Determining the Resolution of Your Imaging System
To make accurate measurements, a minimum of two pixels should represent
the smallest feature you want to detect in the digitized image. In Figure 2-2,
the narrowest vertical bar (w) should be at least two pixels wide in the
image. With this information, you can use the following guidelines to select
the appropriate camera and lens for your application.
Sensor Resolution
Sensor resolution is the number of columns and rows of CCD pixels in the
camera sensor. To compute the sensor resolution, you need to know the field
of view (FOV). The FOV is the area under inspection that the camera can
acquire. The horizontal and vertical dimensions of the inspection area
determine the FOV. Make sure the FOV includes the object you want to
inspect.
[Figure 2-2 callouts: panels a and b; bar width w and bar height h; horizontal and vertical field of view dimensions FOVw and FOVh.]
When you know the FOV, you can use the following equation to determine
your required sensor resolution:
Sensor Resolution = (FOV/resolution) × 2
= (FOV/size of smallest feature) × 2
Use the same units for FOV and size of smallest feature. Choose the largest
FOV value (horizontal or vertical). For example, you would use the
horizontal FOV value to calculate the sensor resolution for Figure 2-2.
Cameras are manufactured with a limited number of standard sensor
resolutions. Table 2-1 shows some typical camera sensors and their
resolutions.
If your required sensor resolution does not correspond to a standard sensor
resolution, choose a camera whose sensor resolution is larger than you
require, or use multiple cameras. Be aware that camera prices increase as
sensor sizes increase.
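The sensor resolution equation and the camera-selection rule above can be sketched as a short calculation. This is an illustrative Python sketch (the course itself uses LabVIEW); the candidate sensor widths below are examples drawn from Table 2-1, not an exhaustive catalog.

```python
def required_sensor_resolution(fov_mm, smallest_feature_mm):
    """Sensor Resolution = (FOV / size of smallest feature) x 2,
    because at least two pixels should cover the smallest feature."""
    return (fov_mm / smallest_feature_mm) * 2

# Example: a 60 mm field of view and a 0.25 mm smallest feature
pixels_needed = required_sensor_resolution(60, 0.25)  # 480 pixels

# Pick the smallest standard sensor that meets or exceeds the need;
# this list of standard widths is illustrative.
standard_widths = [640, 768, 1280, 2048, 4000]
chosen = min(w for w in standard_widths if w >= pixels_needed)
```

Running this selects the 640-pixel-wide sensor, the cheapest standard option that still gives two pixels per smallest feature.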
By determining the sensor resolution you need, you narrow down the
number of camera options that meet your application needs. Another
important factor that affects your camera choice is the physical size of the
sensor, known as the sensor size. The sensors diagonal length specifies the
size of the sensors active area. The number of pixels in your sensor should
be greater than or equal to the pixel resolution. Figure 2-3 shows the sensor
size dimensions for standard 1/3 inch, 1/2 inch, and 2/3 inch sensors.
Note The names of the sensors do not reflect the actual sensor dimensions.
Table 2-1. Typical Camera Sensors and Resolutions

Number of CCD Pixels    FOV      Resolution
640 × 480               60 mm    0.185 mm
768 × 572               60 mm    0.156 mm
1280 × 1072             60 mm    0.093 mm
2048 × 2048             60 mm    0.058 mm
4000 × 2624             60 mm    0.030 mm
Figure 2-3. Common Sensor Sizes
In most cases, the sensor size is fixed for a given sensor resolution. If you
find cameras with the same resolution but different sensor sizes, you can
determine the sensor size you need based on the next guideline.
Determining the Focal Length of Your Lens
A lens is primarily defined by its focal length. Figure 2-4 illustrates the
relationship between the focal length of the lens, field of view, sensor size,
and working distance.
Figure 2-4. Relationship between Focal Length, FOV, Sensor Size,
and Working Distance
The working distance is the distance from the front of the lens to the object
under inspection.
If you know the FOV, sensor size, and working distance, you can compute
the focal length of the lens you need using the following formula:
Focal Length = Sensor Size × Working Distance / FOV
[Figure 2-3 callouts, units in mm: 1/3 inch sensor, 4.8 × 3.6 (6.0 diagonal); 1/2 inch sensor, 6.4 × 4.8 (8.0 diagonal); 2/3 inch sensor, 8.8 × 6.6 (11.0 diagonal). Figure 2-4 callouts: Lens, Working Distance, Focal Length, Field of View, Scene, Sensor, Sensor Size.]
Lenses are manufactured with a limited number of standard focal lengths.
Common lens focal lengths include 6 mm, 8 mm, 12.5 mm, 25 mm, and
50 mm. After you choose a lens with a focal length close to the focal length
required by your imaging system, you need to adjust the working distance
to get the object under inspection in focus.
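The lens-selection procedure just described (compute the focal length, pick the nearest standard lens, then re-solve for the working distance) can be sketched numerically. This is an illustrative Python sketch with made-up setup numbers, not part of the course exercises.

```python
def focal_length_mm(sensor_size_mm, working_distance_mm, fov_mm):
    """Focal Length = Sensor Size x Working Distance / FOV."""
    return sensor_size_mm * working_distance_mm / fov_mm

# Hypothetical setup: 4.8 mm sensor, 200 mm working distance, 80 mm FOV
required = focal_length_mm(4.8, 200, 80)  # 12.0 mm

# Lenses come in standard focal lengths; pick the closest, then re-solve
# the same relationship for the working distance to use with that lens.
standard_lenses = [6, 8, 12.5, 25, 50]
lens = min(standard_lenses, key=lambda f: abs(f - required))
adjusted_wd = lens * 80 / 4.8  # working distance that preserves the 80 mm FOV
```

Here the nearest standard lens is 12.5 mm, and moving the camera back to roughly 208 mm restores the required field of view.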
Lenses with short focal lengths (less than 12 mm) produce images with a
significant amount of distortion. If your application is sensitive to image
distortion, try increasing the working distance and use a lens with a higher
focal length. If you cannot change the working distance, you are somewhat
limited in choosing your lens.
Note When you are setting up your system, you need to fine tune the various parameters
of the focal length equation until you achieve the right combination of components that
match your inspection needs and meet your cost requirements.
Contrast
Resolution and contrast are closely related factors that contribute to image
quality. Contrast defines the differences in intensity values between the
object under inspection and the background. Your imaging system should
have enough contrast to distinguish objects from the background. Proper
lighting techniques can enhance contrast in your system.
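One simple, hedged way to quantify the contrast described above is the gray-level separation between object and background pixels. This NumPy sketch is illustrative only; it is not an NI Vision metric.

```python
import numpy as np

# Toy 8-bit image: a bright object (value 200) on a darker background (80)
img = np.full((8, 8), 80, dtype=np.uint8)
img[2:6, 2:6] = 200

object_mask = np.zeros_like(img, dtype=bool)
object_mask[2:6, 2:6] = True

# Contrast as the difference between mean object and background intensity
contrast = int(img[object_mask].mean() - img[~object_mask].mean())
```

A separation of 120 gray levels, as in this toy image, is easy for thresholding and edge detection; lighting that collapses that gap makes both far less reliable.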
Depth of Field
The depth of field of a lens is its ability to keep in focus objects of varying
heights or objects located at various distances from the camera. If you need
to inspect objects of various heights, choose a lens that can maintain the
image quality you need as the objects move closer to and further from the
lens. You can increase the depth of field by closing the iris of the lens and
providing more powerful lighting.
Perspective
Perspective errors occur when the camera axis is not perpendicular to the
object under inspection. Figure 2-5a shows an ideal camera position.
Figure 2-5b shows a camera imaging an object from an angle.
Figure 2-5. Camera Angle Relative to the Object Under Inspection
Distortion
Nonlinear distortion is a geometric aberration caused by optical errors in the
camera lens. A typical camera lens introduces radial distortion. This causes
points that are away from the lens's optical center to appear further away
from the center than they really are. When distortion occurs, information in
the image is misplaced relative to the center of the field of view, but the
information is not necessarily lost. Therefore, you can correct your image
through spatial calibration.
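The radial distortion described above is commonly modeled by scaling each point's distance from the optical center. The sketch below uses the standard one-parameter radial model as an illustration; it is not NI Vision's spatial calibration algorithm.

```python
import math

def radial_distort(x, y, k):
    """Simple one-parameter radial distortion model:
    r' = r * (1 + k * r^2); the direction from the center is unchanged,
    so points far from the optical center are displaced the most."""
    r = math.hypot(x, y)
    scale = 1 + k * r * r
    return x * scale, y * scale

# A point near the center barely moves; a point far out moves noticeably.
near = radial_distort(0.1, 0.0, k=0.2)
far = radial_distort(1.0, 0.0, k=0.2)
```

Spatial calibration estimates a model like this from a grid of known points and then applies the inverse mapping to put displaced information back where it belongs.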
Lighting
One of the most important aspects of your imaging environment is proper
illumination. Images acquired under proper lighting conditions make image
processing software development easier and decrease the overall processing
time. One objective of lighting is to separate the feature or part you want to
inspect from the surrounding background by as many gray levels as
possible. Another goal is to control the light in the scene. Set up your
lighting devices so that changes in ambient illumination, such as sunlight
changing with the weather or time of day, do not compromise image
analysis and processing.
Common types of light sources include halogen, LED, fluorescent, and
laser. To learn more about lighting techniques and decide which is best for
your application, visit the Web sites of National Instruments lighting
partners, which are listed at ni.com/vision.
1 Lens Distortion 2 Perspective Error 3 Known Orientation Offset
The type of lighting techniques you select can determine the success or
failure of your application. Improper lighting can cause shadows and glares
that degrade the performance of your image processing routine. For
example, some objects reflect large amounts of light because of their
curvature or surface texture. Highly directional light sources increase the
sensitivity of specular highlights (glints), as shown on the barcode in
Figure 2-6a. Figure 2-6b shows an image of the same barcode acquired
under diffused lighting.
Figure 2-6. Using Diffused Lighting to Eliminate Glare
Backlighting is another lighting technique that can help improve the
performance of your vision system. If you can solve your application by
looking at only the shape of the object, you may want to create a silhouette
of the object by placing the light source behind the object you are imaging.
By lighting the object from behind, you create sharp contrasts that make
finding edges and measuring distances fast and easy. Figure 2-7 shows a
stamped metal part acquired in a setup using backlighting.
Figure 2-7. Using Backlighting to Create a Silhouette of an Object
Many other factors, such as the camera you choose, contribute to your
decision about the appropriate lighting for your application. For example,
you may want to choose lighting sources and filters whose wavelengths
match the sensitivity of the CCD sensor in your camera and the color of the
object under inspection. Also, you may need to use special lighting filters or
lenses to acquire images that meet your inspection needs.
Movement Considerations
In some machine vision applications, the object under inspection moves or
has moving parts. In other applications, you may need to incorporate motion
control to position the camera in various places around the object or move
the object under the camera. Images acquired in applications that involve
movement may appear blurry. Follow these suggestions to reduce or
eliminate blur caused by movement.
One way to eliminate blur resulting from movement is to use a progressive
scan camera. Progressive scan cameras acquire a full frame at a time,
whereas standard interlaced cameras acquire the odd and even fields
separately and then interlace them. Motion-induced blur in images acquired
with an interlaced camera occurs because the two fields are not acquired at
the same time.
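The field-weaving that an interlaced camera performs can be simulated directly. This NumPy sketch is a conceptual illustration (not driver code): two half-height fields are interleaved row by row, and if the scene changed between the two exposures, alternating rows disagree, producing the comb-like motion artifact that progressive scan cameras avoid.

```python
import numpy as np

def interlace(odd_field, even_field):
    """Weave two half-height fields into one full-height frame."""
    h, w = odd_field.shape
    frame = np.empty((2 * h, w), dtype=odd_field.dtype)
    frame[0::2] = odd_field   # rows 0, 2, 4, ... from the first exposure
    frame[1::2] = even_field  # rows 1, 3, 5, ... from the second exposure
    return frame

# Exaggerated motion: the object looks completely different in each field
odd = np.zeros((2, 4), dtype=np.uint8)       # object at position A
even = np.full((2, 4), 255, dtype=np.uint8)  # object at position B
frame = interlace(odd, even)
```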
Another method for eliminating movement-induced blur in images is to use
a strobe light with a progressive scan camera. Camera sensors acquiring
images of a moving object require a short exposure time to avoid blur. To
obtain an image with enough contrast, the sensor needs to be exposed to a
sufficient amount of light during a short amount of time. Strobe lights emit
light for only microseconds at a time, thus limiting the exposure time of the
image to the CCD and resulting in a very crisp image. Strobe lights require
synchronization with the camera. National Instruments image acquisition
hardware devices have up to four standard digital I/O lines that you can use
to send pulses to control strobe lights. You also can use National Instruments
data acquisition hardware to control strobe lights.
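Why a microsecond-scale strobe flash freezes motion comes down to simple arithmetic: blur, measured in pixels, is the distance the object moves during the exposure divided by the real-world size of one pixel. The numbers in this Python sketch are hypothetical.

```python
def blur_in_pixels(object_speed_mm_s, exposure_s, pixel_resolution_mm):
    """Pixels of motion blur = distance moved during the exposure
    divided by the real-world size of one pixel."""
    return object_speed_mm_s * exposure_s / pixel_resolution_mm

# Object on a conveyor at 500 mm/s, imaged at 0.1 mm per pixel:
# a 10 ms exposure smears the object across 50 pixels...
long_exposure = blur_in_pixels(500, 0.010, 0.1)
# ...while a 20 microsecond strobe flash keeps blur to a tenth of a pixel.
strobed = blur_in_pixels(500, 0.000020, 0.1)
```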
Exercise 2-1 Concept: Camera Attributes
Goal
Use your knowledge of imaging parameters to determine the focal length
and camera resolution needed in a barcode application.
Scenario
You are developing a system to verify that the correct barcode is placed on
each newly assembled product. The length of the barcode is 62 mm. The
smallest bar has a width of 0.2 mm.
Figure 2-8. Barcode
Due to mechanical constraints in the system, the lens can be no closer
than 124 mm from the barcode.
The sensor size on your camera is 10 mm.
1. Determine the optimal size of the lens (in mm) you would purchase for
this application.
Focal Length = ____________________________________________
2. Your camera features a resolution of 640 480 pixels. Given that the
smallest bar has a width of 0.2 mm, determine if the resolution of this
camera is acceptable for reading this barcode.
Resolution needed for barcode: _______________________________
Acceptable? ______________________________________________
End of Exercise 2-1
B. Selecting a Camera
Camera Advisor
National Instruments offers Camera Advisor (ni.com/camera), a
one-stop Web resource that you can use to select an imaging camera. This
virtual catalog provides camera features and specifications so that you can
compare cameras by category (analog, digital, line scan, area scan, or
progressive scan), specification, or manufacturer.
Area Scan Versus Line Scan Cameras
Cameras use different methods to acquire the pixels of an image. Two
popular methods are area scan and line scan. An area scan camera acquires
a two-dimensional array of pixels at a time. A line scan camera scans only
one line of pixels at a time, providing faster acquisition. However, you must
fit the lines together with software to create a whole image. Line scan
cameras are useful in web inspection applications during which the object
under inspection moves along a conveyor or stage in a production system.
Line scan cameras also are useful in high-resolution applications because
you can arbitrarily lengthen the image by fitting a specified number of lines
together.
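The software step of fitting line scans together is just stacking successive 1-D acquisitions into a 2-D array. This NumPy sketch is a conceptual illustration, not driver code.

```python
import numpy as np

def assemble_line_scans(lines):
    """Stack successive 1-D line acquisitions into a 2-D image.
    The image height is simply how many lines you choose to keep,
    which is why line scan images can be made arbitrarily long."""
    return np.vstack(lines)

# Simulate 6 acquisitions from a 4-pixel-wide line scan sensor
lines = [np.full(4, i * 10, dtype=np.uint8) for i in range(6)]
image = assemble_line_scans(lines)
```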
Analog Cameras
Analog cameras are ideal for many applications because of their established
technology, simple cabling, and low cost. Most analog cameras produce
interlaced video, a method used by the television industry to increase the
perceived image update rate.
Analog cameras output a video signal in an analog format. The horizontal
sync (HSYNC) pulse identifies the beginning of each line; several lines
make up a field. An additional pulse, called the vertical sync (VSYNC),
identifies the beginning of a field. For most low-end cameras, the odd and
even fields are interlaced to increase the perceived image update rate.
Two fields make up one frame. Figure 2-9 illustrates the analog output of a
video signal.
Figure 2-9. Analog Output of a Video Signal
Notice the black level indicated in Figure 2-9. The black level is a reference
voltage for measuring pixel intensities. Low voltages typically indicate
darker pixels, while higher voltages result in lighter pixels.
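Digitizing the analog signal amounts to mapping each sampled voltage to an intensity code, using the black level as the zero reference. The voltage levels in this sketch are illustrative only, not the exact RS-170 specification.

```python
def digitize(voltage, black_level=0.3, white_level=1.0, bits=8):
    """Map an analog video voltage to a pixel intensity code, using
    the black level as the zero reference (illustrative model only)."""
    span = white_level - black_level
    code = round((voltage - black_level) / span * (2 ** bits - 1))
    return max(0, min(2 ** bits - 1, code))  # clamp to the valid range

dark = digitize(0.3)    # a sample at the black level -> 0
bright = digitize(1.0)  # a sample at full amplitude -> 255
```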
CCD Sensors
When you acquire an image, a charge coupled device (CCD) sensor, is
exposed to light. The CCD sensor measures the light intensity across a
two-dimensional field. The CCD then transmits the measurements serially
through an output register to your image acquisition device, where the light
intensity data is digitized. Figure 2-10 illustrates a CCD sensor.
Figure 2-10. CCD Sensor
1 HSYNC Signal
2 Black Level
3 Video Data
4 Single Line Scan
1 X Pixel Resolution 2 Y Pixel Resolution 3 Output Registers
Standard Analog Video Formats
Table 2-2 describes the standard video formats used by imaging cameras.
These formats vary in their image size, the availability of color, and frame
rate. Most analog cameras adhere to one of these four formats.
Digital Cameras
Digital cameras use three types of signals: data lines, a pixel clock, and
enable lines. Data lines are parallel wires that carry digital signals
corresponding to pixel values. Digital cameras typically represent pixels
with 8, 10, 12, or 14 bits, and color digital cameras can use up to 24 bits or
more for each pixel. Therefore, depending on your camera, you may have as
many as 24 data lines, or wires, representing one pixel. The number of data
lines per pixel determines the pixel depth.
The pixel clock is a high-frequency pulse train that determines when the data
lines contain valid data. On the active edge of the pixel clock (which can be
either the rising edge or the falling edge, depending on the camera), the
digital lines have a constant value that is input into your image acquisition
device. The image acquisition device then latches in the data. The pixel
clock frequency determines the rate at which pixels are acquired.
Enable lines indicate when data lines contain valid data. The HSYNC signal
(also known as the Line Valid signal) is active while a row of pixels is
acquired. At the end of that row, the HSYNC signal goes inactive until the
next row of pixels begins. A second signal, the VSYNC signal (or Frame
Valid signal) is active during the acquisition of an entire frame. At the end
of that frame, the signal goes inactive until the beginning of the next frame.
Digital line scan cameras consist of a single row of CCD elements and only
require a Line Valid timing signal. Area scan cameras need both the Line
Valid and Frame Valid signals. Figure 2-11 illustrates a digital video signal.
Table 2-2. Standard Video Formats

Format    Location      Frames per Second    Color    Image Size
RS-170    USA, Japan    30                   No       640 × 480
NTSC      USA, Japan    30                   Yes      640 × 480
CCIR      Europe        25                   No       768 × 576
PAL       Europe        25                   Yes      768 × 576
Figure 2-11. Digital Video Signal
Taps
Increasing the speed of a digital camera's pixel clock or acquiring more than
one pixel at a time can greatly improve the camera's acquisition speed. A
tap, or channel, is a group of data lines that carry one pixel each. A camera
that latches only one pixel during the active edge of the pixel clock is known
as a single tap camera. Other cameras have multiple pixels on separate data
lines that are all available during the active edge of the pixel clock. Multiple
taps require more data lines but provide faster data transfer.
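The effect of taps on acquisition speed follows from a simple rate calculation: pixels per second equals the pixel clock times the number of taps, and the frame rate is that pixel rate divided by pixels per frame. The clock and sensor numbers in this Python sketch are hypothetical, and line/frame overhead is ignored.

```python
def frame_rate_hz(pixel_clock_hz, taps, width, height):
    """Pixel rate = pixel clock x taps (one pixel latched per tap per
    active clock edge); frame rate = pixel rate / pixels per frame.
    Blanking and other overhead between lines/frames is ignored here."""
    return pixel_clock_hz * taps / (width * height)

# Hypothetical 40 MHz pixel clock driving a 1280 x 1024 sensor
single_tap = frame_rate_hz(40_000_000, 1, 1280, 1024)  # about 30.5 fps
dual_tap = frame_rate_hz(40_000_000, 2, 1280, 1024)    # doubles the rate
```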
Types of Digital Cameras
Many older or specialized digital cameras use the parallel interface, a
well-established standard that provides a wide range of acquisition speeds,
image sizes, and pixel depths. Parallel cameras often require users to
customize their cables and connectors to suit their image acquisition device.
Another digital camera interface is the IEEE 1394 standard. IEEE 1394
offers a simple daisy chain cabling system using a standard interface, but it
lacks the data throughput capabilities of the parallel interface. NI-IMAQdx
is required to integrate the camera with MAX to provide functions for
programming.
Another digital camera interface is the GigE Vision standard. The GigE
Vision Standard is owned by the Automated Imaging Association (AIA) and
was developed by a group of companies from every sector within the
1 Lighting Input
2 Output Registers
3 Valid Data
4 Line Valid (HSYNC)
5 Frame Valid (VSYNC)
6 Pixel Clock
machine vision industry for the purpose of establishing a standard that
allows camera companies and software companies to integrate seamlessly
on the Gigabit Ethernet bus. The GigE Vision standard defines the process
by which a host machine can discover, control and acquire images from one
or more GigE Vision cameras. NI-IMAQdx is the National Instruments
driver that can discover, control and acquire images from cameras that
follow the GigE Vision standard.
The Camera Link standard was developed to ease the challenges of the
custom cable interface between digital cameras and frame grabbers. As part
of the Camera Link Standards Committee, National Instruments and several
camera and frame grabber manufacturers developed this standard to offer
speed and triggering functionality with the ease of standardized cabling.
Table 2-3. Digital Interface Standards

Interface               Advantages                          Disadvantages
Parallel                High speed; easy-to-configure       Bulky cabling; no physical or
                        options and other functionality     protocol standards for interfacing
                                                            with image acquisition devices
IEEE 1394 standard      Simple cabling; low cost;           Slower data transfer rate
                        no camera files required
Camera Link standard    High speed; uniform cables          More costly than IEEE 1394;
                                                            10 m cable length limit
GigE Vision standard    High speed; standard interface      Non-determinism; limited
                                                            bandwidth

Camera Files
Because digital cameras vary in specifications such as speed, image size,
pixel depth, number of taps, and modes, NI-IMAQ and NI-IMAQdx require
a camera file specific to your camera to define all of these values.
When using a frame grabber with a parallel digital camera or a Camera Link
camera, you must ensure that you have a camera file that is specific to your
model of camera. These camera files are custom-designed to provide
efficient and effective interaction between your camera and your image
acquisition device. You can find a list of camera files that have been tested
and approved by National Instruments online at the Camera Advisor
(ni.com/cameras). If you do not find a camera file for your camera at the
Camera Advisor, you can create a custom camera file using the NI Camera
File Generator. The NI Camera File Generator is a menu-driven,
configuration environment for generating new camera files to equip cameras
for which National Instruments does not have files, or for adding features to
existing NI camera files. The NI Camera File Generator is a free software
tool available from the Camera Advisor Web site.
When using IEEE 1394 and GigE Vision cameras, NI-IMAQdx will
generate its own camera file for each camera based on the camera's default
settings. This file will be generated when the camera is connected for the
first time. Once the file has been generated, the parameters can be changed
for each camera.
Analog Cameras versus Digital Cameras
Table 2-4 describes some of the advantages and disadvantages of both
analog and digital cameras.
Table 2-4. Analog Cameras versus Digital Cameras

Format            Advantages                          Disadvantages
Analog cameras    Established technology;             Little market variation;
                  simple cabling; low cost            potentially poor image quality
Digital cameras   High speed, high pixel depth,       Expensive; may require custom
                  and large image size;               cables; may require camera files
                  programmable controls;              for camera configuration
                  less image noise
Lesson 3 Acquiring and Displaying Images
This lesson describes how to use single buffer acquisition VIs and multiple
buffer acquisition VIs, how to use triggering, and how to display images.
This lesson also explains how to allocate and free memory and how to begin
and end image acquisition sessions.
Topics
A. Acquisition Modes
B. Triggering
C. Property Nodes
A. Acquisition Modes
One of the first things you need to determine is the correct acquisition mode
for your application. Decide if your application requires a single shot or a
continuous acquisition. A single shot acquisition runs once and then stops.
A continuous acquisition runs indefinitely until you stop the acquisition.
Next, determine if your application requires a single buffer or multiple
buffers. A buffer is the memory space used to hold your images.
Acquisition Management
The following VIs are used throughout this lesson and the rest of the course.
These VIs are used to create and manipulate images, and to allocate and free
memory for storing images.
IMAQ Init loads your camera configuration information and configures
your NI Frame Grabber or NI Smart Camera. The Interface Name
(default img0) refers to the device name from MAX. IMAQ Init
generates an IMAQ Session that references the specified image
acquisition device for any future NI-IMAQ driver calls.
IMAQ Close stops the NI Frame Grabber or NI Smart Camera
acquisition, releases any allocated resources, and closes the specified
IMAQ Session.
IMAQdx Open Camera opens an IEEE 1394 or GigE Vision camera,
queries the camera for its capabilities, loads a camera configuration file,
and creates a unique reference to the camera.
IMAQdx Close Camera stops an IEEE 1394 or GigE Vision camera
acquisition in progress, releases resources associated with the
acquisition, and closes the specified session.
IMAQ Create allocates an image buffer that stores your image.
IMAQ Dispose frees the memory allocated for the image buffer. Call
this VI only after the image is no longer required for the remainder of
the processing.
Single Buffer Acquisition VIs
The simplest acquisition mode is the snap, which acquires a single image
into a memory buffer. Figure 3-1 illustrates a snap acquisition.
Figure 3-1. Snap Acquisition
Grab acquisitions transfer images into a single buffer, overwriting the same
buffer with new frames as long as the acquisition is in progress. The buffers
are copied as necessary to a separate processing buffer where analysis or
display may occur. Figure 3-2 illustrates a grab acquisition.
Figure 3-2. Grab Acquisition
Use the IMAQ Snap VI, IMAQ Grab Setup VI, and IMAQ Grab Acquire VI
to acquire images. A snap is like a snapshot, in which you acquire a single
image from the camera. A grab is similar to a video, in which you acquire
every image that comes from the camera. The images in a grab are displayed
successively, producing a full-motion video.
IMAQ Snap acquires a single image into a memory buffer for NI Frame
Grabbers and NI Smart Cameras. Use this VI for low-speed or
single-capture applications.
IMAQ Grab Setup initializes an NI Frame Grabber or NI Smart Camera
acquisition and starts capturing the image to a single internal buffer. Call
this VI only once before a grab acquisition.
IMAQ Grab Acquire copies the current image acquired by an NI Frame
Grabber or NI Smart Camera to a LabVIEW image buffer.
IMAQdx Snap configures, starts, acquires, and unconfigures a snap
acquisition for IEEE 1394 and GigE Vision cameras. If you call this VI
before calling IMAQdx Open Camera VI, IMAQdx Snap VI uses cam0
by default. If the image type does not match the video format of the
camera, this VI changes the image type to a suitable format.
[Figure 3-1 blocks: Image Acquisition Device → Buffer. Figure 3-2 blocks: Image Acquisition Device → Acquisition Buffer → Processing Buffer.]
IMAQdx Configure Grab configures and starts a grab acquisition for
IEEE 1394 and GigE Vision cameras. If you call this VI before calling
IMAQdx Open Camera VI, IMAQdx Configure Grab VI uses cam0 by
default. Use IMAQdx Unconfigure Acquisition VI to unconfigure the
acquisition.
IMAQdx Grab acquires the most current frame of an IEEE 1394 or GigE
Vision camera into Image Out. Call this VI only after calling IMAQdx
Configure Grab VI.
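The snap and grab semantics described above can be illustrated with a toy simulation. The class below is a conceptual Python stand-in with invented names; it is not the NI-IMAQ or NI-IMAQdx API. It shows the key single-buffer behavior: during a grab, the acquisition buffer is continuously overwritten, and each grab call simply copies whatever frame is current.

```python
class SimulatedCamera:
    """Toy stand-in for a camera/driver pair (hypothetical, not NI's API).
    Frames arrive continuously; the single acquisition buffer always
    holds only the most recent one."""
    def __init__(self):
        self.frame_counter = 0
        self.acquisition_buffer = None

    def deliver_frame(self):
        # The hardware overwrites the single buffer with each new frame
        self.frame_counter += 1
        self.acquisition_buffer = f"frame-{self.frame_counter}"

    def snap(self):
        # Single-shot: trigger one acquisition and return that frame
        self.deliver_frame()
        return self.acquisition_buffer

    def grab(self):
        # Copy whatever the acquisition buffer currently holds
        return self.acquisition_buffer

cam = SimulatedCamera()
first = cam.snap()       # one-off acquisition: "frame-1"
for _ in range(3):       # a running grab keeps overwriting the buffer
    cam.deliver_frame()
latest = cam.grab()      # only the newest frame is still available
```

The point of the simulation: frames delivered while you were not grabbing are gone, which is why multiple-buffer (ring) acquisitions exist for applications that must process every frame.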
Vision Acquisition Express VI
The Vision Acquisition Express VI is a powerful tool that allows you to
quickly configure your acquisition. This VI can be used for all types of
acquisitions, including single-buffer, multi-buffer, triggered and simulated
acquisitions. The Express VI form factor allows for simple, easy
configuration, and flexibility for reconfiguration. The Express VI can also
be converted into standard LabVIEW code.
Displaying Images
One of the first things you may want to do when you acquire an image is
display it on your monitor. NI Vision VIs provide several ways for you to
display your images.
In LabVIEW 6.1 and earlier, the most common display method was a new
window that opens separately from your LabVIEW interface. This required
NI-IMAQ. LabVIEW 7.0 introduced the Image data type, which allows the
display of images on your front panel using an Image Display control that is
native to LabVIEW. This is the easiest method and the one you use most
during this course. If you are interested in displaying images embedded in
your front panel in previous versions of LabVIEW, refer to the LabVIEW
Help for information on displaying images in a Picture control or an
Intensity graph.
Remember that when you display images, the display may differ from the
actual image stored in memory. For instance, if you want to display an 8-bit
monochrome image, but your monitor displays 16 or 256 colors, the pixel
intensities of the image may appear distorted.
Note Although the display method may change, the image stored in memory is
unaffected. You can verify the image by looking at its actual pixel values.
The display of your images is not necessary or required for image
acquisition and processing. If your application does not require a graphical
user interface, you may want to eliminate all displays from the source code
of the application.
WindDraw Tools
NI Vision includes VIs for image display such as IMAQ WindDraw, IMAQ
GetPalette, and several Image Browser VIs.
IMAQ WindDraw opens or refreshes a display window. Use this VI to
open up to 16 display windows and give each window certain properties,
such as a window title.
Figure 3-3. IMAQ WindDraw Example
IMAQ GetPalette provides a different color palette for displaying a
grayscale image and maps each pixel intensity to a predefined color
value. Use IMAQ GetPalette to modify the display palette to highlight
intensities that are difficult to visualize.
The Image Browser functions display multiple thumbnail images in a
single display window. These functions are useful for displaying the
results of a sequence because you can display every frame in your
sequence for easy inspection.
You also can control your display window with utilities that allow you
to close, hide, show, move, or resize the window.
Exercise 3-1 Snap and Display VI
Goal
Acquire and display images using LabVIEW.
Scenario
You need to acquire a single image from your camera and view it on a
monitor.
Design
The input for this problem is the image acquired from your camera. The
output is a display of the image.
Flowchart
Figure 3-4. Flowchart of Snap and Display VI
Implementation
Instructor's Note This exercise is designed for use with either a frame grabber and an
analog camera or an IEEE 1394 camera. If an IEEE 1394 camera is attached to the
computer, complete Part A: Acquiring from an IEEE 1394 Camera. If a frame grabber is
installed in the computer, complete Part B: Acquiring from a Frame Grabber.
(Figure 3-4 steps: Initialize a Session with the Frame Grabber or IEEE 1394 Camera → Acquire and Display an Image → Close the Session → Dispose of the Image.)
Part A: Acquiring from an IEEE 1394 Camera
1. Open a blank VI.
2. Save the VI as Snap and Display 1394.vi in the
<Exercises>\LabVIEW Machine Vision directory. You will use
this VI in later exercises.
Note Folder names in angle brackets, such as <Exercises>, refer to folders on the root
directory of your computer.
3. Acquire an image.
Place the Vision Acquisition Express VI (Vision and Motion»Vision Express»Vision Acquisition) on the block diagram.
In the NI Vision Acquisition Express configuration window, select NI-IMAQdx Devices»cam0 in the left-hand pane.
Click Acquire Single Image to test your acquisition. You should see
an acquired image.
Click Next.
Select Single Acquisition with processing for the acquisition type.
Click Next.
Click Test to verify that your acquisition is configured correctly.
Click Next.
Click Finish to finish building the express VI.
On the front panel, right-click the Image Display indicator and select
Snapshot.
Note Snapshot mode allows the Image Display control to maintain a copy of the image.
If Snapshot mode is off, the Image Display clears when the IMAQ Dispose VI is called
and frees the memory allocated to the image.
4. Add image management and error handling to the VI.
Place an IMAQ Dispose VI (Vision and Motion»Vision Utilities»Image Management»IMAQ Dispose) on the block diagram.
Place a Simple Error Handler VI (Programming»Dialog & User Interface»Simple Error Handler) on the block diagram.
Tip To search for functions and VIs, click the Search button on the Functions palette
toolbar and start typing in the name of the function or VI in the text box at the top of the
palette. LabVIEW lists all matching items that either start with or contain the text you
typed. You can click one of the search results and drag it to the block diagram.
Double-click the search result to highlight its location on the palette.
Wire the block diagram as shown in Figure 3-5.
Figure 3-5. Snap and Display 1394 VI Block Diagram
Note The execution order of the Image Display control is not enforced. Because
LabVIEW decides execution order when it is compiled, some instances of the program
could dispose of the image buffer before it is read by the Image Display. If this occurs,
there will be no image to show on the front panel.
5. Force proper execution order in the VI.
Draw a Flat Sequence Structure (Programming»Structures»Flat Sequence Structure) around everything except IMAQ Dispose and the Simple Error Handler.
Right-click the Sequence Structure and select Add Frame After.
Move the IMAQ Dispose VI and the Simple Error Handler VI into
the second frame of the Sequence Structure. Reconnect the wires.
Your VI should look like Figure 3-6.
Figure 3-6. Snap and Display 1394 VI with Sequence Structure
6. Save the VI.
Testing
1. Display the front panel.
2. Run the VI. An image is acquired and displayed in the Image Display
control.
3. Examine the Vision Acquisition Express VI.
Go to the block diagram.
Right-click the Vision Acquisition Express VI and select Open
Front Panel.
Click the Convert button when prompted to convert to a subVI.
View the code generated by the Vision Acquisition Express VI.
Open the Context Help window by selecting Help»Show Context Help.
Place your mouse cursor over each VI in the block diagram. The
Context Help window content changes to show information about
the object that your mouse is over. To see the detailed help for a VI,
right-click the VI and select Help.
Note Notice that the code opens a session, creates a temporary memory location for a
single image, acquires a single image, outputs the image, and closes the session.
Close the generated subVI. Click Defer Decision if prompted to
save changes.
Go to the block diagram of Snap and Display 1394.vi. Notice
that the Vision Acquisition Express VI is pale yellow because you
converted it into a subVI.
Select Edit»Undo Change Attribute.
Click Don't Save when prompted to save changes. Notice that the converted subVI is now blue because it has changed back into the original Vision Acquisition Express VI.
4. Save the VI.
Part B: Acquiring from a Frame Grabber
1. Open a blank VI.
2. Save the VI as Snap and Display.vi in the <Exercises>\
LabVIEW Machine Vision directory. You will use this VI in later
exercises.
Note Folder names in angle brackets, such as <Exercises>, refer to folders on the root
directory of your computer.
3. Acquire an image.
Place the Vision Acquisition Express VI (Vision and Motion»Vision Express»Vision Acquisition) on the block diagram.
In the NI Vision Acquisition Express configuration window, select NI-IMAQ Devices»PCI-14xx»Channel0 in the left-hand pane.
Click Acquire Single Image to test an acquisition. You should see
an acquired image.
Click Next.
Select Single Acquisition with processing for the acquisition type.
Click Next.
Click Test to verify that your acquisition is configured correctly.
Click Next.
Click Finish to finish building the express VI.
On the front panel, right-click the Image Display control and select
Snapshot.
Note Snapshot mode allows the Image Display control to maintain a copy of the image.
If Snapshot mode is off, the Image Display clears when the IMAQ Dispose VI is called
and frees the memory allocated to the image.
4. Add image management and error handling to the VI.
Place an IMAQ Dispose VI (Vision and Motion»Vision Utilities»Image Management»IMAQ Dispose) on the block diagram.
Place a Simple Error Handler VI (Programming»Dialog & User Interface»Simple Error Handler) on the block diagram.
Tip To search for functions and VIs, click the Search button on the Functions palette
toolbar and start typing in the name of the function or VI in the text box at the top of the
palette. LabVIEW lists all matching items that either start with or contain the text you
typed. You can click one of the search results and drag it to the block diagram.
Double-click the search result to highlight its location on the palette.
Wire the block diagram as shown in Figure 3-7.
Figure 3-7. Snap and Display VI Block Diagram
Note The execution order of the Image Display control is not enforced. Because
LabVIEW decides execution order when it is compiled, some instances of the program
could dispose of the image buffer before it is read by the Image Display. If this occurs,
there will be no image to show on the front panel.
5. Force proper execution order in the VI.
Draw a Flat Sequence Structure (Programming»Structures»Flat Sequence Structure) around everything except IMAQ Dispose and the Simple Error Handler.
Right-click the Sequence Structure and select Add Frame After.
Move the IMAQ Dispose VI and the Simple Error Handler VI into
the second frame of the Sequence Structure. Reconnect the wires.
Your VI should look like Figure 3-8.
Figure 3-8. Snap and Display VI with Sequence Structure
6. Save the VI.
Testing
1. Display the front panel.
2. Run the VI. An image is acquired and displayed in the Image Display
control.
3. Examine the Vision Acquisition Express VI.
Go to the block diagram.
Right-click the Vision Acquisition Express VI and select Open
Front Panel.
Click the Convert button when prompted to convert to a subVI.
View the code generated by the Vision Acquisition Express VI.
Open the Context Help window by selecting Help»Show Context Help.
Place your mouse cursor over each VI in the block diagram. The
Context Help window content changes to show information about
the object that your mouse is over. To see the detailed help for a VI,
right-click the VI and select Help.
Note Notice that the code opens a session, creates a temporary memory location for a
single image, acquires a single image, outputs the image, and closes the session.
Close the generated subVI. Click Defer Decision if prompted to
save changes.
Go to the block diagram of Snap and Display.vi. Notice that
the Vision Acquisition Express VI is pale yellow because you
converted it into a subVI.
Select Edit»Undo Change Attribute.
Click Don't Save when prompted to save changes. Notice that the converted subVI is now blue because it has changed back into the original Vision Acquisition Express VI.
4. Save the VI.
End of Exercise 3-1
Exercise 3-2 Snapping Continuously
Goal
Observe the effect of snapping images continuously.
Description
You will enhance an application you previously created. The code you add
is the same for either the frame grabber or IEEE 1394 camera versions of
your application.
Implementation
In the following steps, you will create a block diagram similar to Figure 3-9.
Figure 3-9. Snap and Display Continuous VI Block Diagram
1. Open Snap and Display.vi (or Snap and Display 1394.vi),
located in the <Exercises>\LabVIEW Machine Vision directory.
2. Save the VI as Snap and Display Continuous.vi (or Snap and
Display Continuous 1394.vi) in the <Exercises>\LabVIEW
Machine Vision directory.
3. Open the block diagram of the VI.
4. Use a While Loop to make the snap and display process run
continuously.
Right-click the sequence structure and select Remove Sequence.
Place a While Loop (Programming»Structures»While Loop) around the Vision Acquisition Express VI and the Image Display indicator.
Right-click the conditional terminal of the While Loop and select
Create Control to create a Stop button for the application.
5. Add code for error handling.
Right-click the loop tunnel on the right side of the While Loop
connecting the error wire and select Replace with Shift Register.
Wire the output of the shift register on the left side of the While Loop
to the error in input of the Vision Acquisition Express VI.
Right-click the input of the shift register on the left side of the While Loop and select Create»Constant. This initializes the shift register with no errors every time you start this VI.
Place the Unbundle By Name (Programming»Cluster, Class, & Variant»Unbundle By Name) function inside the While Loop.
Wire the error out output from the Vision Acquisition Express VI to
the input of the Unbundle By Name function.
Place the Or (Programming»Boolean»Or) function inside the While Loop.
Wire the status element of the error cluster to the x input of the
Or function.
Wire the conditional terminal as shown in Figure 3-9.
6. Add code to monitor the image display rate in milliseconds.
Place the Tick Count (ms) (Programming»Timing»Tick Count (ms)) function inside the While Loop.
Place the Subtract function (Programming»Numeric»Subtract) inside the While Loop.
Wire the output of the Tick Count function to the upper input of the
Subtract function.
Create a wire branch from the output of the Tick Count function and
connect it to the right side of the While Loop. A loop tunnel is
created.
Right-click the loop tunnel and select Replace with Shift Register.
Wire the output of the left Shift Register to the lower input of the
Subtract function.
Right-click the output of the Subtract function and select Create»Indicator. Rename the indicator Milliseconds between images.
7. Save the VI.
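The timing logic in step 6 (a shift register carries the previous Tick Count value, which is subtracted from the current one on each iteration) can be sketched in Python. The 33 ms step below is an assumed frame interval for illustration, not a measured value:

```python
# A fake millisecond clock that advances 33 ms per frame (an assumed rate,
# roughly 30 frames per second), so the arithmetic is deterministic.
def tick_source():
    t = 0
    while True:
        yield t
        t += 33

ticks = tick_source()
previous = next(ticks)        # the shift register's initial value
intervals = []
for _ in range(4):            # four iterations of the While Loop
    now = next(ticks)                  # Tick Count (ms)
    intervals.append(now - previous)   # Subtract -> "Milliseconds between images"
    previous = now                     # carried to the next iteration

print(intervals)              # prints [33, 33, 33, 33]
```

The shift register plays the role of the `previous` variable: it preserves the last tick value across loop iterations so the difference can be computed.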
Testing
1. Display the front panel.
2. Run the VI. The numeric indicator displays how many milliseconds it
takes to acquire each frame. Watch as the images are displayed in the
Image Display, and notice the amount of time it takes to acquire each
frame.
3. Click the Stop button on the front panel to stop the acquisition.
End of Exercise 3-2
Exercise 3-3 Grab and Display
Goal
Acquire live images using a Grab and compare the acquisition rates of Grab
and Snap.
Description
The live acquisition in the Snap and Display Continuous VI is slow because
the Vision Acquisition Express VI you configured for that VI performs
several memory management routines each time the VI is called. It is not
necessary to perform these routines every time. You can configure the
Vision Acquisition Express VI to initialize once and then acquire
continuously, resulting in a faster rate of acquisition.
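The speedup comes from amortizing the one-time setup cost over many frames. A rough cost model in Python (the overhead figures below are invented for illustration, not measured NI-IMAQ timings):

```python
# Invented overhead figures for illustration only; not measured NI-IMAQ timings.
SETUP_COST = 0.005   # seconds spent configuring a session and its buffers
READ_COST = 0.001    # seconds spent copying one frame

def snap_loop(n):
    """Repeated snaps: pay the setup cost for every frame."""
    total = 0.0
    for _ in range(n):
        total += SETUP_COST + READ_COST
    return total

def grab_loop(n):
    """A grab: pay the setup cost once, then only the per-frame cost."""
    return SETUP_COST + n * READ_COST

frames = 100
print(f"snap: {snap_loop(frames) * 1000:.0f} ms")
print(f"grab: {grab_loop(frames) * 1000:.0f} ms")
```

Whatever the real numbers are on your hardware, the snap loop scales with both costs while the grab loop scales only with the per-frame cost, which is why the Milliseconds between images indicator drops in this exercise.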
Flowchart
Figure 3-10 illustrates the new method of acquisition.
Figure 3-10. Flowchart of Grab and Display VI
(Figure 3-10 steps: Initialize a Session with the Frame Grabber or IEEE 1394 Camera → Copy an Image from the Acquisition Buffer Into a Processing Buffer and Display It → Stop Acquisition? If No, repeat; if Yes, Close the Session and Dispose of the Image.)
Implementation
You will create a block diagram similar to Figure 3-11.
Figure 3-11. Block Diagram of Grab and Display VI
1. Open a blank VI.
2. Save the VI as Grab and Display.vi (or Grab and Display
1394.vi) in the <Exercises>\LabVIEW Machine Vision
directory.
3. Acquire live images.
Place the Vision Acquisition Express VI (Vision and Motion»Vision Express palette) on the block diagram.
In the NI Vision Acquisition Express configuration window, select NI-IMAQdx Devices»cam0 in the left-hand pane if using an IEEE 1394 camera. Select NI-IMAQ Devices»PCI-14xx»Channel0 in the left-hand pane if using a frame grabber.
Click Acquire Single Image to test your acquisition. You should see
an acquired image.
Click Next.
Select Continuous Acquisition with inline processing for the
acquisition type.
Verify that Acquire Image Type is set to Acquire Most Recent
Image. This mode could miss some images, but the image returned
will always be the most recent.
Click Next.
Click Test to verify that your acquisition is configured correctly.
Click Next.
In the Indicators section, enable the Optional Indicators»Frame Rate checkbox.
Click Finish to finish building the express VI.
Right-click the Frame Rate output terminal of the Vision Acquisition Express VI and select Create»Indicator.
On the front panel, right-click the Image Display control and select
Snapshot.
4. Add code for image management and error handling.
On the block diagram, right-click the right border of the While Loop
and select Add Shift Register.
Wire the error out output of the Vision Acquisition Express VI to
the input of the shift register on the right side of the While Loop.
Wire the output of the shift register on the left side of the While Loop
to the error in input of the Vision Acquisition Express VI.
Right-click the input of the shift register on the left side of the While Loop and select Create»Constant. This initializes the shift register with no errors every time you start this VI.
Place the Unbundle By Name (Programming»Cluster, Class, & Variant»Unbundle By Name) function inside the While Loop.
Wire the error out output from the Vision Acquisition Express VI to
the input of the Unbundle By Name function.
Place the Or (Programming»Boolean»Or) function inside the While Loop.
Wire the status element of the error cluster to the x input of the
Or function.
Wire the conditional terminal as shown in Figure 3-11.
Place an IMAQ Dispose VI (Vision and Motion»Vision Utilities»Image Management»IMAQ Dispose) on the block diagram.
Place a Simple Error Handler VI (Programming»Dialog & User Interface»Simple Error Handler) on the block diagram.
Wire the IMAQ Dispose VI and Simple Error Handler VI as shown
in Figure 3-11.
5. Add code to monitor the image display rate in milliseconds.
Place the Tick Count (ms) (Programming»Timing»Tick Count (ms)) function inside the While Loop.
Place the Subtract function (Programming»Numeric»Subtract) inside the While Loop.
Wire the output of the Tick Count function to the upper input of the
Subtract function.
Create a wire branch from the output of the Tick Count function and
connect it to the right side of the While Loop. A loop tunnel is
created.
Right-click the loop tunnel and select Replace with Shift Register.
Wire the output of the left Shift Register to the lower input of the
Subtract function.
Right-click the output of the Subtract function and select Create»Indicator. Rename the indicator Milliseconds between images.
6. Save the VI.
Testing
1. Display the front panel.
2. Run the VI. Watch as the images are displayed in the image window, and
notice the amount of time it takes to acquire each frame.
3. Click the Stop control on the front panel to stop the acquisition.
4. Examine the Vision Acquisition Express VI.
Go to the block diagram.
Right-click the Vision Acquisition Express VI and select Open
Front Panel.
Click the Convert button when prompted to convert to a subVI.
Open the block diagram to view the code generated by the Vision
Acquisition Express VI.
Open the Context Help window by selecting Help»Show Context Help.
Place your mouse cursor over each VI in the block diagram. The
Context Help window content changes to show information about
the object that your mouse is over. To see the detailed help for a VI,
right-click the VI and select Help.
Note Notice in the first case structure that the code in the True case will only execute
the first time this subVI is called by the Grab and Display VI because of the First Call
function located to the left of the While Loop. As a result, the subVI opens a session,
creates a temporary memory location for a single image buffer, and configures a grab
acquisition only the first time the subVI is called from the Grab and Display VI. In all
subsequent calls to the subVI, the subVI acquires an image and outputs the image until
the grab acquisition is stopped. Notice that the session and image buffer created in the
True case are stored in the two shift registers on the While Loop.
Close the generated subVI. Click Defer Decision if prompted to
save changes.
Go to the block diagram of Grab and Display.vi (or Grab and
Display 1394.vi). Notice that the Vision Acquisition Express VI
is pale yellow because you converted it into a subVI.
Select Edit»Undo Change Attribute.
Click Don't Save when prompted to save changes. Notice that the converted subVI is now blue because it has changed back into the original Vision Acquisition Express VI.
5. Save the VI.
End of Exercise 3-3
File VIs
While it is not necessary to save acquired images, you may want to save a
single image or a series of images. NI Vision provides VIs that allow you to
open and save images.
IMAQ ReadFile reads an image file. The file format can be a standard
format (BMP, TIFF, JPEG, JPEG2000, PNG, and AIPD) or a
nonstandard format known to the user. In all cases, the read pixels are
converted automatically into the image type passed by Image.
IMAQ Write File 2 writes the image to a file in the selected format. The
format can be BMP, JPG, JPG2000, PNG, PNG with Vision info,
or TIFF.
Vision also includes a library of VIs that operate on AVI files. Use the
AVI VIs to read and write multiple images to an AVI file. You can write
compressed AVIs and additional data, such as time-stamp data, with your
images. The compression options available are based on the compatible AVI
compression filters currently installed on the computer.
IMAQ AVI Create creates a new AVI file or rewrites an old AVI file.
This VI also creates an AVI Refnum that is a reference to the AVI file
created.
IMAQ AVI Write Frame writes an image to the AVI file specified by the
AVI Refnum.
IMAQ AVI Open opens an existing AVI file. You can specify which file
is opened.
IMAQ AVI Read Frame reads an image from the AVI file specified by
the AVI Refnum. You can specify which frame of the AVI file to read.
IMAQ AVI Close closes the AVI file associated with the AVI Refnum.
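The create/write/close lifecycle of the AVI VIs can be sketched with a mock writer class. MockAviWriter below is illustrative only; it mirrors the call order of IMAQ AVI Create, IMAQ AVI Write Frame, and IMAQ AVI Close, but it is not the NI-IMAQ interface:

```python
# MockAviWriter mirrors the call order of the AVI VIs; it is illustrative
# only and not the NI-IMAQ interface.
class MockAviWriter:
    def __init__(self, path, fps):
        # IMAQ AVI Create: make the file and return a refnum.
        self.path, self.fps = path, fps
        self.frames = []
        self.closed = False

    def write_frame(self, image):
        # IMAQ AVI Write Frame: append one image to the file.
        if self.closed:
            raise RuntimeError("AVI refnum already closed")
        self.frames.append(image)

    def close(self):
        # IMAQ AVI Close: release the refnum.
        self.closed = True


avi = MockAviWriter("Acquired Video.avi", fps=30)
for i in range(5):
    avi.write_frame(f"frame-{i}")     # one call per acquisition loop iteration
avi.close()
print(len(avi.frames), "frames at", avi.fps, "fps")
```

Exercise 3-5 follows exactly this shape on the block diagram: Create before the While Loop, Write Frame inside it, and Close after it.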
Exercise 3-4 Snap and Save to Image File
Goal
Acquire an image and save the image to file.
Description
You will enhance an application you previously created. You can modify the
code you created for the Snap and Display.vi to save the acquired
image to file.
Implementation
In the following steps, you will create a block diagram similar to
Figure 3-12.
Figure 3-12. Snap and Save To File VI Block Diagram
1. Open Snap and Display.vi (or Snap and Display 1394.vi)
located in the <Exercises>\LabVIEW Machine Vision directory.
2. Save the VI as Snap and Save to File.vi (or Snap and Save
to File 1394.vi) in the <Exercises>\LabVIEW Machine
Vision directory.
3. Save the acquired image to file.
Place the IMAQ Write File 2 VI (Vision and Motion»Vision Utilities»Files»IMAQ Write File 2) on the block diagram in the first frame of the Sequence Structure.
Create the File Path control by right-clicking the File Path input of the IMAQ Write File 2 VI and selecting Create»Control.
Create the Compress control by right-clicking the Compress? (N) input of the IMAQ Write File 2 VI and selecting Create»Control.
Finish wiring the VI as shown in Figure 3-12.
4. Save the VI.
Testing
1. Display the front panel.
2. Set the File Path control to <Exercises>\LabVIEW Machine
Vision\Acquired Image.bmp to create a new image file.
3. Run the VI.
4. In Windows Explorer, open Acquired Image.bmp in the
<Exercises>\LabVIEW Machine Vision directory. Confirm that
the image file displays the image acquired by the VI.
5. Close the VI when finished.
End of Exercise 3-4
Exercise 3-5 Grab and Save to AVI File
Goal
Acquire live images using a Grab and save the images to an AVI file.
Description
You will modify an application you previously created to save acquired
images to an AVI file.
Implementation
In the following steps, you will create a block diagram similar to
Figure 3-13.
Figure 3-13. Grab and Save To AVI VI Block Diagram
1. Open Grab and Display.vi (or Grab and Display 1394.vi)
located in the <Exercises>\LabVIEW Machine Vision directory.
2. Save the VI as Grab and Save to AVI.vi (or Grab and Save
to AVI 1394.vi) in the <Exercises>\LabVIEW Machine
Vision directory.
3. Remove the Error constant located to the left of the While Loop.
4. Save the acquired images to an AVI file.
Place the IMAQ AVI Create VI (Vision and Motion»Vision Utilities»Files»AVI»IMAQ AVI Create) on the block diagram to the left of the While Loop.
Create the AVI Path control by right-clicking the AVI Path input of the IMAQ AVI Create VI and selecting Create»Control.
Create the Frames Per Second numeric constant by right-clicking the Frames Per Second input of the IMAQ AVI Create VI and selecting Create»Constant. The Frames Per Second input indicates the desired playback rate of the AVI you create.
desired playback rate of the AVI you create.
Place the IMAQ AVI Write Frame VI (Vision and Motion»Vision Utilities»Files»AVI»IMAQ AVI Write Frame) on the block diagram inside the While Loop.
Place the IMAQ AVI Close VI (Vision and Motion»Vision Utilities»Files»AVI»IMAQ AVI Close) on the block diagram to the right of the While Loop.
Wire the block diagram as shown in Figure 3-13.
5. Save the VI.
Testing
1. Display the front panel.
2. Set the AVI Path control to <Exercises>\LabVIEW Machine
Vision\Acquired Video.avi to create a new AVI file.
3. Run the VI.
4. In Windows Explorer, open Acquired Video.avi in the
<Exercises>\LabVIEW Machine Vision directory. Confirm that
the AVI file displays the images acquired by the VI.
5. Close the VI when finished.
End of Exercise 3-5
Multiple Buffer Acquisition VIs
Sequences and rings are related to the other two types of acquisitions that you have studied: snaps and grabs.
A sequence acquisition uses multiple buffers, but only writes to them one
time. The image acquisition stops as soon as each buffer has been filled
once. Figure 3-14 illustrates a sequence acquisition.
Figure 3-14. Sequence Acquisition
The IMAQdx Sequence VI configures, starts, acquires, stops, and
unconfigures a sequence acquisition for IEEE 1394 and GigE Vision
cameras. Use this VI to capture multiple images. If you call this VI before
calling IMAQdx Open Camera VI, IMAQdx Sequence VI uses cam0 by
default.
The IMAQ Sequence VI fills multiple buffers with a series of images for
NI Frame Grabbers and NI Smart Cameras. Figure 3-15 shows an example
of a VI that uses IMAQ Sequence. Notice the For Loop in this block
diagram. The For Loop creates a number of image buffers based on the
control titled Number of images. Each buffer is given a different name
(image0, image1, image2, and so on).
Figure 3-15. Example Application of IMAQ Sequence VI
Note It is important to give each image buffer a unique name. If you call IMAQ Create
VI multiple times using the same Image Name, you will effectively create and manipulate
a single image buffer.
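The note above can be illustrated with a name-keyed buffer table. In this Python sketch, a plain dict stands in for NI Vision's buffer management (which it only approximates) to show why reusing a name hands back the same buffer:

```python
# A plain dict stands in for the name-keyed buffer table; this only
# approximates how IMAQ Create behaves when given a repeated Image Name.
buffers = {}

def create_buffer(name):
    """Return the buffer registered under name, creating it on first use."""
    if name not in buffers:
        buffers[name] = {"name": name, "data": None}
    return buffers[name]

a = create_buffer("image0")
b = create_buffer("image0")   # same name: the same buffer comes back
c = create_buffer("image1")   # unique name: a distinct buffer
print(a is b, a is c)         # prints True False
```

This is why the For Loop in Figure 3-15 appends a number to each buffer name: identical names would collapse the whole sequence into one buffer.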
You can use NI-IMAQ functionality to specify a certain number of frames
to skip between each acquisition buffer. For example, if your camera
acquires 30 frames per second, you can use a sequence to acquire 30 images,
with no frames skipped, in an acquisition time of one second. You also can
acquire 30 images, skipping one frame after each buffer, with an acquisition
time of two seconds.
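The acquisition times quoted above follow from a simple relationship: the sequence consumes (1 + frames skipped) camera frames for each stored image.

```python
def sequence_time(num_images, camera_fps, frames_skipped=0):
    """Seconds needed to fill a sequence, given the camera frame rate."""
    frames_consumed = num_images * (1 + frames_skipped)
    return frames_consumed / camera_fps

print(sequence_time(30, 30))      # 30 images, no skips, at 30 fps -> 1.0
print(sequence_time(30, 30, 1))   # skip one frame per buffer -> 2.0
```

The two calls reproduce the one-second and two-second examples from the paragraph above.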
A ring acquisition uses and recycles multiple buffers in a continuous
acquisition. The interface copies images as they come from the camera into
Buffer 1, then Buffer 2, then Buffer 3, and then back to Buffer 1. Each buffer
is filled, one at a time, until all buffers contain an image. Then, the next incoming image overwrites the first image that was acquired. To process or
display one of these images, extract it from the ring while the ring continues
the acquisition in the background. Figure 3-16 illustrates a ring acquisition.
Figure 3-16. Ring Acquisition
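The recycling scheme in Figure 3-16 amounts to modulo indexing over a fixed list of buffers. A minimal sketch, with illustrative names (not the NI-IMAQ driver):

```python
class RingAcquisition:
    """Minimal sketch of ring-buffer indexing for a continuous acquisition."""
    def __init__(self, num_buffers):
        self.buffers = [None] * num_buffers
        self.frames_acquired = 0

    def on_frame(self, frame):
        # The newest frame overwrites the oldest buffer in the ring.
        index = self.frames_acquired % len(self.buffers)
        self.buffers[index] = frame
        self.frames_acquired += 1

    def last_valid(self):
        # Most recently completed buffer, which processing code extracts.
        index = (self.frames_acquired - 1) % len(self.buffers)
        return self.buffers[index]

ring = RingAcquisition(3)
for frame in ["f0", "f1", "f2", "f3"]:  # the fourth frame reuses Buffer 1
    ring.on_frame(frame)
```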
Exercise 3-6 Sequence Acquisition
Goal
Learn how to acquire a single sequence of images.
Scenario
You need to perform a one-time acquisition of multiple consecutive images.
You must learn how to communicate with your frame grabber or FireWire
camera to accomplish this task.
Description
You will create a VI that acquires a finite number of images.
Implementation
1. Open a blank VI.
2. Save the VI as Sequence Acquisition.vi (or Sequence
Acquisition 1394.vi) in the <Exercises>\LabVIEW Machine
Vision directory.
3. Acquire a sequence of images.
Place the Vision Acquisition Express VI (Vision and Motion»
Vision Express»Vision Acquisition) on the block diagram.
In the NI Vision Acquisition Express configuration window, select
NI-IMAQdx Devices»cam0 in the left-hand pane if using an
IEEE 1394 camera. Select NI-IMAQ Devices»PCI-14xx»
Channel 0 in the left-hand pane if using a frame grabber.
Click Next.
Select Finite Acquisition with post processing for the acquisition
type.
Set Number of Images to Acquire to 5.
Click Next.
Click Test to verify that your acquisition is configured correctly.
Click Next.
In the Controls section, enable the Setup Settings»Number of
Images checkbox.
Click Finish to finish building the express VI. Your block diagram
should look like Figure 3-17.
Figure 3-17. Sequence Acquisition VI (express VI only) Block Diagram
4. Finish building the block diagram shown in Figure 3-18.
Figure 3-18. Sequence Acquisition VI Block Diagram
Right-click the Number of Images input of the Vision Acquisition
Express VI and select Create»Control.
Place a While Loop and a For Loop on the block diagram.
Place the Wait (ms), Unbundle By Name, and Or functions inside the
While Loop.
Create the Stop control by right-clicking the input of the conditional
terminal and selecting Create»Control.
Place an IMAQ Dispose VI (Vision and Motion»Vision Utilities»
Image Management palette) in the For Loop.
Place a Simple Error Handler VI to the right of the For Loop.
Arrange and wire the block diagram as shown in Figure 3-18.
5. Save the VI.
Testing
1. Display the front panel.
2. Run the VI. View the images in the image array indicator.
3. Examine the Vision Acquisition Express VI.
Go to the block diagram.
Right-click the Vision Acquisition Express VI and select Open
Front Panel.
Click the Convert button when prompted to convert to a subVI.
View the code generated by the Vision Acquisition Express VI.
Open the Context Help window by selecting Help»Show Context
Help.
Place your mouse cursor over each VI in the block diagram. The
Context Help window content changes to show information about
the object that your mouse is over. To see the detailed help for a VI,
right-click the VI and select Help.
Note Notice that the For Loop is creating an array of image buffers because a sequence
acquisition acquires multiple images into separate image buffers.
Close the generated subVI. Click Defer Decision if prompted to
save changes.
Go to the block diagram of Sequence Acquisition.vi (or
Sequence Acquisition 1394.vi). Notice that the Vision
Acquisition Express VI is pale yellow because you converted it into
a subVI.
Select Edit»Undo Change Attribute.
Click Don't Save when prompted to save changes. Notice that the
converted subVI is now blue because it has changed back into the
original Vision Acquisition Express VI.
4. Save the VI.
5. Close the VI when finished.
End of Exercise 3-6
Exercise 3-7 Ring Acquisition
Goal
Learn how to perform a continuous multi-buffer acquisition.
Scenario
You need to perform a continuous acquisition. To avoid missing any frames,
you will buffer the acquisition.
Description
You will create a VI that performs a ring acquisition, which acquires
images continuously using multiple buffers.
Flowchart
Figure 3-19. Flowchart of Low Level Ring Acquisition
[Flowchart: initialize a session with the frame grabber or IEEE 1394
camera; create a list of processing buffers in system memory and
acquisition buffers in on-board memory; repeatedly copy the last valid
image from an acquisition buffer into a processing buffer and display
it until Stop Acquisition? is Yes; release all acquisition buffers and
close the session; dispose of the processing buffers.]
Implementation
You will create a block diagram similar to the one in Figure 3-20.
Figure 3-20. Block Diagram of Ring Acquisition VI
1. Open a blank VI.
2. Save the VI as Ring Acquisition.vi (or Ring Acquisition
1394.vi) in the <Exercises>\LabVIEW Machine Vision
directory.
3. Acquire live images in multiple buffers.
Place the Vision Acquisition Express VI (Vision and Motion»
Vision Express»Vision Acquisition) on the block diagram.
In the NI Vision Acquisition Express configuration window, select
NI-IMAQdx Devices»cam0 in the left-hand pane if using an
IEEE 1394 camera. Select NI-IMAQ Devices»PCI-14xx»
Channel 0 in the left-hand pane if using a frame grabber.
Click Next.
Select Continuous Acquisition with inline processing for the
acquisition type.
Set Acquire Image Type to Acquire Every Image.
Set Number of Images to Buffer to 10.
Click Next.
Click Test to verify that your acquisition is configured correctly.
Click Next.
Click Finish to finish building the express VI.
On the front panel, right-click the Image Display control and select
Snapshot.
4. Add code for disposing of images and error handling.
Right-click the right border of the While Loop and select Add Shift
Register.
Wire the error out output of the Vision Acquisition Express VI to the
input of the shift register on the right side of the While Loop.
Wire the output of the shift register on the left side of the While Loop
to the error in input of the Vision Acquisition Express VI.
Right-click the input of the shift register on the left side of the While
Loop and select Create»Constant. This initializes the shift register
with no errors every time you start this VI.
Place the Unbundle By Name function (Programming»Cluster,
Class, & Variant»Unbundle By Name) inside the While Loop.
Wire the error out output from the Vision Acquisition Express VI to
the input of the Unbundle By Name function.
Place the Or function (Programming»Boolean»Or) inside the
While Loop.
Wire the status element of the error cluster to the x input of the
Or function.
Place a For Loop on the block diagram to the right of the While
Loop.
Place an IMAQ Dispose VI (Vision and Motion»Vision Utilities»
Image Management»IMAQ Dispose) inside the For Loop.
Place a Simple Error Handler VI to the right of the For Loop.
Arrange and wire the block diagram as shown in Figure 3-20.
5. Save the VI.
Testing
1. Display the front panel.
2. Run the VI.
3. Examine the Vision Acquisition Express VI.
Go to the block diagram.
Right-click the Vision Acquisition Express VI and select Open
Front Panel.
Click the Convert button when prompted to convert to a subVI.
View the code generated by the Vision Acquisition Express VI.
Open the Context Help window by selecting Help»Show Context
Help.
Place your mouse cursor over each VI in the block diagram. The
Context Help window content changes to show information about
the object that your mouse is over. To see the detailed help for a VI,
right-click the VI and select Help.
Note Notice in the first case structure that the code in the True case will only execute
the first time this subVI is called from the Ring Acquisition VI because of the First Call
function located to the left of the While Loop. As a result, the subVI opens a session,
creates an array of image buffers, and configures a ring acquisition only the first time the
subVI is called from the Ring Acquisition VI. In all subsequent calls to the subVI from
the calling VI, the subVI extracts an image from one of the image buffers and outputs the
image until the ring acquisition is stopped. Notice that the session and image buffers
created in the True case are stored in the two shift registers on the While Loop.
Close the generated subVI. Click Defer Decision if prompted to
save changes.
Go to the block diagram of Ring Acquisition.vi (or Ring
Acquisition 1394.vi). Notice that the Vision Acquisition
Express VI is pale yellow because you converted it into a subVI.
Select Edit»Undo Change Attribute.
Click Don't Save when prompted to save changes. Notice that the
converted subVI is now blue because it has changed back into the
original Vision Acquisition Express VI.
4. Save the VI.
5. Close the VI when finished.
End of Exercise 3-7
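The First Call behavior described in the note above, one-time configuration on the first invocation and cheap work on every later call, can be sketched in Python (an illustrative analogy only; the names and return strings are made up):

```python
# Sketch of the First Call pattern used by the generated ring-acquisition subVI.
class RingSubVI:
    def __init__(self):
        self.session = None      # persists between calls, like a shift register
        self.first_call = True

    def call(self):
        if self.first_call:      # the "True case" runs exactly once
            self.session = "configured ring session"  # open + configure buffers
            self.first_call = False
            return "initialized"
        return "extracted image"  # every later call just fetches a buffer

subvi = RingSubVI()
results = [subvi.call() for _ in range(3)]
```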
B. Property Nodes
The IMAQ Property Node and the IMAQdx Property Node get and/or set
image acquisition properties. The nodes are expandable. Evaluation starts
from the top and proceeds downward until an error or the final evaluation
occurs.
If you want to add items to the Property Node, right-click the node and
select Add Element or click and drag the node to expand the number of
items in the node.
The properties are changed, in order, from top to bottom. If an error occurs
on one of the properties, the node stops at that property and returns an error.
No further properties are handled. The error string reports which property
caused the error.
If the small direction arrow on a property is on the left, you are setting the
property value. If the small direction arrow on the property is on the right,
you are getting the property value. Each property name has a short or long
name that you can select by right-clicking and changing Name Format.
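The top-to-bottom, stop-on-error evaluation order can be sketched as follows (a conceptual illustration, not driver code; `None` stands in for a value the driver would reject):

```python
def apply_properties(settings):
    """Apply (name, value) pairs in order; stop at the first failure and
    report which property caused it, as a Property Node does."""
    applied = []
    for name, value in settings:
        if value is None:                    # stand-in for a driver error
            return applied, "error on property %r" % name
        applied.append((name, value))        # property set successfully
    return applied, None

applied, err = apply_properties([("Gain", 320), ("Shutter", None), ("Gamma", 1)])
```

Only Gain is applied; Shutter fails, so Gamma is never handled, matching the behavior described above.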
NI-IMAQ Property Node
The IMAQ Property Node contains the following groups of properties:
Table 3-1. IMAQ Property Node
Group Description
Analog Parameters Sets analog device parameters
Board Information Returns information concerning the image acquisition device
Color Properties Sets parameters associated with a color acquisition
Encoder Returns information concerning the quadrature encoder
Image Parameters Defines parameters that affect an image acquisition
Session Information Sets information about the maximum possible size for an
acquisition and about line filtering
Status Information Returns status information about an acquisition
Figure 3-21 shows the IMAQ Property Node.
Figure 3-21. IMAQ Property Node
NI-IMAQdx Property Node
The IMAQdx Property Node contains the following groups of properties:
Figure 3-22 shows the IMAQdx Property Node.
Figure 3-22. IMAQdx Property Node
Table 3-2. IMAQdx Property Node
Group Description
Acquisition Attributes Defines parameters that affect an
image acquisition
Camera Attributes Controls camera-specific features
Camera Information Returns information concerning
the camera
Status Information Returns status information about an
acquisition
Exercise 3-8 Concept: Changing Acquisition Parameters
Goal
Adjust the parameters of an acquisition programmatically.
Description
You will create a new VI. The code you create will vary depending on
whether you acquire images from a frame grabber or an IEEE 1394 camera.
Flowchart
Figure 3-23. Flowchart of Snap with Options VI
Implementation
Instructor's Note This exercise is designed for use with either a frame grabber and an
analog camera or an IEEE 1394 camera. If an IEEE 1394 camera is attached to the
computer, complete Part A: Acquiring from an IEEE 1394 Camera. If a frame grabber is
installed in the computer, complete Part B: Acquiring from a Frame Grabber.
[Flowchart: initialize a session with the frame grabber or IEEE 1394
camera; change acquisition properties through the session; acquire and
display an image; close the session; dispose of the image.]
Part A: Acquiring from an IEEE 1394 Camera
1. Open a blank VI.
2. Save the VI as Snap with Options 1394.vi in the <Exercises>\
LabVIEW Machine Vision directory.
3. Use a Property Node to access the acquisition parameters of the camera.
Place an IMAQdx Open Camera VI (Vision and Motion»
NI-IMAQdx»Open) on the block diagram.
Right-click the Session In input of the IMAQdx Open Camera VI
and select Create»Constant. Select cam0 from the constant
drop-down listbox.
Place a Property Node (Vision and Motion»NI-IMAQdx»
Property Node) on the block diagram.
Wire the Session Out output of the IMAQdx Open Camera VI to the
reference input of the Property Node.
Click a Property Node element and select Camera Attributes»
Active Attribute.
Resize the Property Node to two terminals.
Click the second element of the Property Node and select Camera
Attributes»Value:DBL.
Right-click the Property Node and select Change All To Write.
Right-click the Active Attribute input of the Property Node and
select Create»Constant.
Set the constant to Gain::Value. This sets the active camera
attribute to the value of the gain.
Note You can find the other available camera attributes for your camera in the Camera
Attributes tab in MAX. You can also use the IMAQdx Enumerate Attributes VI.
Right-click the Value:DBL input of the Property Node and select
Create»Control.
Rename the control to Gain.
Place an IMAQ Create VI (Vision and Motion»Vision Utilities»
Image Management»IMAQ Create) on the block diagram.
Right-click the Image Name input of the IMAQ Create VI and select
Create»Constant.
Set the constant to Image.
Place an IMAQdx Snap VI (Vision and Motion»NI-IMAQdx»
Snap) on the block diagram.
Place an IMAQdx Close Camera VI (Vision and Motion»
NI-IMAQdx»Close) on the block diagram.
Place an IMAQ Dispose VI (Vision and Motion»Vision Utilities»
Image Management»IMAQ Dispose) on the block diagram.
Place a Simple Error Handler VI (Programming»Dialog & User
Interface»Simple Error Handler) on the block diagram.
Place an Image Display indicator (Vision»Image Display) on the
front panel.
On the front panel, right-click the Image Display control and select
Snapshot.
Wire the block diagram as shown in Figure 3-24.
Figure 3-24. Snap with Options 1394 VI Block Diagram
4. Force proper execution order in the VI.
Draw a Flat Sequence Structure (Programming»Structures»Flat
Sequence Structure) around everything except IMAQ Dispose and
the Simple Error Handler.
Right-click the Sequence Structure and select Add Frame After.
Move the IMAQ Dispose VI and the Simple Error Handler VI into
the second frame of the Sequence Structure. Reconnect the wires.
Your VI should look like Figure 3-25.
5. Save the VI.
Figure 3-25. Snap with Options 1394 VI with Sequence Structure
Testing
1. Display the front panel.
2. Run the VI with different values for Gain.
Note For the Basler scA640-70fm IEEE 1394 camera, the minimum value of Gain is
320 and the maximum value of Gain is 1023.
3. Close the VI when finished.
Part B: Acquiring from a Frame Grabber
1. Open a blank VI.
2. Save the VI as Snap with Options.vi in the <Exercises>\
LabVIEW Machine Vision directory.
3. Use a Property Node to access the acquisition parameters of the camera.
Place an IMAQ Init VI (Vision and Motion»NI-IMAQ»Initialize)
on the block diagram.
Right-click the Interface Name input of the IMAQ Init VI and select
Create»Constant.
Place a Property Node (Vision and Motion»NI-IMAQ»Property
Node) on the block diagram.
Wire the IMAQ Session Out output of the IMAQ Init VI to the
reference input of the Property Node.
Click a Property Node element and select Image Parameters»
Look-up Table.
Right-click the Property Node element and select Change to Write.
Right-click the Look-Up Table input and select Create»Control.
Place an IMAQ Create VI (Vision and Motion»Vision Utilities»
Image Management»IMAQ Create) on the block diagram.
Right-click the Image Name input of the IMAQ Create VI and select
Create»Constant.
Set the constant to Image.
Place an IMAQ Snap VI (Vision and Motion»NI-IMAQ»IMAQ
Snap) on the block diagram.
Place an IMAQ Close VI (Vision and Motion»NI-IMAQ»IMAQ
Close) on the block diagram.
Place an IMAQ Dispose VI (Vision and Motion»Vision Utilities»
Image Management»IMAQ Dispose) on the block diagram.
Place a Simple Error Handler VI (Programming»Dialog & User
Interface»Simple Error Handler) on the block diagram.
Place an Image Display indicator (Vision»Image Display) on the
front panel.
On the front panel, right-click the Image Display control and select
Snapshot.
Wire the block diagram as shown in Figure 3-26.
Figure 3-26. Snap with Options VI Block Diagram
4. Force proper execution order in the VI.
Draw a Flat Sequence Structure (Programming»Structures»Flat
Sequence Structure) around everything except IMAQ Dispose and
the Simple Error Handler.
Right-click the Sequence Structure and select Add Frame After.
Move the IMAQ Dispose VI and the Simple Error Handler VI into
the second frame of the Sequence Structure. Reconnect the wires.
Your VI should look like Figure 3-27.
5. Save the VI.
Figure 3-27. Snap with Options VI with Sequence Structure
Testing
1. Display the front panel.
2. Select a look-up table and run the VI.
End of Exercise 3-8
Exercise 3-9 Concept: Changing the Palette During a Grab
Goal
View an acquisition under various color palettes to accentuate different
features.
Scenario
Although the camera acquires monochrome images, the interpretation of
these images may be represented more clearly in a non-grayscale palette.
For example, if you want to acquire infrared images and display them, the
Thermal or Rainbow palettes may be best suited to this task.
Description
You will modify an existing VI for this exercise. The code you will add is
the same whether you are acquiring from a frame grabber or an IEEE 1394
camera.
1. Open Grab and Display.vi (or Grab and Display 1394.vi)
located in the <Exercises>\LabVIEW Machine Vision directory.
Figure 3-28. Block Diagram of Grab and Display VI
2. Open the block diagram of Grab and Display VI.
3. Save the VI as Grab with Palette Options.vi (or Grab with
Palette Options 1394.vi) in the <Exercises>\LabVIEW
Machine Vision directory.
4. Use an Image Display Property Node to change the palette used to
display the acquired images.
Right-click the Image Display indicator and select Create»
Property Node»Palette»Palette Type.
Place the Property Node after the Vision Acquisition Express VI.
Connect the error wire through the node.
Right-click the Property Node element and select Change to Write.
Right-click the Property Node element input and select Create»
Control.
The block diagram of your modified VI should look like Figure 3-29.
Figure 3-29. Block Diagram of Grab with Palette Options VI
5. Display the front panel and run the VI. While the VI is running, use the
Palette control to change the display palette. Notice the different
features highlighted in each palette.
Note Remember that the acquired image is not changed. Only the interpretation of the
image data is changed when the image is displayed.
6. Save the VI.
End of Exercise 3-9
C. Triggering
Often, you may need to link or coordinate a machine vision action with
events external to the computer, such as receiving a strobe pulse for lighting
or a pulse from an infrared detector that indicates the position of an item on
an assembly line. A trigger on an NI Frame Grabber can be any TTL-level
signal. All of the trigger lines are fully bidirectional so that the NI Frame
Grabber can generate or receive the triggers on any line. Use the RTSI
triggers to coordinate your NI Frame Grabber with other National
Instruments products, such as DAQ or motion control devices.
Use the IMAQ Configure Trigger2 VI or Vision Acquisition Express VI to
configure the trigger conditions for an acquisition from an NI Frame
Grabber. Figure 3-30 shows an example using the IMAQ Configure
Trigger2 VI. You must call IMAQ Configure Trigger2 before calling an
acquisition VI, such as IMAQ Snap and IMAQ Grab. The Trigger line input
specifies which external or RTSI trigger receives the incoming trigger
signal. Each trigger line has a programmable polarity that is specified with
Trigger polarity. Frame timeout specifies the amount of time to wait for
the trigger.
Figure 3-30. Triggering a Frame Grabber Acquisition
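The role of the frame timeout can be sketched as a polling loop that either sees the trigger assert or gives up (a conceptual illustration only; real triggering happens in hardware, and `trigger_seen` is a made-up stand-in for the trigger line):

```python
def wait_for_trigger(trigger_seen, frame_timeout_ms, poll_ms=10):
    """Poll a trigger line until it asserts or frame_timeout_ms elapses."""
    waited = 0
    while waited < frame_timeout_ms:
        if trigger_seen():
            return True          # trigger arrived: acquisition can proceed
        waited += poll_ms        # simulated wait; real code would sleep
    return False                 # timeout: no trigger arrived in time

# A trigger that asserts on the third poll:
polls = iter([False, False, True])
got_trigger = wait_for_trigger(lambda: next(polls), frame_timeout_ms=100)
```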
Some IIDC-compliant IEEE 1394 cameras and GigE Vision cameras
support hardware triggering. In typical hardware triggered systems, a
proximity sensor or an encoder sends pulses to the camera to trigger an
acquisition. The DCAM specification defines the different triggering modes
and attributes for IEEE 1394 cameras. The GenICam standard defines
different triggering modes and attributes for some GigE Vision cameras. To
set and access the trigger modes and attributes of IEEE 1394 and GigE
Vision cameras, use IMAQdx property nodes or the Vision Acquisition
Express VI.
4
Processing Images
In previous lessons, you learned how to acquire and display images. In this
lesson, you will learn about NI Vision VIs and how to process images using
NI Vision Assistant.
NI Vision helps you reduce both the cost and time-to-market of your
imaging applications. Transparent memory management and high-level
functions built to work together let you develop your application more
quickly, and logically named VIs and parameters make NI Vision easy to
use.
NI Vision Assistant is a tool for prototyping and testing image processing
algorithms or applications. After prototyping and testing, you can use the
LabVIEW VI Creation Wizard to create a block diagram of the algorithm,
and then use the NI Vision machine vision and image processing libraries to
implement the solution in LabVIEW. The Vision Assistant Express VI can
also be used to generate LabVIEW code after prototyping various image
processing steps.
Topics
A. NI Vision VIs
B. Prototyping Applications with NI Vision Assistant
A. NI Vision VIs
NI Vision VIs are divided into three categories: Vision Utilities, Image
Processing, and Machine Vision. The remainder of this course discusses
examples of these function types in more detail.
Vision Utilities VIs
The Vision Utilities VIs allow you to create and manipulate images to suit
the needs of your application. This category includes VIs for image
management and manipulation, file management, calibration, and region of
interest selection.
Image Processing VIs
You can use NI Vision Image Processing VIs to analyze, filter, and process
your images according to the needs of your application. This category
includes VIs for analysis, grayscale and binary image processing, color
processing, frequency processing, filtering, morphology, and operations.
Machine Vision VIs
Use the Machine Vision VIs to perform common machine vision inspection
tasks, including checking for the presence or absence of parts in an image
and measuring the dimensions of parts to see if they meet specifications.
Some examples of Machine Vision VIs are caliper VIs and coordinate
system VIs.
For This Course
Most of the VIs that you use in this course fall into these function categories:
edge detection, gauging, pattern matching, particle analysis, and statistics.
Edge Detection
Edge detection VIs find the edges of objects along any line that you select
in your image. Use the edge positions for alignment and gauging operations.
NI Vision edge detection VIs give you sub-pixel accuracy so that you can
accurately detect boundaries of an object in your image. Refer to Lesson 6,
Measuring Features, for more information about common edge detection
functions.
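The core idea, finding intensity transitions along a selected line, can be sketched with a simple gradient threshold (a much-simplified illustration; the NI Vision VIs add filtering and sub-pixel interpolation, which this sketch omits):

```python
def find_edges(profile, threshold):
    """Return indices where the intensity step between neighboring pixels
    along a line profile meets or exceeds threshold."""
    edges = []
    for i in range(1, len(profile)):
        if abs(profile[i] - profile[i - 1]) >= threshold:
            edges.append(i)
    return edges

# Dark object (20) on a bright background (200): two edges expected.
profile = [200, 200, 20, 20, 20, 200, 200]
edges = find_edges(profile, threshold=100)
```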
Gauging
Gauging functions automatically measure distances and angles between the
edges of an object. You can specify markers on an object and calculate
measurements between those markers; find the position, angle, and
sharpness of an edge; or measure critical distances to test against a
user-defined tolerance. NI Vision gauging VIs are ideal for testing
manufactured items for flaws. Refer to Lesson 6, Measuring Features, for
more information about gauging functions.
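A gauging measurement reduces to geometry on edge positions plus a tolerance test. A minimal sketch (illustrative only; marker coordinates and the nominal value are made up):

```python
import math

def gauge_distance(p1, p2, nominal, tolerance):
    """Distance between two edge markers plus a pass/fail check
    against a user-defined tolerance."""
    distance = math.hypot(p2[0] - p1[0], p2[1] - p1[1])
    return distance, abs(distance - nominal) <= tolerance

distance, ok = gauge_distance((10, 10), (13, 14), nominal=5.0, tolerance=0.1)
```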
Pattern Matching
Pattern matching VIs give you information about the presence or absence,
number, and location of template matches based on a template image that
you define offline. Pattern matching VIs locate patterns quickly and are
resistant to changes in uniform lighting, focus, part shifting, and part
rotation. Refer to Lesson 7, Using Machine Vision Techniques, for more
information about pattern matching concepts.
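At its simplest, pattern matching is an exhaustive search for the template's best-scoring location. The sketch below scores raw absolute differences for brevity; production engines such as NI Vision use normalized correlation and other techniques to gain the lighting and rotation robustness described above, so treat this as an illustration only:

```python
def find_template(image, template):
    """Exhaustive search for the best match; returns the top-left (x, y)."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best, best_pos = None, None
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            # Sum of absolute differences: lower is a better match.
            score = sum(abs(image[y + j][x + i] - template[j][i])
                        for j in range(th) for i in range(tw))
            if best is None or score < best:
                best, best_pos = score, (x, y)
    return best_pos

image = [[0, 0, 0, 0],
         [0, 9, 8, 0],
         [0, 7, 9, 0],
         [0, 0, 0, 0]]
template = [[9, 8],
            [7, 9]]
pos = find_template(image, template)
```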
Particle Analysis
Use particle analysis functions to determine the features of binary particles,
including orientation, area, perimeter, and the x- and y-coordinates of the
center of mass. You can use these VIs to identify parts in a sorting
application or ensure that a manufactured part meets quality standards.
Particle analysis functions also include morphology functions, which allow
you to extract and alter the structure of particles to make your analysis more
efficient. Refer to Lesson 8, Processing Binary Images, for more
information about particle analysis functions.
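Two of the particle measurements named above, area and center of mass, can be sketched directly on a binary image (a minimal illustration; NI Vision computes many more features):

```python
def particle_stats(binary):
    """Area and center of mass of the foreground pixels in a binary image."""
    coords = [(x, y) for y, row in enumerate(binary)
                     for x, value in enumerate(row) if value]
    area = len(coords)
    cx = sum(x for x, _ in coords) / area   # x center of mass
    cy = sum(y for _, y in coords) / area   # y center of mass
    return area, (cx, cy)

binary = [[0, 1, 1],
          [0, 1, 1],
          [0, 0, 0]]
area, center = particle_stats(binary)
```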
Statistics
Statistical image processing functions give you a mathematical
representation of an image with values, such as the histogram of the pixel
intensities or the average and standard deviation of grayscale values, so
that you can judge the quality of your lighting and focus.
Statistical functions provide the line profile and histogram of your image.
Refer to Lesson 8, Processing Binary Images, for more information about
basic statistical functions.
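The measurements just described, histogram, mean, and standard deviation, can be sketched for a list of grayscale pixel values (illustrative only; the bin count is arbitrary):

```python
import statistics

def image_stats(pixels, num_bins=4, max_value=255):
    """Histogram, mean, and population standard deviation of pixel values."""
    histogram = [0] * num_bins
    bin_width = (max_value + 1) / num_bins
    for p in pixels:
        histogram[min(int(p / bin_width), num_bins - 1)] += 1
    return histogram, statistics.mean(pixels), statistics.pstdev(pixels)

hist, mean, std = image_stats([0, 64, 128, 192])
```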
B. Prototyping Applications with NI Vision Assistant
NI Vision Assistant allows even the first-time vision developer to learn
image processing techniques and test inspection strategies. In addition,
more experienced developers can explore vision algorithms more quickly
and with less programming.
The following features make Vision Assistant an invaluable tool for
prototyping your vision applications:
Acquire, display, and process images
Save images to and load images from a file
Create an image processing script
Process a batch of images
Create a LabVIEW VI from your script
Integrate your prototype into LabVIEW, LabWindows/CVI, Visual
C++, or Visual Basic
Test different processing strategies on a variety of images
Quickly and easily explore what-if conditions
Immediately see the result of each step in your script, without
programming
Benchmark the speed of each vision function
The Vision Assistant Express VI offers the same ability to prototype and test
the NI Vision functions from within LabVIEW. It has an interface similar to
the standalone Vision Assistant, but can be accessed entirely from within the
LabVIEW development environment. The Vision Assistant Express VI
provides the standard dialog-based, express VI interface, making it easy to
reconfigure processing steps.
Measuring a Bracket
Consider a typical manufacturing application: measuring the dimensions
of a bracket. You are looking for two values: the distance between the two
holes in the bracket and the angle between the holes through the center of
the bracket.
How could you use each of the five image processing techniques discussed
in this lesson to solve this problem? What features of the image make it
unique? What features can you locate and measure? Exercise 4-1 takes you
through different NI Vision functions to answer these questions.
Exercise 4-1 Concept: Measure a Bracket
Goal
Open a solution and familiarize yourself with the NI Vision Assistant
environment, then see how a Vision Assistant script can be turned into a
LabVIEW VI.
Implementation
1. Launch Vision Assistant 8.5 and open the Bracket Inspection solution.
Select Start»All Programs»National Instruments»Vision
Assistant 8.5»Vision Assistant 8.5.
Click Solution Wizard.
Figure 4-1. NI Vision Assistant Welcome Screen
2. Select Tutorial»Bracket Inspection and then click Load Solution.
Figure 4-2. NI Vision Assistant Solution Wizard
3. On the bottom of the screen, there are steps recorded in a script. The
steps comprise an image-processing algorithm created with Vision
Assistant. Examine how each step is configured.
4. Double-click the step in the script called Pattern Matching 1. This step
looks for the predefined template pattern, which you can view in the
Template tab. Notice that the results of the search are shown in the table
in the middle of the window, and the location of the pattern is
highlighted by a red square.
Figure 4-3. Pattern Matching Step
5. Click OK.
6. Double-click the Pattern Matching 2 step. Notice that the step matched
the same pattern on the right side of the bracket.
7. Click OK.
8. Double-click the Edge Detector 1 step. Notice the vertical line drawn
through the center of the bracket. The step located the top and bottom
edges of the bracket. In a moment, you will use these edges to find the
center of the bracket.
Figure 4-4. Edge Detector Step
9. Click OK.
10. Double-click the Caliper 1 step. This step finds the midpoint of the
two edges that the Edge Detector 1 step found.
Note A point labeled Caliper 1 has been added at the center of the bracket. The table
displays the coordinates of Caliper 1.
11. Click OK.
12. Double-click the Caliper 2 step. This step determines the angle by
which the bracket is bent between point 1, the center, and point 2.
Figure 4-5. Caliper Step
13. Click OK when you are finished.
14. Close Vision Assistant.
End of Exercise 4-1
Exercise 4-2 Metal Particle Analysis
Goal
Perform metal particle image analysis on a folder of images.
Description
Create a LabVIEW VI that uses the Vision Assistant Express VI with the
Vision Acquisition Express VI to perform image analysis on a folder of
images.
Implementation
1. Acquire images from a folder on the computer hard disk.
In LabVIEW, open a blank VI.
Save the VI as Metal Particle Analysis.vi in the
<Exercises>\LabVIEW Machine Vision directory.
Place a Vision Acquisition Express VI (Vision and Motion»Vision
Express»Vision Acquisition) on the block diagram.
In the NI Vision Acquisition Express configuration window, select
Simulated Acquisition»Folder of Images in the left-hand pane.
Click Next.
Select Finite Acquisition with inline processing for the acquisition
type.
Set Number of Images to Acquire to 4.
Click Next.
Click the browse button next to the Image Path textbox.
Navigate to <Program Files>\National Instruments\
Vision Assistant 8.5\Examples\metal\Metal1.jpg and
click OK.
Enable the Cycle through Folder of Images checkbox.
Click Test to test the acquisition.
Click Finish to finish building the express VI.
2. Configure the Vision Assistant Express VI to process the images.
Place a Vision Assistant Express VI (Vision and Motion»Vision
Express»Vision Assistant) inside the For Loop.
In the NI Vision Assistant configuration window, select Help»
Solution Wizard.
Select Tutorial»Metal Particle Analysis and click Load Solution.
If the Vision Assistant prompts you to remove previously acquired
images, select Yes.
In the Script section of the NI Vision Assistant configuration
window, double-click the Particle Analysis 1 step at the end of the
script. Notice this step displays the number of objects.
Click Select Measurements.
Briefly examine the list of selectable measurements and click OK.
Click OK in the Particle Analysis Setup section.
Click Select Controls.
In the Indicators section, enable the Number of Particles checkbox
and the Particle Measurements (Pixels) checkbox. This will create
the output terminals on the Express VI so that these measurements
can be displayed to the user or passed on to subsequent steps.
Enable the Create Destination Image checkbox. This will allow the
Vision Assistant Express VI to automatically create a memory
location for the image as it is being modified, so that the original
image will remain in memory.
Click Finish.
3. Finish building and wiring the block diagram as shown in Figure 4-6.
Figure 4-6. Metal Particle Analysis VI Block Diagram
Right-click the Number of Particles output of the Vision Assistant
Express VI and select Create»Indicator.
Right-click the Particle Measurements output of the Vision Assistant
Express VI and select Create»Indicator.
Place a Wait (ms) VI inside the For Loop.
Right-click the milliseconds to wait input of the Wait (ms) VI and
select Create»Constant. Set the numeric constant to 2000.
Place a second For Loop to the right of the existing For Loop.
Right-click the border of the second For Loop and select Add Shift
Register.
Place an IMAQ Dispose VI inside the second For Loop.
Place a Simple Error Handler VI to the right of the second For Loop.
Wire the All Images Out output of the Vision Acquisition Express
VI to the border of the first For Loop. Right-click the tunnel created
and select Disable Indexing.
Wire the error out output of the Vision Assistant Express VI to the
border of the first For Loop. Right-click the tunnel created and select
Disable Indexing.
Wire the block diagram shown in Figure 4-6.
4. Test the VI.
Go to the front panel.
Run the VI. When the VI runs, it should cycle through the images
and display the number of particles found in each image and the
selected measurements for each particle.
5. Close the VI when finished.
End of Exercise 4-2
5
Enhancing Acquired Images
In this lesson, you will learn about calibration and filtering. Spatial
calibration is the process of computing pixel to real-world unit
transformations while accounting for errors inherent to the imaging setup.
Calibrating your imaging setup is important when you need to make
accurate measurements in real-world units.
Spatial filters serve a variety of purposes, such as detecting edges along a
specific direction, contouring patterns, reducing noise, and detail outlining
or smoothing. Filters smooth, sharpen, transform, and remove noise from an
image so that you can extract the information you need.
Topics
A. Using Spatial Calibration
B. Calibrating Images with NI Vision
C. Calibrating Your Imaging Setup
D. Using Spatial Filters
Lesson 5 Enhancing Acquired Images
LabVIEW Machine Vision and Image Processing 5-2 ni.com
A. Using Spatial Calibration
An image contains information in the form of pixels. Spatial calibration
allows you to translate a measurement from pixel units into physical units.
This conversion can be a simple linear conversion between pixels and
real-world units. For example, if the pixel to inch ratio is 1:1, a length
measurement of ten pixels is equivalent to ten inches.
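The linear case can be sketched in a few lines of Python (an illustrative sketch, not an NI Vision call; the scale factor is an assumed property of your imaging setup):

```python
# Minimal sketch of a linear pixel-to-unit conversion. The ratio
# inches_per_pixel is an assumed calibration constant, determined
# once for a given camera, lens, and working distance.

def pixels_to_inches(length_px, inches_per_pixel=1.0):
    """Convert a pixel-space length to inches via a linear scale."""
    return length_px * inches_per_pixel

# With a 1:1 ratio, a 10-pixel measurement is 10 inches.
print(pixels_to_inches(10))          # 10.0
# With finer optics (0.02 inches per pixel), the same 10 pixels
# correspond to a much smaller physical distance.
print(pixels_to_inches(10, 0.02))    # 0.2
```

This only holds when the camera axis is perpendicular to the object plane; the sections below cover what to do when it is not.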
However, this conversion may be nonlinear because of perspective errors
and lens distortion. In Figure 5-1a, the camera is in the ideal position:
perpendicular to the image plane. If the camera is not perpendicular to the
image plane, as shown in Figure 5-1b, the image results can have
perspective errors and lens distortion errors.
Figure 5-1. Reasons for Calibrating Images
Perspective errors and lens errors cause images to appear distorted. This
distortion misplaces information in an image, but it does not necessarily
destroy the information in the image. Calibration accounts for possible
errors by constructing mappings that you can use to convert between pixel
and real-world units. You can also use the calibration information to correct
perspective errors and nonlinear distortion errors in image displays and
shape measurements.
Use the NI Vision calibration tools to perform the following operations:
Calibrate your imaging setup automatically by learning a standard
pattern (calibration template) or by providing reference points. A
calibration template is a user-defined grid of circular dots.
Apply a learned calibration mapping to correct an acquired image.
1 Lens Distortion   2 Perspective Error   3 Known Orientation Offset
Assign an arbitrary coordinate system to measure positions in real-world
units.
Convert measurements (lengths, widths, areas) from real-world units to
pixel units and back.
B. Calibrating Images with NI Vision
You can use NI Vision VIs to convert pixel coordinates to real-world
coordinates in a calibrated image. In addition, you can transform a distorted
image into an image in which distortions are corrected. NI Vision also
allows you to save and load calibrated images for processing.
NI Vision has two types of image calibration: perspective calibration and
nonlinear calibration. Perspective calibration corrects for perspective errors
and nonlinear calibration corrects for perspective errors and nonlinear
distortion.
Figure 5-2 illustrates the types of errors your image can exhibit. Figure 5-2a
shows a grid of dots with no errors. Figure 5-2b illustrates perspective errors
caused by a camera imaging the grid from an angle. Figure 5-2c illustrates the
effect of lens distortion on the grid of dots. A typical camera lens introduces
radial distortion, which causes points that are away from the lens's optical
center to appear further away from the center than they really are.
Figure 5-2. Perspective and Distortion Errors
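The radial distortion just described can be sketched numerically. The first-order model below is a common textbook approximation (the coefficient k1 and the optical center are assumed values, not NI Vision parameters):

```python
# Sketch of first-order radial lens distortion: a point at radius r
# from the optical center (cx, cy) is displaced outward by a factor
# of (1 + k1 * r^2), so points far from the center move the most.

def radial_distort(x, y, k1, cx=0.0, cy=0.0):
    """Apply a first-order radial distortion model to one point."""
    dx, dy = x - cx, y - cy
    r2 = dx * dx + dy * dy
    scale = 1.0 + k1 * r2
    return cx + dx * scale, cy + dy * scale

# A point near the center barely moves...
print(radial_distort(1.0, 0.0, k1=1e-2))
# ...while a point ten times farther out is pushed roughly twice as
# far from the center as it really is.
print(radial_distort(10.0, 0.0, k1=1e-2))
```

This is why a straight line through the edge of a distorted image appears curved: each point on it is displaced by a different amount.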
Use perspective calibration when your system exhibits perspective errors
only. Use nonlinear calibration when your system exhibits nonlinear lens
distortion. If your system exhibits perspective errors and nonlinear
distortion, use nonlinear calibration to correct for both. Applying
perspective calibration is less computationally intensive than nonlinear
calibration. However, perspective calibration is not designed to handle
highly nonlinear distortions.
a. No Distortion   b. Perspective Projection   c. Nonlinear Distortion
Perspective calibration computes one pixel to real-world mapping for the
entire image. You can use the mapping to convert the coordinates of any
pixel in the image to real-world units.
Nonlinear calibration computes pixel to real-world mappings in a
rectangular region centered around each dot in the calibration template.
NI Vision estimates the mapping information around each dot based on its
neighboring dots. You can convert pixel units to real-world units within the
area covered by the grid dots. Because NI Vision computes the mappings
around each dot, only the area in the image covered by the grid dots is
calibrated accurately.
C. Calibrating Your Imaging Setup
The following general steps explain how to calibrate your imaging setup
using a calibration template.
1. Create a calibration template appropriate for your field of view.
Note National Instruments provides a calibration template that you can use to calibrate
your image. However, this template may not be appropriate for all applications. Consider
the size of your object under inspection, as well as whether or not you need a calibration
template that has a certificate of accuracy. You can purchase highly accurate calibration
templates from optics suppliers, such as Edmund Optics.
2. Acquire an image of the calibration template using your current imaging
setup.
3. Enter the acquired image, the distances between the dots on the
calibration template, and the location and orientation of the coordinate
system to the IMAQ Learn Calibration Template VI. This VI produces
a calibrated image.
4. Acquire an image of the object of interest without the calibration
template.
5. Apply the calibration information to the acquired image by copying it
from the calibrated image. The IMAQ Set Calibration Info VI provides
the new image with the calibration transform equations.
6. Apply the calibration information to the pixel measurements using one
of these three methods:
Use the IMAQ Convert Pixel to Real World VI to correct individual
pixels for distance or edge locations.
Use the IMAQ Particle Analysis VI to return real-world
measurements on the calibrated image.
Use the IMAQ Correct Calibrated Image VI to correct the calibrated
image by applying a calibration template. This produces a spatially
correct image that you can use for particle or area analysis.
You have the option of generating an error map. An error map returns an
estimate of the worst-case error when a pixel coordinate is transformed into
a real-world coordinate.
Use the calibration information obtained from the calibration process to
convert any pixel coordinate to its real-world coordinate and back.
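For perspective errors, the pixel-to-real-world mapping is a projective (homography) transform. The sketch below shows the conversion in both directions, assuming the 3×3 matrix H has already been estimated from calibration-grid dot pairs (the matrix values here are made up for illustration; NI Vision computes and stores this mapping for you):

```python
import numpy as np

# Assumed homography mapping pixel coordinates to real-world units.
# In practice this matrix comes from the calibration step.
H = np.array([[0.1, 0.0, 5.0],
              [0.0, 0.1, 2.0],
              [0.0, 0.0, 1.0]])

def pixel_to_world(u, v, H):
    """Apply H in homogeneous coordinates, then de-homogenize."""
    x, y, w = H @ np.array([u, v, 1.0])
    return x / w, y / w

def world_to_pixel(x, y, H):
    """Invert H for the reverse mapping (world back to pixels)."""
    u, v, w = np.linalg.inv(H) @ np.array([x, y, 1.0])
    return u / w, v / w

pt = pixel_to_world(100, 50, H)   # roughly (15.0, 7.0) in world units
print(pt)
print(world_to_pixel(*pt, H))     # back to roughly (100.0, 50.0)
```

Nonlinear calibration generalizes this idea by computing many such local mappings, one per neighborhood of grid dots, instead of a single matrix for the whole image.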
Common Calibration Misconceptions
You cannot calibrate images under poor lighting or insufficient resolution
conditions. Also, calibration does not affect image accuracy, which is
subject to your camera and lens selections. The following are some common
calibration misconceptions:
Calibration fixes any measurement to an arbitrary accuracy.
Calibrated images always need to be corrected.
Calibration can compensate for poor lighting or unstable conditions.
Exercise 5-1 Calibration and Perspective Correction
Goal
Use NI Vision calibration and correction tools to solve a perspective or lens
distortion problem.
Scenario
Many machine vision applications are completely useless if they cannot
report information in real-world units. NI Vision calibration functions can
calibrate pixel separation in your images to a real-world distance.
Lens distortion and perspective distortion are also common problems found
in image acquisition. If careful consideration is not taken, measurement
accuracy will vary according to the location of the object in your image. NI
Vision calibration functions can account for distortion factors and correction
functions can adjust the image accordingly.
Description
In this exercise, you will create a script in Vision Assistant to correct lens
distortion and examine an example program to observe the perspective
calibration process in LabVIEW.
Implementation
Complete both parts of this exercise.
Correcting Lens Distortion using the Vision Assistant Express VI
1. Open a blank VI.
2. Save the VI as Lens Distortion Calibration.vi in the
<Exercises>\LabVIEW Machine Vision\Calibration
directory.
3. Acquire an image.
Place the Vision Acquisition Express VI (Vision and Motion»
Vision Express»Vision Acquisition) on the block diagram.
In the NI Vision Acquisition Express configuration window, select
Simulated Acquisition»Folder of Images in the left-hand pane.
Click Next.
Select Single Acquisition with processing for the acquisition type.
Click Next.
Click the browse button next to the Image Path textbox.
Navigate to <Exercises>\LabVIEW Machine Vision\
Calibration and Perspective Correction\ELP mug.png
and click OK.
Click Test to test the acquisition.
Click Finish to finish building the express VI.
On the front panel, right-click the Image Display indicator and select
Snapshot.
4. Use the ELP cal template grid to calibrate the image to account for
nonlinear lens distortion.
Place a Vision Assistant Express VI (Vision and Motion»Vision
Express»Vision Assistant) on the block diagram.
In the NI Vision Assistant configuration window, select File»Open
Image.
Browse to <Exercises>\LabVIEW Machine Vision\
Calibration and Perspective Correction, open the file
ELP mug.png and click OK. If the Vision Assistant prompts you to
remove previously acquired images, select Yes.
Figure 5-3. Vision Assistant Express VI Configuration Window
Select Processing Functions: Image»Image Calibration. This
opens the Choose a calibration type window.
Select Grid Calibration and click OK. The Grid Calibration
Setup window opens.
Click Open Image and double-click the file
ELP cal template.png in the <Exercises>\LabVIEW
Machine Vision\Calibration and Perspective
Correction directory.
Click the Zoom Out button in order to see the entire image.
Select Nonlinear for the Distortion type.
Click Next.
Enter 0 for the Threshold Range Min and enter 110 for the Max.
This setting allows the algorithm to find most of the grid dots
without letting noise particles through.
Click Next.
Enter 0.375 for the X-Spacing and enter 0.375 for the Y-Spacing.
Set Unit to centimeter.
Click Next.
In the Axis Origin parameter, enter 0 for X and enter 0 for Y.
Set the Axis Reference to Indirect, as shown in Figure 5-4a.
The calibration procedure automatically determines the direction of
the horizontal axis. The vertical axis direction can either be indirect
or direct as shown in Figure 5-4.
Figure 5-4. Axis Direction
Figure 5-5. Grid Calibration Setup
a. Indirect   b. Direct
Click OK.
Save the calibrated image as ELP Calibrated.png in the
<Exercises>\LabVIEW Machine Vision\
Calibration and Perspective Correction directory and
click OK.
Click OK in the Image Calibration Setup section.
Note Although the image perspective has not been corrected, the image is
fully calibrated at this point to accommodate lens distortion. You can take measurements
in real-world units, and the results will be spatially correct.
5. Correct the image perspective. The text in the image will appear without
curvature.
Select Processing Functions: Image»Image Correction.
Click OK in the Image Correction Setup section.
Click Finish in the NI Vision Assistant window.
Figure 5-6. Calibrated Image
6. Finish building the block diagram shown in Figure 5-7.
Figure 5-7. Lens Distortion Calibration VI Block Diagram
7. Add image management and error handling to the VI.
Place a Flat Sequence Structure (Programming»Structures»Flat
Sequence Structure) around everything on the block diagram.
Right-click the Flat Sequence Structure and select Add Frame
After.
Place an IMAQ Dispose VI (Vision and Motion»Vision Utilities»
Image Management»Image Dispose) and Simple Error Handler
VI (Programming»Dialog & User Interface»Simple Error
Handler) in the second frame of the Flat Sequence Structure.
Wire the VI as shown in Figure 5-8.
Figure 5-8. Lens Distortion Calibration VI with Sequence Structure
8. Save the VI.
Testing
1. Test the VI.
Go to the front panel.
Run the VI. You should see the corrected image in the image display.
2. Examine the code generated by the Vision Assistant Express VI.
Go to the block diagram.
Right-click the Vision Assistant Express VI and select Open Front
Panel.
Click Convert when prompted to convert to a subVI.
View the code generated by the Vision Assistant Express VI.
Note The IMAQ Read Image and Vision Info VI reads an image file, including any
extra vision information saved with the image. This includes calibration information. The
IMAQ Set Calibration Info VI sets calibration information from the calibrated image
to an uncalibrated image. The IMAQ Correct Calibrated Image VI corrects a
calibrated image by applying a calibration to create a spatially correct image.
Close the subVI when finished. Click Defer Decision.
3. Close the VI. Do not save changes.
End of Exercise 5-1
D. Using Spatial Filters
Spatial filters alter pixel values with respect to variations in light intensity in
their neighborhood. The neighborhood of a pixel is defined by the size of a
matrix, or mask, centered on the pixel itself. These filters can be sensitive to
the presence or absence of light-intensity variations.
Filters are divided into two types: linear (also called convolution) and
nonlinear. A linear filter replaces each pixel by a weighted sum of its
neighbors. The matrix defining the neighborhood of the pixel also specifies
the weight assigned to each neighbor. This matrix is called the convolution
kernel. A nonlinear filter replaces each pixel value with a nonlinear function
of its surrounding pixels. Like the linear filters, the nonlinear filters operate
on a neighborhood.
Linear and nonlinear filters are divided into two categories:
Highpass filters: Emphasize significant variations of the light intensity
usually found at the boundary of objects. Highpass frequency filters help
isolate abruptly varying patterns that correspond to sharp edges, details,
and noise.
Lowpass filters: Attenuate variations of the light intensity. Lowpass
frequency filters help emphasize gradually varying patterns such as
objects and the background. They have the tendency to smooth images
by eliminating details and blurring edges.
Table 5-1. Spatial Filter Types

                Lowpass                  Highpass
    Linear      Gaussian, Smoothing      Gradient, Laplacian
    Nonlinear   Lowpass, Median,         Differentiation, Gradient, Prewitt,
                Nth Order                Roberts, Sigma, Sobel
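The linear/nonlinear split in the table is easiest to see on a toy signal. The sketch below (plain Python with NumPy, not NI Vision code) compares a linear 3-point average against a nonlinear 3-point median on data containing one noise spike:

```python
import numpy as np

# One impulsive noise spike in an otherwise flat signal.
signal = np.array([10.0, 10.0, 200.0, 10.0, 10.0])

def mean3(s):
    """Linear lowpass: 3-point moving average (edge samples kept as-is)."""
    out = s.copy()
    out[1:-1] = (s[:-2] + s[1:-1] + s[2:]) / 3.0
    return out

def median3(s):
    """Nonlinear filter: 3-point moving median (edge samples kept as-is)."""
    out = s.copy()
    out[1:-1] = np.median(np.stack([s[:-2], s[1:-1], s[2:]]), axis=0)
    return out

print(mean3(signal))    # the spike is smeared across its neighbors
print(median3(signal))  # the spike is rejected outright
```

The average is a weighted sum of the neighborhood (a convolution), so the spike leaks into adjacent samples; the median is a nonlinear function of the neighborhood, so the outlier is simply discarded. The same trade-off applies to 2D image filtering.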
Convolution Kernels
A convolution kernel defines a 2D filter that you can apply to a grayscale
image. A convolution kernel is a 2D structure whose coefficients define the
characteristics of the convolution filter that it represents. In a typical
filtering operation, the coefficients of the convolution kernel determine the
filtered value of each pixel in the image. NI Vision provides a set of
convolution kernels that you can use to perform different types of filtering
operations on an image. You can also define your own convolution kernels,
thus creating custom filters.
Refer to Chapter 5, Image Processing, of the NI Vision Concepts Manual for
more information about filtering and convolution kernels.
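To make the role of the kernel coefficients concrete, here is a minimal 2D convolution sketch with a user-defined kernel (an illustrative implementation with an assumed Laplacian-style kernel; NI Vision's built-in kernels and edge handling may differ):

```python
import numpy as np

def convolve2d(img, kernel):
    """Slide the kernel over a zero-padded image; same-size output.
    Each output pixel is the weighted sum of its neighborhood."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

# A classic highpass (Laplacian-style) kernel: the coefficients sum
# to zero, so regions of constant intensity map to zero response.
laplacian = np.array([[ 0, -1,  0],
                      [-1,  4, -1],
                      [ 0, -1,  0]], dtype=float)

flat = np.full((5, 5), 7.0)
# Interior response is all zeros: a flat image has no detail to highlight.
print(convolve2d(flat, laplacian)[1:-1, 1:-1])
```

Swapping in a kernel of all positive coefficients that sum to one would instead give a lowpass (smoothing) filter; the kernel alone determines the filter's character.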
Exercise 5-2 Concept: Using Filters
Goal
Use filters to manipulate an image.
Scenario
Some images require filtering before they can be analyzed or displayed.
NI Vision provides multiple filters.
Design
In this exercise, you will acquire an image and use the Vision Assistant
express VI to apply smoothing and sharpening filters.
Flowchart
Figure 5-9. Flowchart of Using Filters VI
Snap an Image -> Display the Original Image -> Apply Filters (Generate Code
Using Vision Assistant) -> Display the Filtered Image -> Dispose of the Images
Implementation
1. Open Snap and Display.vi (or Snap and Display 1394.vi),
located in the <Exercises>\LabVIEW Machine Vision directory.
Figure 5-10. Snap and Display VI Block Diagram
2. Save the VI as Using Filters.vi in the <Exercises>\LabVIEW
Machine Vision directory.
3. Apply a Smoothing Filter to your image.
Place a Vision Assistant Express VI (Vision and Motion»Vision
Express»Vision Assistant) on the block diagram.
In the NI Vision Assistant configuration window, select File»Open
Image.
Navigate to the <Exercises>\LabVIEW Machine Vision
directory and select the file Acquired Image.jpg and click
Open. If the Vision Assistant prompts you to remove previously
acquired images, select Yes.
Note This image will not be used when the VI runs, but it will be displayed while
configuring the Vision Assistant Express VI so that the effects of the processing steps can
be visualized.
Select Processing Functions: Grayscale»Filters in the bottom left
window of Vision Assistant.
Select Smoothing Low Pass.
Increase the Filter Size to 5.
Click OK.
Figure 5-11. Applying a Smoothing Low Pass Filter
Tip You can double-click the Smoothing Low Pass step to edit the filter. Vary the size
of the filter to see the effect on the image.
4. Apply a Convolution Filter to your image to make details in the image
stand out.
Select Processing Functions: Grayscale»Filters in the bottom left
window of Vision Assistant.
Select Convolution Highlight Details from the list of filters.
Increase the Kernel Size to 5 × 5.
Click OK.
Click Select Controls.
Place a checkmark in the Convolution - Highlight Details 1
Kernel checkbox.
Click Finish.
Figure 5-12. Applying a Convolution Highlight Details Filter
5. Build the block diagram shown in Figure 5-13.
Figure 5-13. Block Diagram of Using Filters VI
Note Create the array constant by right-clicking the Kernel input of the Vision Assistant
Express VI and selecting Create»Constant. Display multiple elements of the array
constant by clicking and dragging the bottom right corner of the array constant.
6. Examine and run the VI.
Run the VI and view the result in the Image Display indicator on the
front panel.
Run the VI with the different array constant values to see the results.
Challenge
1. Modify the block diagram to display both the filtered and original
images on the front panel at the same time.
2. Save and close the VI.
End of Exercise 5-2
6
Measuring Features
In this lesson, you will learn about commonly used NI Vision Machine
Vision VIs. The NI Vision Machine Vision VIs provide a high-level
interface for constructing your inspection applications. These VIs provide
solutions instead of algorithms, an approach that requires less code while
maintaining a high degree of functionality.
Topics
A. NI Vision Machine Vision VIs
B. Regions of Interest
C. Nondestructive Overlays
D. Edge Detection
Lesson 6 Measuring Features
LabVIEW Machine Vision and Image Processing 6-2 ni.com
A. NI Vision Machine Vision VIs
You can use NI Vision Machine Vision VIs, which are built on common
lower-level VIs, to perform a wide range of functions, such as measuring
parts, counting features on parts, detecting temperature, and locating
distinct features. The provided source code allows you to access the VIs
directly and modify them to suit the specific needs of your application.
Machine Vision functions are useful in applications for the automotive,
telecommunications, pharmaceutical, and manufacturing test industries.
The Machine Vision VIs are located on the Vision and Motion»Machine
Vision palette. The VIs are divided into separate sub-palettes: Select Region
of Interest, Coordinate System, Count and Measure Objects, Measure
Intensities, Measure Distances, Locate Edges, Find Patterns, Searching and
Matching, Caliper, Analytic Geometry, Inspection, Classification, OCR,
and Instrument Readers.
B. Regions of Interest
Many machine vision applications are used to identify features and objects
within an image. You can use machine vision tools to identify these features,
but the tools are not aware of their surroundings.
Consider the image of a printed circuit board shown in Figure 6-1. If you
want to use a tool that finds circles in the image, the tool would try to find
every circle that meets the search criteria. But if you only want to find a
specific circle, or a circle found in a specific region of the image, you have
to tell the tool where to look.
Figure 6-1. Printed Circuit Board
A region of interest (ROI) is an area of an image in which you want to focus
your image analysis. All NI Machine Vision tools can be given an ROI to
fine-tune searches.
In the printed circuit board example, you can use a rectangular ROI to
specify where to find the circle you want. By specifying a limited area of the
image, extraneous results will not be returned because the tool will not
search in the unspecified areas.
NI Machine Vision tools process results more quickly when the tools
examine a smaller area of the image, as shown in Figure 6-2. In Figure 6-2a,
no ROI was specified, so the shape detection function found each matching
shape in the image. In Figure 6-2b, the shape detection function found only
the matching shape within the rectangular ROI, and the execution time was
faster.
Figure 6-2. Execution Times
You can define an ROI interactively by drawing it with the mouse, or
programmatically by using the output of previous functions. You can also
define a compound ROI that consists of multiple disconnected contours.
Tip When drawing an ROI with the mouse, you can define a compound ROI by pressing
the <Ctrl> key while selecting contours.
1 ROI
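The speedup an ROI gives is easy to see if you model a grayscale image as a row-major array: restricting processing to an ROI is just slicing out a sub-array before searching. The following Python sketch is illustrative only; `extract_roi` is a hypothetical helper, not an NI Vision function, though it mirrors what IMAQ Extract does with a rectangle.

```python
def extract_roi(image, left, top, width, height):
    """Return the sub-image inside a rectangular ROI.

    The image is modeled as a list of rows (row-major, like a grayscale
    bitmap); the ROI is given as (left, top, width, height) in pixels.
    """
    return [row[left:left + width] for row in image[top:top + height]]

# 6 x 8 test image: background 0 with one bright pixel at row 2, column 5.
image = [[0] * 8 for _ in range(6)]
image[2][5] = 255

roi = extract_roi(image, left=4, top=1, width=3, height=3)
# roi covers rows 1-3 and columns 4-6, so it contains the bright pixel;
# any search now touches 9 pixels instead of 48.
```

Because the ROI is smaller than the full image, every subsequent operation visits fewer pixels, which is exactly why the search in Figure 6-2b runs faster.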
Table 6-1 describes each of the NI Vision ROI tools and the manner in which
you use them.
Table 6-1. NI Vision ROI Tools
Selection Tool: Select an ROI in the image and adjust the position of its control points and contours. Action: Click the ROI or control points.
Point: Select a pixel in the image. Action: Click the position of the pixel.
Line: Draw a line in the image. Action: Click the initial position and click again at the final position.
Rectangle: Draw a rectangle or square in the image. Action: Click one corner and drag to the opposite corner.
Oval: Draw an oval or circle in the image. Action: Click the center position and drag to the required size.
Polygon: Draw a polygon in the image. Action: Click to place a new vertex and double-click to complete the ROI element.
Freehand Region: Draw a freehand region in the image. Action: Click the initial position, drag to the required shape, and release the mouse button to complete the shape.
Annulus: Draw an annulus in the image. Action: Click the center position and drag to the required size. Adjust the inner and outer radii, and adjust the start and end angle.
Zoom: Zoom in or zoom out in an image. Action: Click the image to zoom in. Hold down the <Shift> key and click to zoom out.
Pan: Pan around an image. Action: Click an initial position, drag to the required position, and release the mouse button to complete the pan.
Broken Line: Draw a broken line in the image. Action: Click to place a new vertex and double-click to complete the ROI element.
Freehand Line: Draw a freehand line in the image. Action: Click the initial position, drag to the required shape, and release the mouse button to complete the shape.
Rotated Rectangle: Draw a rotated rectangle in the image. Action: Click one corner and drag to the opposite corner to create the rectangle. Then click the lines inside the rectangle and drag to adjust the rotation angle.
Exercise 6-1 Concept: Region of Interest
Goal
Select a region of interest in an image and display it.
Scenario
Regions of interest (ROIs) are frequently used in machine vision
applications to specify which part of an image is being inspected. For
instance, it is inefficient and cumbersome to search an entire image for a part
if you already know a smaller region where the part will exist.
Design
You will enhance a VI you previously created. The code you will add is the
same regardless of whether you acquire images with a frame grabber or an
IEEE 1394 camera.
You will create an enhanced block diagram, similar to Figure 6-3.
Figure 6-3. Block Diagram of Extract Region of Interest VI
Implementation
1. Launch LabVIEW.
2. Open Snap and Display.vi (or Snap and Display 1394.vi),
located in the <Exercises>\LabVIEW Machine Vision directory.
Open the Snap and Display.vi block diagram.
3. Save the VI as Extract Region of Interest.vi in the
<Exercises>\LabVIEW Machine Vision directory.
4. Add the code to extract a region of interest and display on a second
Image Display.
Expand the first frame of the sequence.
Add the IMAQ Select Rectangle VI (Vision and Motion»Machine
Vision»Select Region of Interest»IMAQ Select Rectangle).
Add the IMAQ Extract VI (Vision and Motion»Vision Utilities»
Image Manipulation»IMAQ Extract).
Add an Unbundle by Name function (Programming»Cluster,
Class, & Variant»Unbundle by Name). Click and drag the bottom
edge of the Unbundle by Name function to display four elements.
Add a Build Array function (Programming»Array»Build Array).
You will use this to build the first four elements into an array for the
Optional Rectangle input.
Create a destination buffer for the IMAQ Extract VI. Add a new
IMAQ Create VI (Vision and Motion»Vision Utilities»Image
Management»IMAQ Create). Right-click the Image Name input
of the IMAQ Create VI and select Create»Constant. Set the string
constant to Selection.
Add a second IMAQ Dispose VI (Vision and Motion»Vision
Utilities»Image Management»IMAQ Dispose). This will clear the
second image buffer in this VI at the end of the program.
Add a second Image Display (Vision»Image Display) to the front
panel.
Arrange and wire the block diagram as shown in Figure 6-3.
5. Save the VI.
Testing
1. Run the VI.
Draw a region of interest in the WindDraw window that opens.
You must use the Rectangle tool to draw your ROI. The Rotated
Rectangle tool will not work in this exercise.
2. Click OK. Your ROI is displayed in the second Image Display indicator.
3. Close the VI when finished.
End of Exercise 6-1
C. Nondestructive Overlays
A nondestructive overlay allows you to annotate the display of an image with
useful information without actually modifying the image. You can overlay
text, lines, points, complex geometric shapes, and bitmaps on top of your
image without changing the underlying pixel values in your image; only the
display of the image is affected. Figure 6-4 shows how you can use the
overlay to depict the orientation of each particle in the image.
Figure 6-4. Nondestructive Overlay
Using Nondestructive Overlays
You can use nondestructive overlays for many purposes, such as the
following:
Highlighting the location in an image where objects have been detected
Adding quantitative or qualitative information to the displayed
image, such as the match score from a pattern matching function
Displaying ruler grids or alignment marks
Overlays do not affect the results of any analysis or processing
functions; they affect only the display. The overlay is associated with
an image, so there are no special overlay data types. You need only to add
the overlay to your image. NI Vision clears the overlay anytime you change
the size or orientation of the image because the overlay ceases to have
meaning. You can save overlays with images using the PNG file format.
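The key design point is that overlay primitives live beside the image rather than being burned into its pixels. The Python sketch below illustrates that separation; the `add_overlay` helper and its dictionary layout are hypothetical, not NI Vision data types.

```python
def add_overlay(overlays, kind, data, text=None):
    """Record an overlay primitive without touching the image pixels.

    Overlays are stored in a list beside the image and would be drawn
    only at display time, so analysis always sees the original data.
    """
    overlays.append({"kind": kind, "data": data, "text": text})

image = [[0] * 4 for _ in range(4)]       # pixel data stays untouched
before = [row[:] for row in image]        # snapshot for comparison

overlays = []
add_overlay(overlays, "line", ((0, 0), (3, 3)))
add_overlay(overlays, "text", (1, 1), text="Angle = 47")

# image == before still holds: annotating changed the overlay list,
# never the underlying pixel values.
```

This is why overlaying a match score or a ruler grid cannot bias a later measurement: the processing functions never read the overlay list.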
D. Edge Detection
Edge detection clearly identifies the boundaries or edges of an object. Edges
signify intensity discontinuities in an image. When the software locates the
edges in the image, you can use those edge locations to compute the
distances between them or other geometric features. Because edge detection
works on one-dimensional data only, it is a very quick way to find object
boundaries or other areas of significant intensity changes.
What is an Edge?
An edge is defined as a significant change in the grayscale values of adjacent
pixels in an image. In NI Vision, edge detection works on a one-dimensional
profile of pixel values along a search region. The one-dimensional search
region can be a line, the perimeter of a circle or ellipse, the boundary of a
rectangle or polygon, or a freehand region. The software analyzes the pixel
values along the profile to detect significant intensity changes.
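Conceptually, the first step of any such search is resampling the image into a one-dimensional profile along the search path. The Python sketch below uses simple nearest-neighbor sampling along a straight line; it is illustrative only, and NI Vision's own sampling may differ.

```python
def line_profile(image, x0, y0, x1, y1):
    """Sample grayscale values along the line from (x0, y0) to (x1, y1).

    Steps one pixel at a time along the longer axis and rounds the other
    coordinate (nearest-neighbor). Returns the 1-D profile that an edge
    detector would then scan for intensity discontinuities.
    """
    steps = max(abs(x1 - x0), abs(y1 - y0))
    profile = []
    for i in range(steps + 1):
        t = i / steps if steps else 0.0
        x = round(x0 + t * (x1 - x0))
        y = round(y0 + t * (y1 - y0))
        profile.append(image[y][x])
    return profile

# Dark-to-bright test image: columns 0-3 hold 10, columns 4-7 hold 200.
image = [[10] * 4 + [200] * 4 for _ in range(5)]
profile = line_profile(image, 0, 2, 7, 2)   # horizontal line across row 2
# profile == [10, 10, 10, 10, 200, 200, 200, 200]
```

The sharp jump from 10 to 200 in the profile is exactly the kind of significant intensity change the edge detectors below look for.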
Figure 6-5. Edge Detection with a Line Profile
You can specify intensity characteristics to determine which intensity
changes constitute an edge. These characteristics include the following:
Edge strength, which defines the minimum difference in the grayscale
values between the background and the edge.
Edge length, or the maximum distance in which the desired grayscale
difference between the edge and the background must occur.
Edge polarity, which signifies whether an edge is rising or falling.
Edge position, which locates the x- and y-coordinates of an edge in the
image.
You can use these values to programmatically define criteria for finding
edges under different imaging environments.
1 Search Lines   2 Edges
Edge Detection Methods
NI Vision provides two ways to perform edge detection: simple edge
detection and advanced edge detection. Both methods compute the edge
strength at each pixel along the one-dimensional profile. An edge occurs
when the edge strength is greater than a minimum strength. Additional
checks find the correct location of the edge. You can specify the minimum
strength by using the contrast parameter in the software.
Simple Edge Detection
The software uses the pixel value at any point along the pixel profile to define
the edge strength at that point. To locate an edge point, the software scans the
pixel profile pixel-by-pixel from beginning to end. A rising edge is detected
at the first point at which the pixel value is greater than a threshold value plus
a hysteresis value. Set this threshold value to define the minimum edge
strength required for qualifying edges. Use the hysteresis value to declare
different edge strengths for the rising and falling edges. When a rising edge
is detected, the software looks for a falling edge. A falling edge is detected
when the pixel value falls below the specified threshold value. This process
is repeated until the end of the pixel profile. The first edge along the profile
can be either a rising or falling edge. The simple edge detection method
works best when there is little noise in the image and there is a distinct
demarcation between the object and the background.
Figure 6-6 shows the simple edge model.
Figure 6-6. Simple Edge Detection
1 Grayscale Profile   2 Threshold Value   3 Hysteresis   4 Rising Edge Location   5 Falling Edge Location
(Axes: Gray Level Intensities vs. Pixels)
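The threshold-plus-hysteresis scan described above can be sketched in a few lines of Python. This is an illustrative stand-in, not the NI implementation; the threshold and hysteresis values are made up for the example.

```python
def simple_edges(profile, threshold, hysteresis=0):
    """Locate rising and falling edges along a 1-D pixel profile.

    A rising edge is reported at the first point whose value exceeds
    threshold + hysteresis; after that, a falling edge is reported where
    the value drops below threshold, and the search repeats to the end
    of the profile.
    """
    edges = []
    looking_for_rising = True
    for i, value in enumerate(profile):
        if looking_for_rising and value > threshold + hysteresis:
            edges.append(("rising", i))
            looking_for_rising = False
        elif not looking_for_rising and value < threshold:
            edges.append(("falling", i))
            looking_for_rising = True
    return edges

# A backlit part: dark background (~10) with two bright spans.
profile = [10, 12, 11, 180, 185, 182, 14, 12, 190, 15]
edges = simple_edges(profile, threshold=100, hysteresis=20)
# edges == [('rising', 3), ('falling', 6), ('rising', 8), ('falling', 9)]
```

Note how the hysteresis makes the rising criterion (above 120) stricter than the falling one (below 100), which keeps small fluctuations near the threshold from generating spurious edge pairs.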
Advanced Edge Detection
To compute the edge strength at a given point along the pixel profile, the
software averages pixels before and after the analyzed point. The averaged
pixels can be separated from the analyzed point by a gap of several pixels,
which you define by setting the steepness parameter. This number corresponds
to the expected transition region in the edge profile. Define the number of
pixels averaged on each side by setting the Filter Width parameter. The
software then computes the difference between the two averages to
determine the contrast. Filtering reduces the effects of noise along the profile.
If you expect the image to contain a lot of noise, use a large filter width.
Figure 6-7 shows the relationship between the parameters and the edge
profile.
Figure 6-7. Advanced Edge Detection
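This averaged-contrast computation can be sketched in Python as follows. The function names, default parameter values, and the convention that points too close to the profile ends simply score zero are all illustrative choices, not NI's.

```python
def edge_contrast(profile, i, filter_width, steepness):
    """Edge strength at index i: the difference between the averages of
    filter_width pixels taken on each side of the point, skipping a gap
    of steepness pixels (the expected transition region)."""
    lo = i - steepness - filter_width
    if lo < 0:
        return 0.0                      # not enough pixels before the point
    before = profile[lo:i - steepness]
    after = profile[i + steepness + 1:i + steepness + 1 + filter_width]
    if len(after) < filter_width:
        return 0.0                      # not enough pixels after the point
    return abs(sum(after) / filter_width - sum(before) / filter_width)

def find_edge(profile, min_contrast, filter_width=2, steepness=1):
    """Return (index, contrast) of the strongest edge, or (None, c)
    when no point reaches min_contrast."""
    best_i, best_c = None, 0.0
    for i in range(len(profile)):
        c = edge_contrast(profile, i, filter_width, steepness)
        if c > best_c:
            best_i, best_c = i, c
    if best_c < min_contrast:
        return None, best_c
    return best_i, best_c

# A noisy dark-to-bright transition around index 4-5:
index, contrast = find_edge([10, 14, 9, 12, 100, 188, 192, 185, 190],
                            min_contrast=50)
# index == 3, contrast == 178.0
```

Because each side is averaged over `filter_width` pixels, single noisy samples barely move the contrast, which is why a larger filter width helps on noisy images.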
Subpixel Accuracy
When the resolution of the image is high enough, most measurement
applications make accurate measurements using pixel accuracy only.
However, it is sometimes difficult to obtain the minimum image resolution
needed by a machine vision application because of the limits on the size of
the sensors available or because of price. In these cases, you need to find
edge positions with subpixel accuracy.
Subpixel analysis is a software method that estimates the pixel values that a
higher resolution imaging system would have provided. To compute the
location of an edge with subpixel precision, the edge detection software first
1 Pixels   2 Grayscale Values   3 Width   4 Steepness   5 Contrast   6 Edge Location
fits a higher-order interpolating function, such as a quadratic or cubic
function, to the pixel intensity data.
The interpolating function provides the edge detection algorithm with pixel
intensity values between the original pixel values. The software then uses the
intensity information to find the location of the edge with subpixel accuracy.
Figure 6-8 illustrates how a cubic spline function fits to a set of pixel values.
Using this fit, values at locations in between pixels are estimated. The edge
detection algorithms use these values to estimate the location of an edge
with subpixel accuracy.
Figure 6-8. Obtaining Subpixel Information Using Interpolation
With the imaging system components and software tools available today,
you can reliably estimate edge positions to within one-fourth of a pixel. However, the results
of the estimation depend heavily on the imaging setup, such as lighting
conditions and the camera lens. Before resorting to subpixel information, try
to improve the image resolution.
1 Known Pixel Value   2 Interpolating Function   3 Interpolated Value   4 Subpixel Location
(Axes: Gray Level vs. Pixels)
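As a simplified illustration of the idea, the sketch below refines the strongest-gradient pixel with a parabolic (quadratic) fit. NI Vision fits higher-order interpolating functions, but the principle of estimating a fractional offset between pixels is the same; everything here is an assumption-laden sketch, not the shipped algorithm.

```python
def subpixel_edge(profile):
    """Estimate an edge location with subpixel precision.

    Finds the pixel with the steepest gradient, then fits a parabola
    through the gradient magnitudes at that pixel and its two neighbors;
    the parabola's vertex gives the fractional offset (a quadratic
    stand-in for the higher-order interpolation described above).
    """
    grad = [profile[i + 1] - profile[i - 1]
            for i in range(1, len(profile) - 1)]
    k = max(range(len(grad)), key=lambda i: abs(grad[i]))
    i = k + 1                            # back to profile indexing
    if not 0 < k < len(grad) - 1:
        return float(i)                  # no neighbors to fit against
    g0, g1, g2 = abs(grad[k - 1]), abs(grad[k]), abs(grad[k + 1])
    denom = g0 - 2 * g1 + g2
    offset = 0.0 if denom == 0 else 0.5 * (g0 - g2) / denom
    return i + offset

# Symmetric ramp: the edge center falls exactly on pixel 3.
loc = subpixel_edge([0, 0, 10, 50, 90, 100, 100])   # -> 3.0
```

On an asymmetric ramp such as `[0, 0, 20, 70, 100, 100, 100]` the same fit returns roughly 2.67, pulling the estimate toward the side where the transition starts earlier.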
Edge Detectors
NI Vision includes many predefined edge detectors that are based
on a simple point-to-point edge detector. These edge detectors include a
horizontal/vertical rake, a concentric rake, and a spoke. You can use these
edge detectors to find edges along a specified search area.
Edge Detector VIs
IMAQ Find Horizontal Edge locates a horizontal edge in a search area.
Use this VI when you expect the angle between the line you are
calculating and the search area to be less than 45°. This VI locates the
intersection points between a set of parallel search lines, or rake, and the
edge of an object. The intersection points are determined by their
contrast and slope.
IMAQ Find Vertical Edge locates a vertical edge in a search area. Use
this VI when you expect the angle between the line you are calculating
and the search area to be more than 45°. Figure 6-9 shows a Find Vertical
Edge example.
Figure 6-9. Finding a Vertical Edge
1 Search Region   2 Search Lines   3 Detected Edge Points   4 Line Fit to Edge Points
IMAQ Find Circular Edge locates a circular edge in a search area. As
shown in Figure 6-10, this VI locates the intersection points between a
set of search lines defined by a spoke and the edge of an object. The
intersection points are determined by their contrast and slope.
Figure 6-10. Finding a Circular Feature
IMAQ Find Concentric Edge locates a straight edge in a circular search
area. This VI locates the intersection points between a set of concentric
search lines and the edge of an object. The intersection points are
determined by their contrast and slope.
1 Annular Search Region   2 Search Lines   3 Detected Edge Points   4 Circle Fit to Edge Points
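Once the spoke has produced edge points, fitting a circle to them is a least-squares problem. The Python sketch below uses the algebraic (Kåsa) fit solved with Cramer's rule; this is one standard technique, offered as an illustration rather than a description of NI Vision's actual fitting routine.

```python
def fit_circle(points):
    """Least-squares circle fit (Kasa method) to detected edge points.

    Each point contributes one linear equation u*x + v*y + c = x^2 + y^2
    with u = 2a, v = 2b, and c = r^2 - a^2 - b^2 for center (a, b) and
    radius r. The 3 x 3 normal equations are solved with Cramer's rule.
    """
    sxx = sxy = syy = sx = sy = sxz = syz = sz = 0.0
    for x, y in points:
        z = x * x + y * y
        sxx += x * x
        sxy += x * y
        syy += y * y
        sx += x
        sy += y
        sxz += x * z
        syz += y * z
        sz += z
    m = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, float(len(points))]]
    rhs = [sxz, syz, sz]

    def det3(a):
        return (a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
                - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
                + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]))

    d = det3(m)
    sol = []
    for col in range(3):
        mc = [row[:] for row in m]
        for row in range(3):
            mc[row][col] = rhs[row]
        sol.append(det3(mc) / d)
    a, b = sol[0] / 2.0, sol[1] / 2.0
    radius = (sol[2] + a * a + b * b) ** 0.5
    return a, b, radius

# Edge points that lie on a circle of radius 2 centered at (5, 3):
cx, cy, radius = fit_circle([(7, 3), (3, 3), (5, 5), (5, 1)])
# (cx, cy, radius) is approximately (5.0, 3.0, 2.0)
```

Because the fit is least-squares over all detected points, a single mislocated edge point perturbs the result only slightly, which is what makes spoke-based circle location robust.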
Clamping
The clamping functions use the edge detector functions to measure a
distance between two edges. The clamping functions find edges by
searching from the outside inward or from the inside outward. The functions
then calculate the distance between the first and last edges.
Clamping VIs
IMAQ Clamp Horizontal Max measures the distance in the horizontal
direction from the vertical sides of the search area to the center of the
search area. This VI locates edges along a set of parallel search lines, or
rake. The edges are determined by their contrast and slope.
IMAQ Clamp Vertical Max measures the distance in the vertical
direction from the horizontal sides of the search area to the center of the
search area. As shown in Figure 6-11, this VI locates edges along a set
of parallel search lines, or rake. The edges are determined by their
contrast and slope.
Figure 6-11. Vertical Clamping
1 Rectangular Search Region   2 Search Lines for Edge Detection   3 Detected Edge Points   4 Line Fit to Edge Points   5 Measured Distance
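A minimal Python sketch of a horizontal max clamp: scan each search line for the first and last bright pixels and keep the largest separation. A plain intensity threshold stands in here for the contrast and slope criteria the VIs use; the function name and threshold are illustrative assumptions.

```python
def clamp_horizontal_max(image, threshold):
    """Horizontal max clamp: on each row (search line), find the first
    and last pixels brighter than the threshold, then return the largest
    first-to-last distance over all rows, in pixels."""
    best = 0
    for row in image:
        hits = [i for i, v in enumerate(row) if v > threshold]
        if len(hits) >= 2:
            best = max(best, hits[-1] - hits[0])
    return best

# Silhouette of a part; its widest row spans columns 1 through 5.
image = [
    [0, 0, 255, 255, 255, 0, 0, 0],
    [0, 255, 255, 255, 255, 255, 0, 0],
    [0, 0, 255, 255, 0, 0, 0, 0],
]
width = clamp_horizontal_max(image, threshold=100)
# width == 4 (distance from the first to the last bright column)
```

A min clamp would keep the smallest distance instead of the largest; the scanning and edge-pairing logic is otherwise the same.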
Gauging
Use gauging to make critical dimensional measurements, such as lengths,
distances, diameters, angles, and counts, to determine if the product under
inspection is manufactured correctly. In Figure 6-12, the angle between the
two holes in reference to the center of the bracket is being measured. The
component is either classified or rejected, depending on whether the gauged
parameters fall within the user-defined tolerance limits.
Figure 6-12. Gauging
Gauging is used during both inline and offline production processes. During
inline processes, each component is inspected as it is manufactured. Visual
inline gauging inspection is a widely used inspection technique in
applications such as mechanical assembly verification, electronic packaging
inspection, container inspection, glass vial inspection, and electronic
connector inspection.
Gauging applications also measure the quality of products offline. A sample
of products is extracted from the production line so that measured distances
between features on the product can be studied to determine if the sample
falls within the specified tolerance range. You can measure the distances
between object edges, as well as distances between objects whose positions
were obtained using particle analysis or pattern matching. Edges also can be
combined to derive best fit lines, projections, intersections, and angles.
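Once the features are located, the gauging math itself is simple geometry. The Python sketch below measures a distance and an angle and checks the angle against tolerance limits, in the spirit of Figure 6-12; the coordinates and limits are made up for illustration.

```python
import math

def distance(p, q):
    """Euclidean distance between two feature points (x, y)."""
    return math.hypot(q[0] - p[0], q[1] - p[1])

def angle_at(vertex, p, q):
    """Angle in degrees formed at `vertex` by rays toward p and q,
    e.g. the angle between two holes referenced to a bracket center."""
    a1 = math.atan2(p[1] - vertex[1], p[0] - vertex[0])
    a2 = math.atan2(q[1] - vertex[1], q[0] - vertex[0])
    deg = abs(math.degrees(a1 - a2)) % 360
    return min(deg, 360 - deg)

center = (0.0, 0.0)      # bracket center (e.g. from a circle fit)
hole_a = (10.0, 0.0)     # hole centers located by earlier vision steps
hole_b = (0.0, 10.0)

d = distance(hole_a, hole_b)                # about 14.142 pixels
theta = angle_at(center, hole_a, hole_b)    # 90.0 degrees

# Pass/fail decision against user-defined tolerance limits:
in_tolerance = 85.0 <= theta <= 95.0
```

The final comparison is the classify-or-reject step the text describes: the component passes only when every gauged parameter falls inside its tolerance window.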
Exercise 6-2 Measure Distance VI
Goal
Find multiple edges in an image and use the edge locations to measure the
distance between the edges.
Scenario
Edge detection is the simplest and most commonly used algorithm in
machine vision applications. In this situation, a manufacturer must verify
the molding process used to create pin jumpers for electronic circuits. The
jumper is backlit to create a silhouette, and the edges across the finest details
of the part are measured.
Design
In this exercise, you will use Vision Assistant to create a program that
measures the distance between the edges in an image.
Flowchart
Figure 6-13. Flowchart of Measure Distance VI
Implementation
1. Launch Vision Assistant.
Click Open Image on the splash screen. If Vision Assistant is
already open, click FileOpen Image.
Open an Image File and Display It → Find Edges in the Image Along a Line → Measure the Distance Between Edges → Dispose of the Images (the middle steps are generated using Vision Assistant)
Navigate to the <Program Files>\National Instruments\
Vision Assistant 8.5\Examples\jumper directory. Enable
Select All Files to select all images.
Click Open.
2. Find edges that define the location of the jumper clips.
Select Processing Functions: Machine Vision»Edge Detector.
Draw a straight line across the top part of the jumper.
In the Edge Detector setup window, select Simple Edge Tool from
the Edge Detector drop-down listbox. The grayscale variation along
the line drawn is shown in the Line Profile, and all the edges found
are numbered and highlighted in green.
Select All Edges from the Look For drop-down listbox.
Click OK.
Figure 6-14. Using the Vision Assistant Edge Detector
3. Measure the distance between the edges found.
Select Machine Vision»Caliper. The caliper tool measures
quantities, such as distances and angles, and numbers the edges
found during the Edge Detection process.
Make sure the Geometric Feature is set to Distance.
Click the image on points 2 and 3. The points are automatically
selected and indicated in the Available Points list in the Caliper
Setup window.
Click Measure.
Click OK.
Figure 6-15. Measuring Distance with the Vision Assistant Caliper
4. Create a batch processing file to measure all of the images in the
directory.
Select Tools»Batch Processing.
Select Browser from the Image Source options box. The batch
processing will be performed on all the images you opened in Vision
Assistant at the beginning of this exercise.
Click Caliper 1 in the Script Steps box. Enable the Save Results
checkbox.
In the Save Options options box, click Setup and browse to the
<Exercises>\LabVIEW Machine Vision directory. Click
Current Folder.
Enter distance.txt as the File Prefix.
Verify that One file for all results is selected.
Click OK.
5. Run the batch process and check the results.
Click Run.
The script runs on all the images in the <Program Files>\
National Instruments\Vision Assistant 8.5\
examples\jumper directory. The results are logged to the file
named distance.txt in the <Program Files>\National
Instruments\Vision Assistant 8.5 directory.
Click OK when the batch processing is finished.
Click Return to exit Batch Processing.
Open the specified file in any text editor to look at the output.
6. Save the script as Distance Batch Processing.scr in the
<Exercises>\LabVIEW Machine Vision directory.
7. Create a LabVIEW VI from Vision Assistant.
Select Tools»Create LabVIEW VI.
Select <Exercises>\LabVIEW Machine Vision\Edge
Detection and Measurement.vi as the path to save the VI.
Click Finish.
Add the IMAQ Dispose VI (Vision and Motion»Vision Utilities»
Image Management»IMAQ Dispose) as the final step in the block
diagram.
Right-click the Image Display control on the front panel and select
Snapshot.
Examine the block diagram of the VI. Refer to the LabVIEW Help
for more information about edge detection VIs.
8. Run the VI.
In the file prompt, choose one of the jumper images from the
<Program Files>\National Instruments\Vision
Assistant 8.5\examples\jumper directory.
Run the VI.
Examine the results on the front panel. You will need to expand the
array to view more than one result at a time.
9. Save and close the VI.
10. Close Vision Assistant.
End of Exercise 6-2
Exercise 6-3 Concept: Machine Vision Tools
Goal
Learn about additional machine vision tools available in NI Vision and use
them to measure various aspects of an image.
Scenario
Edge detection is one of the simplest machine vision functions. NI Vision
provides many more advanced functions that elaborate on this basic idea.
Implementation
1. Open a blank VI.
2. Save the VI as Machine Vision Tools.vi in the <Exercises>\
LabVIEW Machine Vision directory.
3. Read an image from file.
Place the Vision Acquisition Express VI (Vision and Motion»
Vision Express»Vision Acquisition) on the block diagram.
In the NI Vision Acquisition Express configuration window, select
Simulated Acquisition»Folder of Images in the left-hand pane.
Click Next.
Select Single Acquisition with processing for the acquisition type.
Click Next.
Click the browse button next to the Image Path textbox.
Navigate to <Program Files>\National Instruments\
Vision\Examples\Images\Holes.tif and click OK.
Click Test to test the acquisition.
Click Finish to finish building the express VI.
On the front panel, right-click the Image Display control and select
Snapshot.
4. Find the center of the hole on the right using the spoke tool.
Place a Vision Assistant Express VI (Vision and Motion»Vision
Express»Vision Assistant) on the block diagram.
In the NI Vision Assistant configuration window, select Processing
Functions: Machine Vision»Circular Edge (Spoke).
Use the annulus tool to draw a circular region of interest around the
right top hole.
Tip You can maneuver the Region of Interest (ROI) after it has been placed by dragging
the cross at its center. You also can expand or contract either the inner or outer circle
radius by clicking on it and dragging. The circular edge of the hole should be between
the inner and outer circle of the ROI. Experiment with moving the ROI around and
changing the size of both the inner and outer circles.
Figure 6-16. Drawing a Circular ROI Around the Right Hole
On the Settings tab, disable the Auto Setup checkbox and explore
the different settings of each parameter.
Set the Direction to Inside to Outside.
Figure 6-17. Setting the Direction to Inside to Outside
After you find the center of the hole, click OK.
5. Find the center of the hole on the top left.
Repeat step 4 for the second hole. This will result in two Circular
Edge (Spoke) steps in your script.
6. Measure the width of the plate.
Select Processing Functions: Machine Vision»Clamp.
Draw a square ROI encompassing the bottom half of the object, as
shown in Figure 6-18.
Figure 6-18. Using the Vision Assistant Clamp
Click OK.
Note The parameters for Clamp are nearly identical to those of Find Circular Edge.
Examine these settings and determine the correct values for the application. Notice that
the distance between the detected edges (in pixels) is displayed at the bottom.
7. Use the Performance Meter to measure how long your Vision Assistant
script takes to process images.
Select Tools»Performance Meter.
Click Details to see how long each step in the script takes to run.
Time estimations become increasingly significant as your script
increases in size.
Figure 6-19. Vision Assistant Performance Meter
Note When creating an image processing application, remember that your own
computer may be faster or slower than the target machine. It is best to benchmark your
scripts on the target machine to ensure that the scripts will be fast enough in the real
application.
Click OK.
8. Select the results to display on your VI.
Click Select Controls.
Enable the Circular Edge (Spoke 1) 1 - Best Circle checkbox to
display the coordinates of the first circle.
Enable the Circular Edge (Spoke 2) 1 - Best Circle checkbox to
display the coordinates of the second circle.
Enable the Clamp 1 - Distance checkbox to display the length of the
object.
Click Finish to finish building the express VI.
9. Finish building the block diagram shown in Figure 6-20.
Figure 6-20. Machine Vision Tools VI Block Diagram
10. Save the VI.
11. Run the VI.
End of Exercise 6-3
Notes
7
Using Machine Vision Techniques
This lesson teaches you how to use pattern matching and geometric
matching, and how to set up a coordinate system.
Pattern matching locates regions of a grayscale image that match a
predetermined template. Pattern matching finds template matches
regardless of poor lighting, blur, noise, shifting of the template, or rotation
of the template.
Geometric matching locates regions in a grayscale image that match a
model, or template, of a reference pattern. Geometric matching is
specialized to locate templates that are characterized by distinct geometric
or shape information.
Coordinate systems are important in taking measurements in a machine
vision system because all measurements are defined with respect to a
coordinate system. A coordinate system is based on a characteristic feature
of the object under inspection, which is used as a reference for the
measurements.
Topics
A. Pattern Matching
B. Geometric Matching
C. Coordinate Systems
A. Pattern Matching
Use pattern matching to quickly locate known reference patterns or fiducials
in an image. With pattern matching, you create a model, or template, that
represents the object for which you are searching. Your machine vision
application then searches for the model in each acquired image, calculating
a score for each match. The score indicates how closely the model matches the
pattern found.
Creating Good Template Images
Selecting a proper template image is crucial to obtaining good results with
the pattern matching tool. Because the template image represents the pattern
that you want to find, make sure that all the important and unique
characteristics of the pattern are well defined in the template image. Several
factors are critical in creating a template image.
Symmetry: A rotationally symmetric template is less sensitive to
changes in rotation than one that is rotationally asymmetric.
Feature detail: A template with relatively coarse features is less
sensitive to variations in size and rotation than a model with fine
features. However, the model must contain enough distinctive features
to be identified in your application.
Positional information: A template with strong edges in both the
x and y directions is easier to locate.
Background information: Unique background information in a
template improves search performance and accuracy.
Pattern matching algorithms are some of the most important functions in
image processing because of their use in varying applications. You can use
pattern matching in the following three general applications.
Alignment: Determines the position and orientation of a known object
by locating fiducials. Use the fiducials as points of reference on the
object.
Gauging: Measures lengths, diameters, angles, and other critical
dimensions. If the measurements fall outside set tolerance levels, the
component is rejected. Use pattern matching to locate the object you
want to gauge.
Inspection: Detects simple flaws, such as missing components or
unreadable printing.
Pattern matching is important to many applications. It provides your
application with the number of instances of the template found within an
image and the location of each instance.
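The search-and-score behavior described above can be sketched in a few lines. The following is a pure-NumPy illustration of shift-invariant normalized cross-correlation matching, not NI Vision code; the synthetic image, the template crop, and the scaling of the score to the 0-1000 convention (1000 = perfect match) are assumptions for illustration.

```python
# A minimal sketch of score-based template matching: slide the template
# over the image, compute a normalized cross-correlation at each offset,
# and report the best location with a 0-1000 score (1000 = perfect match).
import numpy as np

def match_template(image, template):
    """Exhaustive shift-invariant search; returns ((x, y), score)."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t * t).sum())
    best_score, best_loc = -1.0, (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            w = image[y:y + th, x:x + tw]
            w = w - w.mean()
            denom = np.sqrt((w * w).sum()) * t_norm
            if denom == 0:
                continue  # flat region: no contrast, no meaningful score
            score = (w * t).sum() / denom
            if score > best_score:
                best_score, best_loc = score, (x, y)
    return best_loc, int(round(best_score * 1000))

image = np.zeros((40, 40))
image[15:25, 15:25] = 1.0                  # a bright square "fiducial"
template = image[13:27, 13:27].copy()      # crop of the reference pattern

loc, score = match_template(image, template)
print(loc, score)  # top-left corner of the best match and its score
```

In this synthetic case the template is found exactly where it was cropped, with a perfect score; in real images the score degrades gracefully with blur, noise, and lighting changes, which is what the sections below describe.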
Lesson 7 Using Machine Vision Techniques
National Instruments Corporation 7-3 LabVIEW Machine Vision and Image Processing
For example, you may search an image containing a printed circuit board for
one or more alignment marks (fiducials). The machine vision application
uses the marks to align the board for chip placement from a chip mounting
device. Figure 7-1a shows part of a circuit board. Figure 7-1b shows a
common fiducial used in printed circuit board (PCB) inspections or chip
pick-and-place applications.
Figure 7-1. Example of a Common Fiducial
Using a Pattern Matching Tool
Because pattern matching is the first step in many machine vision applications,
it should work reliably under various conditions. In automated machine vision
applications, the visual appearance of materials or components under
inspection can change due to factors such as orientation of the component,
scale changes, and lighting changes. The pattern matching tool maintains its
ability to locate the reference patterns despite these changes.
The following sections describe common situations in which the pattern
matching tool gives accurate results.
Pattern Orientation and Multiple Instances
A pattern matching tool can locate the reference pattern in an image even
if the pattern in the image is shifted, rotated, or slightly scaled. When a
pattern is rotated or scaled in the image, the pattern matching tool can detect
the following:
• The position of the pattern in the image
• The orientation of the pattern
• Multiple instances of the pattern in the image (if applicable)
Figure 7-2a shows a template image, or pattern. Figure 7-2b shows the
template shifted in the image. Figure 7-2c shows the template rotated in the
image. Figure 7-2d shows the template scaled in the image.
Figure 7-2. Pattern Orientation and Multiple Instances
Ambient Lighting Conditions
The pattern matching tool can find the reference pattern in an image
under conditions of uniform changes in the lighting across the image.
Figure 7-3 illustrates the typical conditions under which pattern matching
works correctly. Figure 7-3a shows the original template image. Figure 7-3b
shows the same pattern under bright light. Figure 7-3c shows the pattern
under poor lighting.
Figure 7-3. Examples of Lighting Conditions
Blur and Noise Conditions
Pattern matching can find patterns that have undergone some transformation
because of blurring or noise. Blurring usually occurs because of incorrect
focus or depth of field changes. Figure 7-4 illustrates typical blurring and
noise conditions under which pattern matching works correctly. Figure 7-4a
shows the original template image. Figure 7-4b shows changes to the image
caused by blurring. Figure 7-4c shows changes to the image caused by noise.
Figure 7-4. Examples of Blur and Noise
Improving Match Speed
Every pattern matching algorithm makes assumptions about the images and
pattern matching parameters used in machine vision applications. These
assumptions work for most applications, but there may be cases where these
assumptions are not accurate and the default values of the parameters used
in the algorithm are not optimal. NI Vision gives you the flexibility to adjust
the pattern matching parameters according to the images you want to
acquire and the requirements of your application. The NI Vision pattern
matching tool contains the following parameters that influence pattern
matching:
• Minimum Contrast: Use the minimum contrast parameter to
potentially increase the speed of the pattern matching tool. The pattern
matching tool ignores any image regions in which contrast values fall
below a set minimum contrast value. If the image you are searching
contains high contrast overall but also has some low contrast regions,
set a high minimum contrast value. Using a high minimum contrast value
excludes all areas in the image with low contrast, significantly reducing
the search region. If the image you are searching contains low contrast
throughout, set a low minimum contrast value to ensure that the pattern
matching tool searches for the template in all regions of the image.
• Rotation Angle Ranges: For matching objects that may vary in
orientation, such as cases where the template pattern can be at any
orientation in the image, the pattern matching algorithm assumes by
default that the pattern can take any orientation between 0° and 360°. If
you know that the pattern rotation in your application is restricted to a
certain range, for example, between -15° and 15°, you can provide
this information to the pattern matching algorithm. This speeds up your
search time because the pattern matching tool does not have to look for
the pattern at all angles.
• Shift and Rotation Learning and Searching: When you learn a
template, you must specify either shift-invariant or rotation-invariant
learning and matching. Rotation-invariant searches consume more
processing time to learn and search for a template than shift-invariant
searches because they must cover the full 0° to 360° range.
• Template Size and Search Area: A larger template requires more
processing time in the learning stage but less time in the search
process. A smaller search area increases your match speed.
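The rotation-angle-range effect above is easy to quantify: a rotation-invariant search must test the template at many candidate orientations, so limiting the expected range shrinks the work proportionally. The sketch below counts candidate angles; the 1° angular step is an assumption for illustration only, not the step NI Vision actually uses.

```python
# Sketch: restricting the rotation range shrinks the search space.
def candidate_angles(lo_deg, hi_deg, step_deg=1.0):
    """List the orientations a rotation-invariant matcher would test."""
    angles, a = [], lo_deg
    while a <= hi_deg:
        angles.append(a)
        a += step_deg
    return angles

full_range = candidate_angles(0.0, 359.0)    # default: any orientation
restricted = candidate_angles(-15.0, 15.0)   # known -15 to +15 degree range
print(len(full_range), len(restricted))      # 360 vs 31 orientations
```

At a 1° step, the restricted search tests roughly a twelfth of the orientations of the full search, which is the source of the speedup described above.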
B. Geometric Matching
When using geometric matching, you create a template that represents the
object for which you are searching. Your machine vision application then
searches for instances of the template in each inspection image and
calculates a score for each match. The score indicates how closely the
located matches resemble the template.
Geometric matching helps you quickly locate objects with good geometric
information in an inspection image. Figure 7-5 shows examples of objects
with good geometric or shape information.
Figure 7-5. Objects on which Geometric Matching is Designed to Work
You can use geometric matching in the following application areas:
• Gauging: Measures lengths, diameters, angles, and other critical
dimensions. If the measurements fall outside set tolerance levels, the
object is rejected. Use geometric matching to locate the object, or areas
of the object, you want to gauge. Use information about the size of the
object to prevent geometric matching from locating objects that are too
large or too small.
• Inspection: Detects simple flaws, such as scratches on objects, missing
objects, or unreadable print on objects. Use the occlusion score returned
by geometric matching to determine if an area of the object under
inspection is missing. Use the curve matching scores returned by
geometric matching to compare the boundary (or edges) of a reference
object to the object under inspection.
• Alignment: Determines the position and orientation of a known object
by locating points of reference on the object or characteristic features of
the object.
• Sorting: Sorts objects based on shape and/or size. Geometric matching
returns the location, orientation, and size of each object. You can use the
location of the object to pick up the object and place it into the correct
bin. Use geometric matching to locate different types of objects, even
when objects may partially occlude each other.
Geometric matching provides your application with the number of object
matches and their locations within the inspection image. Geometric
matching also provides information about the percentage change in size
(scale) of each match and the amount by which each object in the match is
occluded.
For example, you can search an image containing multiple automotive parts
for a particular type of part in a sorting application. Figure 7-6a shows an
image of the part that you need to locate. Figure 7-6b shows an inspection
image containing multiple parts and the located part that corresponds to the
template. Figure 7-7 shows the use of geometric matching in an alignment
application.
Figure 7-6. Example of a Part Sorting Application that Uses Geometric Matching
Figure 7-7. Example of an Alignment Application that Uses Geometric Matching
When Not to Use Geometric Matching
The geometric matching algorithm is designed to find objects that have
distinct geometric information. The fundamental characteristics of some
objects may make other search algorithms better suited than geometric
matching. For example, the template image in some applications may be
defined primarily by the texture of an object, or the template image may
contain numerous edges and no distinct geometric information. In these
applications, the template image does not have a good set of features for the
geometric matching algorithm to model the template. Instead, the pattern
matching algorithm would be a better choice.
In some applications, the template image may contain sufficient geometric
information, but the inspection image may contain too many edges. The
presence of numerous edges in an inspection image can slow the
performance of the geometric matching algorithm because the algorithm
tries to extract features using all the edge information in the inspection
image. In such cases, if you do not expect template matches to be scaled or
occluded, use pattern matching to solve the application.
Using a Geometric Matching Tool
Because geometric matching is an important tool for machine vision
applications, it must work reliably under various, sometimes harsh,
conditions. In automated machine vision applications, especially those
incorporated into manufacturing processes, the visual appearance of
materials or components under inspection can change because of factors
such as varying part orientation, scale, and lighting. The geometric
matching tool must maintain its ability to locate the template patterns
despite these changes. The following sections describe common situations
in which the geometric matching tool needs to return accurate results.
Part Quantity, Orientation, and Size
The geometric matching algorithm can detect the following items in an
inspection image:
• One or more template matches
• Position of the template match
• Orientation of the template match
• Change in size of the template match compared to the template image
You can use the geometric matching algorithm to locate template matches
that are rotated or scaled by certain amounts. Figure 7-8a shows a template
image. Figure 7-8b shows the template match rotated and scaled in the
image.
Figure 7-8. Examples of a Rotated and Scaled Template Match
Non-Linear or Non-Uniform Lighting Conditions
The geometric matching algorithm can find a template match in an
inspection image under conditions of non-linear and non-uniform lighting
changes across the image. These lighting changes include light drifts, glares,
and shadows. Figure 7-9a shows a template image. Figure 7-9b shows the
typical conditions under which geometric matching correctly finds template
matches.
Figure 7-9. Examples of Lighting Conditions
Contrast Reversal
The geometric matching algorithm can find a template match in an
inspection image even if the contrast of the match is reversed from the
original template image. Figure 7-10 illustrates a typical contrast reversal.
Figure 7-10a shows the original template image. Figure 7-10b shows an
inspection image with the contrast reversed. The geometric matching
algorithm can locate the part in Figure 7-10b with the same accuracy as the
part in Figure 7-10a.
Figure 7-10. Example of Contrast Reversal
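The contrast-reversal tolerance described above follows from the fact that geometric matching models edge geometry: inverting an image only flips the sign of its intensity gradient, leaving the gradient magnitude, the raw material of edge-based feature extraction, unchanged. The pure-NumPy check below illustrates this; the tiny synthetic image is an assumption, and NI Vision's actual feature extraction is more elaborate.

```python
# Gradient magnitude is invariant to contrast reversal, which is why an
# edge-based matcher scores a reversed part the same as the original.
import numpy as np

image = np.zeros((20, 20))
image[5:15, 5:15] = 1.0        # bright part on a dark background
reversed_img = 1.0 - image     # the same part with contrast reversed

def gradient_magnitude(img):
    gy, gx = np.gradient(img)
    return np.hypot(gx, gy)

same_edges = np.allclose(gradient_magnitude(image),
                         gradient_magnitude(reversed_img))
print(same_edges)  # True: identical edge maps despite reversed contrast
```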
Partial Occlusion
The geometric matching algorithm can find a template match in an
inspection image even when the match is partially occluded, either because
other parts overlap it or because the part under inspection does not lie
fully within the boundary of the image. In addition to locating occluded matches, the
algorithm returns the percentage of occlusion for each match.
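One simple way to think about the occlusion percentage is as the fraction of the template's area that is missing from the located match. The sketch below computes that fraction from two binary masks; the masks and the 10×10 part silhouette are assumed example values, and NI Vision computes its occlusion score internally rather than from user-supplied masks.

```python
# Sketch of an occlusion score: the fraction of the template silhouette
# that is missing from a located match.
import numpy as np

template_mask = np.ones((10, 10), dtype=bool)   # full part silhouette
match_mask = template_mask.copy()
match_mask[:, 7:] = False                       # right side hidden by a neighbor

occlusion_pct = (100.0 * (template_mask.sum() - match_mask.sum())
                 / template_mask.sum())
print(occlusion_pct)  # 30.0 percent of the part is occluded
```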
In many machine vision applications, the part under inspection may be
partially occluded by other parts that touch or overlap it. Also, the part may
seem partially occluded because of degradations in the manufacturing
process. Figure 7-11 illustrates different scenarios of occlusion under which
geometric matching can find a template match. Figure 7-9a represents the
template image for this example.
Figure 7-11. Examples of Matching Under Occlusion
Different Image Backgrounds
The geometric matching algorithm can find a template match even if the
inspection image has a background that is different from the background in
the template image. Figure 7-12 shows examples of geometric matching
locating a template match in inspection images with different backgrounds.
Figure 7-9a represents the template image for this example.
Figure 7-12. Example of Matching with Different Backgrounds
Exercise 7-1 Pattern Matching VI
Goal
Learn how to use pattern matching to locate objects in an image.
Scenario
Pattern matching is heavily used in semiconductor inspections. Pattern
matching can help verify part placement and product yield. In this exercise,
you will locate text for a jumper on a printed circuit board so the part can be
placed properly.
Design
You will create a VI that uses the Vision Assistant Express VI to implement
pattern matching.
Flowchart
Figure 7-13. Flowchart of Pattern Matching VI
Description
Open sample images that ship with NI Vision. Create a pattern to find.
Display the matches. Create a LabVIEW VI that will find the pattern.
Implementation
1. Open a blank VI.
2. Save the VI as Pattern Matching.vi in the <Exercises>\
LabVIEW Machine Vision directory.
3. Acquire images.
Place a Vision Acquisition Express VI (Vision and Motion»Vision
Express»Vision Acquisition) on the block diagram.
(Figure 7-13 flowchart: Open an Image File → Extract a Pattern to Match
from the Image → Look for Instances of the Pattern in the Image →
Display Matches)
In the NI Vision Acquisition Express configuration window, select
Simulated Acquisition»Folder of Images in the left-hand pane.
Click Next.
Select Continuous Acquisition with inline processing for the
acquisition type.
Set Acquire Image Type to Acquire Most Recent Image.
Click Next.
Click the browse button next to the Image Path textbox.
Navigate to <Program Files>\National Instruments\
Vision\Examples\Images\Pcb\PCB03-01.png and
click OK.
Enable the Cycle through Folder of Images checkbox.
Click Test to verify that your acquisition is configured correctly.
Click Stop when finished.
Click Finish to return to the block diagram.
4. Create a pattern template and find it in the image.
Place a Vision Assistant Express VI (Vision and Motion»Vision
Express»Vision Assistant) on the block diagram inside the While
Loop.
In the NI Vision Assistant configuration window, select File»Open
Image.
Navigate to the <Program Files>\National Instruments\
Vision\Examples\Images\Pcb directory. Enable the Select All
Files checkbox and click Open. If the Vision Assistant prompts you
to remove previously acquired images, select Yes.
Select Processing Functions: Machine Vision»Pattern
Matching.
Click Create Template. The Select a Template in the Image
window opens.
Draw a rectangular ROI around the inverted J3.
Figure 7-14. Selecting a Template in an Image
Click OK. Vision Assistant prompts you to save the template file.
Save the template as Pattern Matching Template.png in the
<Exercises>\LabVIEW Machine Vision directory.
Note A portable network graphic (.PNG) file must be used for templates because the
PNG format allows NI Vision to attach extra data to its header. The template vector map
is saved in this header.
Figure 7-15. Pattern Matching Setup
The selected template opens in the Pattern Matching Setup window. The
table under the image window describes where each match was found,
the score of the match (where a score of 1000 represents a perfect
match), and the angle of rotation of the pattern. A red overlay highlights
the patterns found in the image.
5. Change the settings and test your pattern matching on different images.
Click the Settings tab and enable the Search for Rotated Patterns
checkbox.
Figure 7-16. Pattern Matching Setup Window
Click OK.
In the upper left corner of the NI Vision Assistant window, click the
Next Image button.
Click the Make Image Active button to make the next image in the
browser the active image.
Click the Run Once button to run the script on the active image.
Click OK when the script is complete and the pattern is found.
Run the script on each image in the image browser by repeating the
steps above for making an image active and running the script.
Notice the score for each match and how the score is affected by changes
in rotation. If you have an image in which the pattern is not found, try
lowering the Minimum Score to 500. Change the Minimum Score in the
Pattern Matching Setup window, on the Settings tab. As you lower the
score, Vision Assistant reports matches that are less likely to be correct.
Click Select Controls.
In the Indicators section, place a check in the Pattern Matching 1 -
Matches checkbox and the Pattern Matching 1 - Number of
Matches checkbox.
Click Finish.
6. Finish building the block diagram shown in Figure 7-17.
Figure 7-17. Pattern Matching Block Diagram
7. The front panel should look similar to Figure 7-18.
Figure 7-18. Pattern Matching Front Panel
8. Save the VI.
Testing
1. Test the VI.
Display the front panel and run the VI. The template you created and
saved is automatically loaded and found in the images.
Check the score from the Matches indicator to see how well the
located pattern matched the template.
2. Examine the code generated by the Vision Assistant Express VI.
Go to the block diagram.
Right-click the Vision Acquisition Express VI and select Open
Front Panel.
Click the Convert button when prompted to convert to a subVI.
View the code generated by the Vision Acquisition Express VI.
Examine the file path constant wired to the IVA Match Pattern
Algorithm VI. This constant determines the path to the template
pattern you are matching. Notice that the path is automatically
populated with the location of the pattern you saved in the Vision
Assistant Express VI.
Double-click IVA Match Pattern Algorithm.vi to open it.
Open the block diagram. Turn on the context help by pressing
<Ctrl-H>.
Display Frame 0 of the Sequence structure.
The parameters from the Setup Match Parameters cluster are
unbundled and passed into the IMAQ Setup Match Pattern 2 VI.
Refer to the LabVIEW context help for more information about
this VI.
Display Frame 1 of the Sequence structure.
Frame 1 performs the pattern matching that was configured in
Frame 0. The Match Parameters cluster is unbundled and passed into
the IMAQ Match Pattern 2 VI. The optional rectangle input is used
to specify the location to search for the pattern.
Close the IVA Match Pattern Algorithm VI.
Close the generated subVI. Click Defer Decision if prompted to
save changes.
Go to the block diagram of Pattern Matching.vi. Notice that
the Vision Acquisition Express VI is pale yellow because you
converted it into a subVI.
Select Edit»Undo Change Attribute.
Click Don't Save when prompted to save changes. Notice that the
converted subVI is now blue because it has changed back into the
original Vision Acquisition Express VI.
3. Save and close the VI.
Challenge
1. Add code to interactively select a template when running this VI, rather
than loading a saved template. Refer to Exercise 6-1, Concept:
Region of Interest, for ideas.
2. Save and close the VI.
End of Exercise 7-1
C. Coordinate Systems
In a typical machine vision application, measurements are extracted from
an ROI instead of the entire image. The feature under inspection must
always appear inside the defined ROI in order to extract measurements from
that ROI.
When the location and orientation of the object under inspection is the same
in every inspection image, you can take measurements directly without
locating the object in each image.
In many applications, the object under inspection is not positioned in the
camera's field of view consistently enough to use fixed search areas. Using
a coordinate system allows you to define search areas that can shift and
rotate with the object under inspection. A coordinate system is defined by a
reference point (origin) and a reference angle in the image or by the lines
that make up its two axes.
Using Coordinate Systems
All measurements are defined with respect to a coordinate system. A
coordinate system is based on a characteristic feature of the object under
inspection, which is used as a reference for the measurements. When you
inspect an object, first locate the reference feature in the inspection image.
Choose a feature on the object that the software can reliably detect in every
image. Do not choose a feature that may be affected by manufacturing errors
that would make the feature impossible to locate in images of defective
parts.
You can restrict the region in the image in which the software searches
for the feature by specifying an ROI that encloses the feature. Defining an
ROI in which you expect to find the feature can prevent mismatches if the
feature appears in multiple regions of the image. An ROI also may improve
the locating speed.
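Restricting the search to an ROI amounts to searching a crop of the image and then translating the hit back into full-image coordinates. The brute-force sum-of-absolute-differences sketch below shows that bookkeeping; the image, feature, and ROI values are assumptions for illustration, not NI Vision code.

```python
# Sketch of an ROI-restricted feature search: search only the cropped
# region, then offset the result back into full-image coordinates.
import numpy as np

image = np.zeros((60, 60))
image[32:36, 40:44] = 1.0          # the reference feature (4x4 block)
template = np.ones((4, 4))

roi_x, roi_y, roi_w, roi_h = 30, 25, 25, 20   # ROI enclosing the feature
search = image[roi_y:roi_y + roi_h, roi_x:roi_x + roi_w]

best_sad, best_xy = None, None
for y in range(roi_h - 4 + 1):
    for x in range(roi_w - 4 + 1):
        sad = np.abs(search[y:y + 4, x:x + 4] - template).sum()
        if best_sad is None or sad < best_sad:
            best_sad, best_xy = sad, (x, y)

# Translate the ROI-relative hit back into image coordinates.
feature_x = best_xy[0] + roi_x
feature_y = best_xy[1] + roi_y
print(feature_x, feature_y)  # 40 32
```

Because only the ROI is scanned, the search visits far fewer candidate offsets than a whole-image search, which is the speed benefit mentioned above.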
Defining a Coordinate System
Complete the following general steps to define a coordinate system and
make measurements based on the new coordinate system.
1. Define a reference coordinate system.
a. Define a search area that encompasses the reference feature or
features on which you base your coordinate system. Make sure that
the search area encompasses the features in all your inspection
images.
b. Locate an easy-to-find reference feature of the object under
inspection. That feature serves as the base for a reference coordinate
system in a reference image. You can use two main techniques to
locate the feature: edge detection or pattern matching.
c. The software builds a coordinate system to keep track of the location
and orientation of the object in the image.
2. Set up measurement areas within the reference image in which you want
to make measurements.
3. Acquire an image of the object to inspect or measure.
4. Update the coordinate system. During this step, NI Vision locates the
features in the search area and builds a new coordinate system based on
the new location of the features.
5. Make measurements.
a. NI Vision computes the difference between the reference coordinate
system and the new coordinate system. Based on this difference, the
software moves the new measurement areas with respect to the new
coordinate system.
b. Make measurements within the updated measurement area.
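Step 5a above, computing the difference between the reference coordinate system and the new one and moving the measurement areas accordingly, can be sketched as a small rigid transform. The origins, angles, and ROI corners below are assumed example values; the rotate-then-re-anchor transform mirrors what the vision software does internally when it repositions measurement areas.

```python
# Sketch: move a measurement area with an updated coordinate system.
import numpy as np

def update_points(points, ref_origin, ref_angle_deg,
                  new_origin, new_angle_deg):
    """Map points defined on the reference image into the new system."""
    d = np.radians(new_angle_deg - ref_angle_deg)
    rot = np.array([[np.cos(d), -np.sin(d)],
                    [np.sin(d),  np.cos(d)]])
    rel = np.asarray(points, dtype=float) - ref_origin  # relative to ref
    return rel @ rot.T + new_origin                     # rotate, re-anchor

# Measurement ROI corners defined on the reference image.
roi = [(120.0, 80.0), (160.0, 80.0), (160.0, 110.0), (120.0, 110.0)]

# Reference system at (100, 70), 0 deg; part found at (130, 95), 90 deg.
moved = update_points(roi, (100.0, 70.0), 0.0, (130.0, 95.0), 90.0)
print(np.round(moved, 1))  # ROI corners repositioned with the part
```

The measurement area keeps its position and orientation relative to the part, so the same ROI definition works for every inspection image.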
Figure 7-19a illustrates a reference image with a defined reference
coordinate system. Figure 7-19b illustrates an inspection image with an
updated coordinate system.
Figure 7-19. Coordinate Systems of a Reference Image and Inspection Image
1 Search Area for the Coordinate System
2 Object Edges
3 Origin of the Coordinate System
4 Measurement Area
Exercise 7-2 Measurement with Coordinate System.vi
Goal
Define a single set of ROIs for any object placement within the image.
Scenario
In many inspections, it can be difficult to take a picture of every part in the
same area and position of the camera's field of view. This causes problems
when trying to define an ROI that applies to every part under inspection.
However, if a coordinate system is first defined on some feature of the part,
then the ROI can be defined relative to that coordinate system.
Description
In this exercise you will develop a simple inspection application that
measures features of a battery clamp. The location of the clamp is not static
in every image, so you must create a coordinate system to ensure that the
ROIs are inspecting the correct sections of the part.
Flowchart
Figure 7-20. Flowchart of Measurement with Coordinate System VI
You can use Vision Assistant to generate the machine vision algorithms and
later add decision-making code in LabVIEW.
(Figure 7-20 flowchart: Open a Pattern Matching Template → Open the
Reference Image of the Clamp → Find the Template Pattern in the Reference
Image and Set Up a Coordinate System → Open an Image of the Clamp →
Find the Template Pattern in the Image and Update the Coordinate System →
Measure the Clamp's Circularity → Measure the Prong Separation →
Measure Another Image? If Yes, open the next image; if No, Dispose of the
Images. The steps from finding the template pattern through measuring the
prong separation are generated using Vision Assistant.)
Implementation
1. In Vision Assistant, click File»Open Image.
2. Browse to <Program Files>\National Instruments\Vision\
Examples\Images\Battery and select Image00.jpg. Click Open.
3. Measure the circularity of the battery clamp and the separation between
its prongs.
Select Processing Functions: Machine Vision»Circular Edge
(Spoke).
Using the annulus tool, draw an ROI over the C-shaped opening of
the battery clamp. Adjust the ROI so that the ROI edges do not cover
the opening of the battery clamp. The ROI should not form a
complete circle.
Figure 7-21. Drawing an Annular ROI with the Circular Edge (Spoke)
Disable the Auto Setup checkbox in the Circular Edge (Spoke)
Setup window.
Set the Direction to Inside to Outside.
Figure 7-22. Circular Edge Setup Window
Click OK.
Select Machine Vision»Clamp. Draw a rectangular ROI over the
two parallel end pieces of the battery clamp.
Figure 7-23. Drawing a Rectangular ROI with the Clamp
Set the Process parameter to Vertical Max Caliper.
Figure 7-24. Clamp Setup Window
Click OK when you are finished.
4. Save your script.
Create a new folder in the <Exercises>\LabVIEW Machine
Vision directory called Measurement with Coordinate
System.
Select File»Save Script and save the script as Measure Battery
Clamp.scr in the <Exercises>\LabVIEW Machine Vision\
Measurement with Coordinate System directory.
5. In Vision Assistant, select Tools»Create LabVIEW VI and create a
LabVIEW VI based on your Vision Assistant script.
Save the VI as Measure Battery Clamp.vi in the
<Exercises>\LabVIEW Machine Vision\Measurement
with Coordinate System directory.
Click Next.
Ensure that Current Script is selected.
Click Next.
Select Image Control as the image source.
Click Finish.
6. Examine your LabVIEW block diagram. Right-click any VI and select
Help to display more information about that VI and its parameters.
The VI that you have created makes two measurements on the reference
image: the circularity of the battery clamp opening and the distance
between the parallel end pieces of the battery clamp. The part being
measured is subject to movement and rotation during an assembly
and inspection process. However, because the position of the ROI is
static, the ROI does not automatically adjust for any movement or
rotation in the position of the battery clamp. To compensate for changes
in position, you must set up a coordinate system based on the orientation
of a reference image. After setting up the coordinate system, use pattern
matching to compare the orientation of parts under inspection to the
orientation of a template image so that the parts can be measured
accurately.
The Battery directory contains several images of a battery clamp that
reflect the kinds of movement and rotation that are common in an
inspection environment. The directory also contains a template image,
Template.png. This template image was installed with NI Vision and
contains pattern matching information in the header of a PNG file. Only
PNG files can store information about spatial calibration, pattern
matching, and overlays for each image.
You will edit your block diagram to look like Figure 7-25.
Figure 7-25. Block Diagram of Measurement with Coordinate System VI
7. Use NI Vision to build a coordinate system by matching the template
pattern.
On the block diagram of the new VI, delete the Image In and error
in controls.
Place the IMAQ Find Coord Sys (Pattern) 2 VI (Functions»
Vision and Motion»Machine Vision»Coordinate System»
Find Coord Sys (Pattern) 2) on the block diagram to the left of the
IMAQ Find Circular Edge VI.
Connect the Image and Error cluster wires between the Find Coord
Sys (Pattern) 2 VI and the IMAQ Find Circular Edge VI.
Connect the Coordinate System wires between the IMAQ Find
CoordSys (Pattern) 2 VI, the IMAQ Find Circular Edge VI, and the
IMAQ Clamp Vertical Max VI.
Create a constant on the Settings input of the Find CoordSys
(Pattern) 2 VI and change the Match Mode enumeration to Rotation
Invariant.
8. Replace the Image Out indicator with an Image Display indicator.
Open the front panel.
Right-click the Image Out indicator and select Replace»Vision»
Image Display.
9. Add code to open the template file and open the image files to be
inspected.
Place the IMAQ ReadFile VI (Vision and Motion»Vision Utilities»
Files) on the block diagram to the left of the IMAQ Find Coord Sys
(Pattern) 2 VI.
Connect the Image and Error Cluster wires from the IMAQ ReadFile
VI to the IMAQ Find Coord Sys (Pattern) 2 VI.
Place the IMAQ Create VI to the left of the IMAQ ReadFile VI.
Connect the Image and Error Cluster wires from IMAQ Create to
IMAQ ReadFile.
Create a constant on the Image Name input of the IMAQ Create VI.
Enter Inspection for the input text.
Place a second IMAQ ReadFile VI on the block diagram, to the left
of the IMAQ Create VI.
Connect the error out output of the second IMAQ ReadFile VI to the
error in input of the IMAQ Create VI.
Connect the Image Out output of the second IMAQ ReadFile VI to
the Template input of the IMAQ Find Coord Sys (Pattern) 2 VI.
Create a constant on the File Path input of the second IMAQ
ReadFile VI and specify the following path to the template file:
<Program Files>\National Instruments\Vision\
Examples\Images\Battery\template.png.
Place a second IMAQ Create VI on the block diagram, to the left of
the second IMAQ ReadFile VI.
Connect the Image and Error Cluster wires from the IMAQ Create
VI to the IMAQ ReadFile VI.
Create a constant on the Image Name input of the second IMAQ
Create VI. Enter Template for the input text.
The code you added should look like Figure 7-26, which shows only the
left side of the VI.
Figure 7-26. Left Side of Measurement with Coordinate System VI
10. Add code to make the program analyze a batch of images in succession.
Draw a For Loop around the following VIs: the right-most IMAQ
ReadFile VI, IMAQ Find CoordSys (Pattern) 2 VI, IMAQ Find
Circular Edge VI, IMAQ Clamp Vertical Max VI. The loop should
encompass all inputs and outputs to the VIs except the final Error
cluster indicator.
Place the Wait (ms) function inside the For Loop. Create a constant
value of 500 on the input.
Place the IMAQ Load Image Dialog VI (Vision and Motion»
Vision Utilities»Files»IMAQ Load Image Dialog) to the left of the
For Loop. Connect the Paths output through an indexing loop tunnel
to the File Path input of the IMAQ ReadFile VI inside the For Loop.
Create a True constant on the Multiple Files? (No) input.
Create a string constant on the Prompt input. Enter the string
Select all images for inspection.
Create a file path constant on the Start Path input. Enter the
following path: <Program Files>\National Instruments\
Vision\Examples\Images\Battery.
Place the Equal to 0? function (Programming»Comparison»
Equal to 0?) inside the For Loop. Connect the loop index to the
input of the Equal to 0? function.
Draw a Case Structure (Programming»Structures) to the right of
the Equal to 0? function. Connect the output of the Equal to 0?
function to the Case Selector input. This creates a true/false
structure.
In the True case, place a Mode constant (created from the input on
the IMAQ Find CoordSys (Pattern) 2 VI) and select Find
Reference.
In the False case, place another Mode constant and select Update
CoordSys.
Right-click the error cluster input tunnel on the For Loop and select
Replace with Shift Register. Select the corresponding output
tunnel as the matching shift register.
Right-click the For Loop and select Add Shift Register.
On the left side of the For Loop, connect the output of the second
Shift Register to the Coordinate System In input of the IMAQ Find
CoordSys (Pattern) 2 VI.
On the right side of the For Loop, connect the Coordinate System
(duplicate) output of the IMAQ Clamp Vertical Max VI to the input
of the second Shift Register.
Connect the Coordinate System Out output of the IMAQ Find
CoordSys (Pattern) 2 VI to the Coordinate System input of the
IMAQ Find Circular Edge VI.
Connect the Coordinate System (duplicate) output of the IMAQ
Find Circular Edge VI to the Coordinate System input of the IMAQ
Clamp Vertical Max VI.
Replace the error out indicator with the Simple Error Handler VI
(Programming»Dialog and User Interface»Simple Error
Handler).
Check your code with Figure 7-27 to verify that it is written
correctly.
Figure 7-27. Block Diagram of Measurement with Coordinate System VI
You have added the VIs necessary for learning the coordinate system
from the original image. The ROIs for IMAQ Find Circular Edge and
IMAQ Clamp Vertical Max are now referenced to the pattern found in the
image. The ROIs can reposition themselves correctly in a new image if
the pattern is found at a different location or angle.
Testing
1. Display the front panel and run your VI.
2. When prompted, select the images in the Battery directory. Because
Image00.jpg is your reference image, be sure to select Image00.jpg
first.
End of Exercise 7-2
8
Processing Binary Images
This lesson teaches you how to collect information with histograms, how to
perform a threshold, how to use binary morphology in NI Vision, and how
to make particle measurements.
The histogram is a fundamental image analysis tool that describes the
distribution of the pixel intensities in an image. You can use the histogram
to determine if the overall intensity in the image is high enough for your
inspection task.
You can use particle analysis to detect connected regions of pixels in an
image and then make selected measurements of those regions. These
regions are commonly referred to as particles. A particle is a contiguous
region of nonzero pixels. You can extract particles from a grayscale image
by thresholding the image into background and foreground states. Zero
valued pixels are in the background state, and nonzero valued pixels are in
the foreground.
A binary image is an image containing particle regions with pixel values of
1 and a background region with pixel values of 0. Binary images are the
result of the thresholding process.
Because thresholding is a subjective process, the resulting binary image
may contain unwanted information, such as noise particles, particles
touching the border of images, particles touching each other, and particles
with uneven borders. By affecting the shape of particles, morphological
functions can remove the unwanted information and improve the
information in the binary image.
After the thresholding process, you can use particle measurements to make
shape measurements on particles in a binary image.
Topics
A. Collecting Image Information with Histograms
B. Thresholds
C. Morphology
D. Making Particle Measurements
E. Using the Golden Template
A. Collecting Image Information with Histograms
A histogram provides a general description of the contents of an image and
helps identify various components such as the background, objects,
and noise. Use the histogram to determine if the overall quality of the image
is high enough for your inspection task. You can also use the histogram to
determine whether an image contains distinct regions of certain grayscale
values or to tune the image acquisition conditions.
A histogram is usually presented as a plot, or histograph, of the number of
occurrences of a grayscale value in relation to the intensity of that value.
Figure 8-1 illustrates a histogram report and its histograph in NI Vision
Assistant.
Figure 8-1. Histogram and Histograph
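In code terms, a histogram is simply a count of how many pixels take each grayscale value. A minimal Python sketch of the idea for an 8-bit image stored as nested lists (this is not the NI Vision API, just the underlying computation):

```python
def histogram(image, levels=256):
    """Count occurrences of each grayscale value in an 8-bit image."""
    counts = [0] * levels
    for row in image:
        for pixel in row:
            counts[pixel] += 1
    return counts

image = [[0, 0, 128],
         [255, 128, 128]]
hist = histogram(image)
print(hist[0], hist[128], hist[255])  # → 2 3 1
```

The resulting `counts` list is exactly what the histograph plots: occurrences on the y-axis against grayscale value on the x-axis.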
You can detect two common problems, saturation and lack of contrast, by
looking at the histogram.
Saturation
Too little light in the imaging environment leads to underexposure of the
imaging sensor, whereas too much light causes overexposure, or saturation,
of the imaging sensor. Images acquired in underexposed or saturated
conditions do not contain all the information about the scene that you need
to inspect. It is important to detect these imaging conditions and correct for
them during setup of your imaging system.
You can detect whether a sensor is underexposed or saturated by looking at
the histogram of an image acquired with the sensor. An underexposed image
contains a large number of pixels with low grayscale values, which are
depicted by a peak at the lower end of the histogram, as shown in Figure 8-3.
An overexposed, or saturated, image contains a large number of pixels with
very high gray-level values. This condition is represented by a peak at the
upper end of the histogram, as shown in Figure 8-4.
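Either condition shows up as a large pixel population at one end of the histogram, so a rough automated check is possible. The sketch below is illustrative only; `exposure_check`, the 5-level tail, and the 20 percent limit are invented for this example and are not NI defaults:

```python
def exposure_check(hist, tail=5, limit=0.2):
    """Flag under/overexposure when more than `limit` of all pixels fall
    within `tail` levels of either end of the histogram.
    The tail width and limit here are illustrative assumptions."""
    total = sum(hist)
    low = sum(hist[:tail]) / total
    high = sum(hist[-tail:]) / total
    if low > limit:
        return "underexposed"
    if high > limit:
        return "saturated"
    return "ok"

# 8-bit histogram with a large spike at the top end, as in Figure 8-4
hist = [10] * 256
hist[255] = 20000
print(exposure_check(hist))  # → saturated
```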
Figure 8-2. Original Image and Histogram
Figure 8-3. Underexposed Image and Histogram
Figure 8-4. Saturated Image and Histogram
Lack of Contrast
A widely used type of imaging application involves inspecting and counting
parts of interest in a scene. A strategy to separate the objects from the
background relies on a difference between the object and background
intensities (for example, a bright part and a darker background). The
histogram of an image with high contrast reveals two or more well-separated
intensity populations (an example can be seen in Figure 8-2). Tune your
imaging setup until the histogram of your acquired images has the contrast
required by your application.
B. Thresholds
A binary image is an image containing particle regions with pixel values
of 1 and a background region with pixel values of 0. The thresholding
process defines which values (ranging from 0 to 255) constitute the white
objects, and which values (ranging from 0 to 255) constitute the black
background in your image. You can use information from the image
histogram to determine your threshold values, as shown in Figure 8-5.
The threshold process lets you specify an upper and lower limit for pixel
intensity values in your image. Pixels inside the bounds of the limits are set
to a pixel value of 1, and pixels outside the bounds of the limits are set to a
pixel value of 0. The result is a binary image containing intensity levels of
only ones and zeros. You can use binary images in many other image
processing functions, including masks, logical operations such as AND and
OR, binary morphology operations, and particle analysis.
Figure 8-5a illustrates the original image. Figure 8-5b illustrates the
histograph of the image. The histograph contains three spikes representing
the dark, medium, and light areas of the image, which are positioned from
left to right, respectively. Notice that the threshold line on the histograph is
drawn between the medium and light areas of the histograph. This line
determines the appearance of the thresholded image, shown in Figure 8-5c.
Figure 8-5. The Thresholding Process
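The operation itself reduces to a per-pixel comparison against the threshold interval. A minimal Python sketch of the idea (nested lists stand in for an 8-bit image; this is not the NI Vision API):

```python
def threshold(image, lower, upper):
    """Set pixels inside [lower, upper] to 1 and all others to 0,
    producing a binary image."""
    return [[1 if lower <= p <= upper else 0 for p in row]
            for row in image]

image = [[12, 166, 200],
         [90, 170, 255]]
print(threshold(image, 166, 255))  # → [[0, 1, 1], [0, 1, 1]]
```

With the interval set to 166–255, as in Figure 8-5, only the light areas of the image survive as foreground pixels.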
Exercise 8-1 Histogram Grab VI
Goal
Use a graphical histogram to measure pixel intensity during a live image
acquisition.
Scenario
You will enhance a VI you previously created, Grab and Display VI, which
is located in the <Exercises>\LabVIEW Machine Vision directory.
Description
You will edit the block diagram of the Grab and Display VI to look like
Figure 8-6.
Figure 8-6. Block Diagram of Histogram Grab VI
Implementation
1. Launch LabVIEW and open Grab and Display.vi (or Grab and
Display 1394.vi).
2. Save the VI as Histogram Grab.vi (or Histogram Grab
1394.vi) in the <Exercises>\LabVIEW Machine Vision
directory.
3. Use the IMAQ Histogram VI to measure the pixel intensity of every
displayed image.
Place a new Waveform Graph control (Modern»Graph»
Waveform Graph) on the front panel. Name the graph Histogram
Graph.
On the block diagram, add the IMAQ Histograph VI (Vision and
Motion»Image Processing»Analysis»IMAQ Histograph) to the
While Loop after the Vision Acquisition Express VI.
Connect the Image Out output of the Vision Acquisition Express VI
to the Image input of the IMAQ Histograph VI.
Connect the Histogram Graph output of IMAQ Histograph to the
terminal of the Waveform Graph you placed on the front panel.
Arrange and wire the block diagram as shown in Figure 8-6.
4. Save the VI.
5. Run the VI to display a histograph as you acquire images.
6. Leave the Histogram Grab VI and LabVIEW open for the next exercise.
End of Exercise 8-1
Exercise 8-2 Concept: Histogram and Threshold Grab
Goal
Use your histogram analysis to adjust a live thresholding operation.
Scenario
You will enhance a VI you previously created, Histogram Grab.vi,
which is located in the <Exercises>\LabVIEW Machine Vision
directory.
Description
You will edit the block diagram of Histogram Grab.vi to look like
Figure 8-7.
Figure 8-7. Block Diagram of Histogram and Threshold Grab VI
Implementation
1. Add thresholding to Histogram Grab.vi.
Open Histogram Grab.vi (or Histogram Grab 1394.vi) if
it is not already open.
Save the VI as Histogram and Threshold Grab.vi in the
<Exercises>\LabVIEW Machine Vision directory.
On the front panel, place another Image Display indicator (Vision»
Image Display) and name it Binary Image.
Right-click the new Image Display control and select Palette»
Binary.
Place a Vision Assistant Express VI (Vision and Motion»Vision
Express»Vision Assistant) on the block diagram.
In the NI Vision Assistant configuration window, select File»Open
Image.
Navigate to Acquired Image.jpg in the <Exercises>\
LabVIEW Machine Vision directory and click Open. If the
Vision Assistant prompts you to remove previously acquired
images, select Yes.
Select Processing Functions: Grayscale»Threshold.
Your original image displays in the browser in the upper left corner of
the window, while the result of the threshold is shown in the main
display window. With this interactive display, you can adjust the
threshold settings until you see meaningful results.
In the bottom left corner of the window, a histogram shows the pixel
values in the original image. When the threshold is set to Look for Bright
Objects, the pixels in the range above the marker will be given values
of 1, and the pixels with values below the marker will be assigned a
value of 0. When the threshold is set to Look for Dark Objects, pixels
below the marker value will be assigned a value of 1 and pixels above
the marker value will be assigned a value of 0. If the threshold is set to
Look for Gray Objects, two markers will appear. Pixels within the
marked range will be assigned a value of 1 and pixels outside of the
marked range will be assigned a value of 0. Using the threshold
settings, you can adjust your values to separate the area of interest in the
image.
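The three Look For modes differ only in which side, or band, of the marker values becomes the foreground. A schematic Python version of the rule (`threshold_mode` is invented for this sketch, and the exact inclusive bounds are an assumption rather than taken from the NI documentation):

```python
def threshold_mode(image, mode, low, high=255):
    """Sketch of the three threshold modes: bright keeps pixels above
    the marker, dark keeps pixels below it, and gray keeps pixels
    inside the marked range. Bounds are assumed inclusive."""
    def keep(p):
        if mode == "bright":
            return p >= low
        if mode == "dark":
            return p <= low
        return low <= p <= high  # "gray" objects

    return [[1 if keep(p) else 0 for p in row] for row in image]

row = [[0, 100, 200]]
print(threshold_mode(row, "bright", 150))    # → [[0, 0, 1]]
print(threshold_mode(row, "dark", 150))      # → [[1, 1, 0]]
print(threshold_mode(row, "gray", 50, 150))  # → [[0, 1, 0]]
```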
Figure 8-8. Performing a Threshold in Vision Assistant
Set the Threshold to Look for Gray Objects.
Move the black and white markers so that desired portions of the
original image are displayed in red.
Click OK. The Threshold 1 step is now listed in your script. All of
the pixels in red now have a pixel intensity of 1, while everything
else in the image is black, indicating a pixel value of 0.
Note To make changes to any step in the script, double-click that step.
Experiment with the other options.
Click Cancel to return to the original step settings and exit the
Threshold step.
Click Select Controls.
In the Controls section, enable the Threshold 1»Range checkbox.
Click Finish.
2. Finish building the block diagram shown in Figure 8-9.
Figure 8-9. Histogram and Threshold Grab VI Block Diagram
Right-click the Range(Threshold 1) input of the Vision Assistant
Express VI and select Create»Control.
Place an IMAQ Create VI (Vision and Motion»Vision Utilities»
Vision Management»IMAQ Create) on the block diagram to the
left of the While Loop. Right-click the Image Name input and select
Create»Constant. Set the string constant to Threshold. Wire the
New Image output of the IMAQ Create VI to the Image Dst input of
the Vision Assistant Express VI.
Delete the error cluster constant to the left of the While Loop.
Place a second IMAQ Dispose VI (Vision and Motion»Vision
Utilities»Vision Management»IMAQ Dispose) to the right of the
While Loop.
Wire the block diagram as shown in Figure 8-9.
3. Save the VI.
4. Run the VI.
Run the VI to display continuous thresholds of images.
End of Exercise 8-2
C. Morphology
As you learned, a binary image is an image containing particle regions with
pixel values of 1 and a background region with pixel values of 0 that result
from the thresholding process. Thresholding can cause the resulting binary
image to contain unwanted information, such as noise particles, particles
touching the image border, particles touching each other, and particles with
uneven borders. Morphological functions can remove this unwanted
information, thus refining the information in the binary image.
Binary morphological operations extract and alter the structure of particles
in a binary image. Use these operations during your inspection application
to improve the information in a binary image before making particle
measurements, such as the area, perimeter, and orientation.
Primary Morphology Functions
The primary morphology functions, erosions and dilations, apply to
binary images in which particles have been set to 1 and the background is
equal to 0. Other transformations are combinations of these two functions.
A single VI (IMAQ Morphology) performs the primary operations:
erosions, dilations, openings, closings, and contour extractions. The
advanced operations are performed by multiple VIs, each of which is
responsible for a single type of operation. These operations can separate
particles, remove either small or large particles, keep or remove particles
that were filtered according to morphological parameters, fill holes in
particles, remove particles that touch the boundary of the image border, and
create the outline of particles.
You also can use these transformations to prepare particles for quantitative
analysis, to observe the geometry of regions, and to extract the simplest
forms for modeling and identification purposes.
This section describes the following primary morphology transformations:
Erosion
Dilation
Opening
Closing
Note In the following descriptions, the term pixel denotes a pixel with a grayscale value
of 1, and the term particle denotes a group of neighboring pixels.
Connectivity
After you identify the pixels belonging to a specified intensity threshold,
NI Vision groups them into particles. This process relies on the concept of
adjacent pixels, or connectivity. In some functions, you can set the pixel connectivity
to specify how NI Vision determines whether two adjacent pixels are
included in the same particle.
With connectivity-4, two pixels are considered part of the same particle if
they are horizontally or vertically adjacent. With connectivity-8, two pixels
are considered part of the same particle if they are horizontally, vertically, or
diagonally adjacent. Figure 8-10 illustrates the two types of connectivity.
If your binary image contains particles that touch at one point, use
connectivity-4 to ensure that NI Vision recognizes each particle. If you have
particles that contain narrow areas, use connectivity-8. When you select a
connectivity setting, you should use the same connectivity setting
throughout your application.
Figure 8-10. Connectivity Settings
Figure 8-10 illustrates how connectivity-4 and connectivity-8 affect the way
the number of particles in an image is determined. In Figure 8-10a, the
image has two particles with connectivity-4. In Figure 8-10b, the same image
has one particle with connectivity-8.
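Particle detection under the two connectivity settings can be sketched as a flood fill over the nonzero pixels. The following Python fragment illustrates the rule; it is not the NI Vision implementation:

```python
def count_particles(image, connectivity=4):
    """Count particles in a binary image using flood fill under
    connectivity-4 (edge-adjacent) or connectivity-8 (edge- or
    diagonal-adjacent)."""
    if connectivity == 4:
        steps = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    else:
        steps = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                 if (dr, dc) != (0, 0)]
    rows, cols = len(image), len(image[0])
    seen = set()
    count = 0
    for r in range(rows):
        for c in range(cols):
            if image[r][c] and (r, c) not in seen:
                count += 1               # found a new particle
                stack = [(r, c)]
                while stack:             # flood-fill its pixels
                    cr, cc = stack.pop()
                    if (cr, cc) in seen:
                        continue
                    seen.add((cr, cc))
                    for dr, dc in steps:
                        nr, nc = cr + dr, cc + dc
                        if (0 <= nr < rows and 0 <= nc < cols
                                and image[nr][nc]):
                            stack.append((nr, nc))
    return count

# Two blobs touching only at a corner, as in Figure 8-10
image = [[1, 1, 0],
         [1, 1, 0],
         [0, 0, 1]]
print(count_particles(image, 4), count_particles(image, 8))  # → 2 1
```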
Erosion and Dilation Functions
Erosion decreases the size of the objects in the image. An erosion operation
removes a layer of pixels along the boundary of the particle. The number of
layers removed from the boundary depends on the size of the structuring
element used for the operation. A 3 3 element removes one layer of pixels,
whereas a 5 5 element removes two layers, and so on. Erosion eliminates
pixels isolated in the background and removes peninsulas of small width.
Figure 8-11 illustrates erosion.
Figure 8-11. Erosion on an Image
Dilation increases the size of the objects in the image. You can think of each
dilation operation as adding a layer of pixels around the boundary of the
object, including the inside boundary if the object has a hole inside it. The
number of layers added by each operation depends on the size of the
structuring elements. A 3 × 3 element adds one layer of pixels, whereas a
5 × 5 element adds two layers, and so on. Dilation has the reverse effect of
an erosion, because dilating objects is equivalent to eroding the background.
Dilation eliminates tiny holes isolated in objects and also removes gaps or
bays of insufficient width. Figure 8-12 illustrates dilation.
Figure 8-12. Dilation on an Image
Use erosion to separate particles for counting and to eliminate one-pixel
particles that constitute noise. Use dilation to fill in small holes in a
particle. Calculate the area by counting the pixels in a neighborhood that
have a value of one. If you do not want to significantly alter the size of your
object, use the Open and Close functions.
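Both primary operations can be written as a neighborhood test. The sketch below is plain Python, not the NI Vision implementation, and it treats out-of-bounds neighbors as background, which is one of several possible border policies:

```python
def erode(image, size=3):
    """Erosion with a size x size square structuring element: a pixel
    keeps the value 1 only if every pixel in its neighborhood is 1.
    Out-of-bounds neighbors count as background (assumed policy)."""
    k = size // 2
    rows, cols = len(image), len(image[0])
    def full(r, c):
        return all(
            0 <= r + dr < rows and 0 <= c + dc < cols
            and image[r + dr][c + dc]
            for dr in range(-k, k + 1) for dc in range(-k, k + 1))
    return [[1 if full(r, c) else 0 for c in range(cols)]
            for r in range(rows)]

def dilate(image, size=3):
    """Dilation: a pixel becomes 1 if any pixel in its neighborhood
    is 1 (equivalent to eroding the background)."""
    k = size // 2
    rows, cols = len(image), len(image[0])
    def hit(r, c):
        return any(
            0 <= r + dr < rows and 0 <= c + dc < cols
            and image[r + dr][c + dc]
            for dr in range(-k, k + 1) for dc in range(-k, k + 1))
    return [[1 if hit(r, c) else 0 for c in range(cols)]
            for r in range(rows)]

image = [[0, 0, 0, 0, 0],
         [0, 1, 1, 1, 0],
         [0, 1, 1, 1, 0],
         [0, 1, 1, 1, 0],
         [0, 0, 0, 0, 0]]
print(erode(image))  # with a 3 x 3 element, only the center pixel survives
```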
Open and Close Functions
The Open function is an erosion followed by a dilation. This function
removes small particles and smooths boundaries. This operation does not
significantly alter the area and shape of particles because erosion and
dilation are dual transformations in which borders removed by the erosion
process are restored during dilation. However, small particles eliminated
during the erosion are not restored by the dilation. Figure 8-13 illustrates the
Open function.
Figure 8-13. Open Function on an Image
The Close function is a dilation followed by an erosion. This function fills
tiny holes and smooths boundaries. This operation does not significantly
alter the area and shape of particles because dilation and erosion are
morphological complements in which borders expanded by the dilation
function are then reduced by the erosion function. However, erosion does
not restore any tiny holes filled during dilation. Figure 8-14 illustrates the
Close function.
Figure 8-14. Close Function on an Image
Neither Open nor Close significantly affects the size of the object. Use Open
and Close when particle boundaries need smoothing. You also can obtain
more effective smoothers by sequencing the open and close operators to give
an open-close operator or a close-open operator. Both operations remove
extraneous structures without greatly affecting the size of the object. A
notable difference between these operations is that the Open-Close
operation tends to link neighboring objects together, but the Close-Open
operation tends to link neighboring holes together.
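Because Open and Close are compositions of the two primary operations, they are short to express. The helper `_morph` below bundles a minimal 3 × 3 erosion/dilation so the sketch is self-contained; it is illustrative only, with out-of-bounds neighbors treated as background:

```python
def _morph(image, keep_all):
    """Apply a 3x3 square structuring element: erosion when keep_all
    is True (all neighbors must be 1), dilation when False (any
    neighbor suffices). Out-of-bounds reads as background."""
    rows, cols = len(image), len(image[0])
    def hood(r, c):
        return [image[r + dr][c + dc]
                if 0 <= r + dr < rows and 0 <= c + dc < cols else 0
                for dr in (-1, 0, 1) for dc in (-1, 0, 1)]
    test = all if keep_all else any
    return [[1 if test(hood(r, c)) else 0 for c in range(cols)]
            for r in range(rows)]

def binary_open(image):
    # Erosion then dilation: small particles vanish and are not restored
    return _morph(_morph(image, True), False)

def binary_close(image):
    # Dilation then erosion: tiny holes are filled and stay filled
    return _morph(_morph(image, False), True)

# A lone noise pixel next to a larger block
image = [[1, 0, 0, 0, 0],
         [0, 0, 1, 1, 1],
         [0, 0, 1, 1, 1],
         [0, 0, 1, 1, 1]]
print(binary_open(image))  # the noise pixel is removed, the block remains
```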
a. Original Image b. Open
a. Original Image b. Close
D. Making Particle Measurements
After you create a binary image and improve it, you can make particle
measurements. With these measurements you can determine the location of
particles and their shape features. Use the following VIs to perform particle
measurements:
IMAQ Particle Analysis Report: This VI returns the number of
particles in an image and a report containing the area, number of holes,
center of mass, orientation, dimensions, and bounding rectangle of the
particles in either pixel or real-world values. You can use the bounding
rectangle or center of mass to determine the location of the particle in
the image.
IMAQ Particle Analysis: This VI returns the number of particles in an
image and a user-selectable report containing the most commonly used
measurements, including the area, bounding rectangle, and perimeter of
particles.
Table 8-1 lists some of the measurements that the IMAQ Particle Analysis
VI returns.
Table 8-1. Particle Measurements

Area (pixels): Area of the particle in pixels
Area (calibrated): Area of the particle in user-defined units
Number of holes: Number of holes in the particle
Holes area (pixels): Sum of the areas of each hole in the particle, in pixels
Total area: Area of the particle and its holes
Width: Width of bounding rectangle in user-defined units
Height: Height of bounding rectangle in user-defined units
Longest segment length: Length of longest horizontal line segment in a particle
Perimeter: Length of a boundary of a region
Holes perimeter: Perimeter of all holes in user-defined units
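For a single particle, the most common measurements reduce to simple statistics over the coordinates of the nonzero pixels. A Python sketch of the idea, in pixel units only (`particle_report` is invented for this example; NI Vision computes these and many more measurements per particle):

```python
def particle_report(image):
    """Area, bounding rectangle, and center of mass of the nonzero
    pixels in a binary image (single-particle sketch, pixel units)."""
    pts = [(r, c) for r, row in enumerate(image)
           for c, p in enumerate(row) if p]
    area = len(pts)
    rows = [r for r, _ in pts]
    cols = [c for _, c in pts]
    return {
        "area": area,
        "bounding rect": (min(rows), min(cols), max(rows), max(cols)),
        "center of mass": (sum(rows) / area, sum(cols) / area),
    }

image = [[0, 1, 1],
         [0, 1, 1],
         [0, 0, 0]]
print(particle_report(image))
```

The bounding rectangle or center of mass locates the particle in the image, as noted above for the IMAQ Particle Analysis Report VI.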
Exercise 8-3 Locate Bracket Holes VI
Goal
Use morphological functions to analyze an object that could change position
and shape.
Scenario
Although coordinate systems are useful for tracking objects that change
position within an image, there are some situations in which using a
coordinate system is inappropriate. In this example, the shape of a bracket
must be verified before it is shipped.
You could assign a coordinate system to one of its distinct features, but this
assumes that the other features of the bracket have the same relative location
in every image. Instead, you can use morphological functions to find the
features of the bracket regardless of their location. The features can be
analyzed to determine the shape of the bracket.
Design
In this exercise, you will create a VI that inspects an image of a bracket by
thresholding the image. Then you will improve the binary image and
measure the desired particles to determine the straightness of the bracket.
Flowchart
Figure 8-15. Flowchart of Locate Bracket Holes VI
Implementation
1. In LabVIEW, open a blank VI.
2. Save the VI as Locate Bracket Holes.vi in the <Exercises>\
LabVIEW Machine Vision directory.
3. Acquire an image.
Place a Vision Acquisition Express VI (Vision and Motion»Vision
Express»Vision Acquisition) on the block diagram.
In the NI Vision Acquisition Express configuration window, select
Simulated Acquisition»Folder of Images in the left-hand pane.
Click Next.
Select Single Acquisition with processing for the acquisition type.
(Figure 8-15 shows the following flow: Open an Image of a Bracket; Isolate the Background Color to Find the Mounting Holes; Remove Extra Particles to Isolate the Holes; Find the Center of Each Hole, with these processing steps generated using Vision Assistant; then, if there is another image to examine, repeat; otherwise, Generate a Report.)
Click Next.
Click the browse button next to the Image Path textbox.
Navigate to <Program Files>\National Instruments\
Vision Assistant 8.5\Examples\bracket\
Bracket1.jpg and click OK.
Click Test to test the acquisition.
Click Finish to finish building the express VI.
On the front panel, right-click the Image Display indicator and select
Snapshot.
4. Threshold the image to define the features of the bracket.
Place a Vision Assistant Express VI (Vision and Motion»Vision
Express»Vision Assistant) on the block diagram.
Select Processing Functions: Grayscale»Threshold.
Set the Look For drop-down listbox to Dark Objects.
Set the Maximum to 165.
Click OK.
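Conceptually, this Threshold step keeps every pixel whose intensity is at or below the Maximum value, since you are looking for dark objects. The following NumPy sketch illustrates the idea with the same 165 cutoff (this is an illustration, not the NI implementation; the 3 × 3 patch is invented test data):

```python
import numpy as np

def threshold_dark_objects(gray, maximum=165):
    """Return a binary mask that is 1 where the pixel intensity is
    at or below `maximum` (dark objects on a bright background)."""
    return (gray <= maximum).astype(np.uint8)

# Invented 3x3 grayscale patch: one dark "hole" pixel on a bright background.
patch = np.array([[200, 210, 205],
                  [198,  40, 202],
                  [207, 211, 199]], dtype=np.uint8)
mask = threshold_dark_objects(patch)
print(mask.sum())  # prints 1: only the single dark pixel survives
```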
Figure 8-16. Thresholding the Bracket
5. Use binary morphological operations to isolate the bracket holes.
Select Processing Functions: Binary»Adv. Morphology.
Select Remove border objects from the list. This option eliminates
particles that touch the edges of the image.
Click the Connectivity button. Notice the effect that pushing the
button has on the process.
Click OK to save the step.
6. Fill in the holes of the bracket to create closed particles.
Select Processing Functions: Binary»Adv. Morphology.
Select Fill holes from the list.
Click OK.
Figure 8-17. Filling the Holes of the Bracket
7. Use erosion and dilation to clean up noise in the image without affecting
the size of the desired particles.
Select Processing Functions: Binary»Basic Morphology.
Select Erode objects from the drop-down listbox at the bottom of
the screen.
Change the Size parameter to 7 × 7 to increase the size of the
structuring element.
Click OK.
Figure 8-18. Using Erosion and Dilation
Select Processing Functions: Binary»Basic Morphology.
Select Dilate objects from the drop-down listbox at the bottom of
the screen.
Change the Size parameter to 7 × 7 to increase the size of the
structuring element.
Click OK.
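An erosion followed by a dilation with the same 7 × 7 structuring element is a morphological open. The sketch below, in plain NumPy rather than the NI functions, shows why the open removes small noise while restoring the size of larger particles (the blob and speck sizes are invented test data):

```python
import numpy as np

def erode(mask, size=7):
    """Binary erosion with a size x size square structuring element:
    a pixel survives only if every pixel under the element is set."""
    pad = size // 2
    padded = np.pad(mask, pad, mode="constant", constant_values=0)
    out = np.ones_like(mask)
    for dy in range(size):
        for dx in range(size):
            out &= padded[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def dilate(mask, size=7):
    """Binary dilation: a pixel is set if any pixel under the element is set."""
    pad = size // 2
    padded = np.pad(mask, pad, mode="constant", constant_values=0)
    out = np.zeros_like(mask)
    for dy in range(size):
        for dx in range(size):
            out |= padded[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

# A 12x12 particle plus a 2x2 noise speck (invented data): the erosion
# removes the speck, and the dilation restores the particle to full size.
img = np.zeros((30, 30), dtype=np.uint8)
img[5:17, 5:17] = 1        # 12x12 particle, survives a 7x7 open
img[25:27, 25:27] = 1      # 2x2 noise, removed by the erosion
opened = dilate(erode(img))
```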
Note Instead of using Erode objects and Dilate objects, you can try using the Particle
Filter. Vision Assistant contains more than 50 different Particle Filter parameters that you
can use to remove unwanted particles from your image. To use the Particle Filter in this
exercise, select an Area filter and set the minimum value to 1000 and the maximum to
3000. You can evaluate the difference in processing time with the Performance Meter
(select Tools»Performance Meter).
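An area-based Particle Filter can be sketched as connected-component labeling followed by a size test. This toy flood-fill version (pure Python/NumPy, not the NI algorithm; the particle sizes are invented) keeps only particles whose pixel area falls within the 1000 to 3000 range:

```python
import numpy as np

def label_particles(mask):
    """4-connected component labeling by iterative flood fill.
    Returns (labels, areas); labels[y, x] == 0 means background."""
    labels = np.zeros(mask.shape, dtype=np.int32)
    areas = {}
    next_label = 0
    for sy, sx in zip(*np.nonzero(mask)):
        if labels[sy, sx]:
            continue
        next_label += 1
        stack = [(sy, sx)]
        labels[sy, sx] = next_label
        count = 0
        while stack:
            y, x = stack.pop()
            count += 1
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = next_label
                    stack.append((ny, nx))
        areas[next_label] = count
    return labels, areas

def area_filter(mask, lo=1000, hi=3000):
    """Keep only particles whose pixel area lies in [lo, hi]."""
    labels, areas = label_particles(mask)
    keep = {k for k, a in areas.items() if lo <= a <= hi}
    return np.isin(labels, list(keep)).astype(np.uint8)

# Three invented particles: only the 1600-pixel one is within range.
img = np.zeros((120, 120), dtype=np.uint8)
img[5:45, 5:45] = 1        # 1600 px: kept
img[5:15, 100:110] = 1     # 100 px: too small, removed
img[55:115, 55:115] = 1    # 3600 px: too large, removed
filtered = area_filter(img)
```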
8. Now that the particles representing the holes of the bracket have been
isolated, determine the size and location of the particles in the image.
Select Processing Functions: Binary»Particle Analysis.
Figure 8-19. Using Particle Analysis on the Bracket
The table that opens shows an array of values that describe various
features of the particles in the image. In this exercise, you only need to
know the area and the location of the centers of the particles.
Click the Select Measurements button to display a list of available
measurements.
Click the Deselect All Pixel Measurements button to disable the
measurements.
Enable the Center of Mass X, Center of Mass Y, and Area
checkboxes.
Click OK.
Click OK in the Particle Analysis Setup to save the step.
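For a single particle, the three selected measurements reduce to simple statistics over the set pixels: the center of mass is the mean pixel coordinate, and the area is the pixel count. A minimal sketch (the 5 × 5 particle is invented; NI Vision offers many more particle measurements than these):

```python
import numpy as np

def particle_measurements(mask):
    """Return (center_x, center_y, area) for a binary particle mask,
    matching the three measurements selected above."""
    ys, xs = np.nonzero(mask)
    return xs.mean(), ys.mean(), xs.size

# Invented particle: a 5x5 square whose top-left corner is at (x=10, y=20).
hole = np.zeros((40, 40), dtype=np.uint8)
hole[20:25, 10:15] = 1
cx, cy, area = particle_measurements(hole)
print(cx, cy, area)  # prints 12.0 22.0 25
```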
9. After you finish adding image processing steps, choose the
controls and indicators that you want to display on the front panel of
the VI.
Click Select Controls.
In the Indicators section, enable the Particle Analysis 1 - Number of
Particles checkbox and the Particle Analysis 1 - Particle
Measurements (Pixels) checkbox.
Click Finish.
10. Add memory management and error handling functions to the VI.
Right-click the Number of Particles output of the Vision Assistant
Express VI and select Create»Indicator.
Right-click the Particle Measurements output of the Vision Assistant
Express VI and select Create»Indicator.
Place a Flat Sequence Structure (Programming»Structures»Flat
Sequence Structure) around everything on the block diagram.
Right-click the Flat Sequence Structure and select Add Frame
After.
Place an IMAQ Dispose VI (Vision and Motion»Vision Utilities»
Image Management»IMAQ Dispose) and Simple Error Handler
VI (Programming»Dialog & User Interface»Simple Error
Handler) in the second frame of the Flat Sequence Structure.
Wire the VI as shown in Figure 8-20.
Figure 8-20. Block Diagram of Locate Bracket Holes VI
11. On the front panel, right-click the Image Display indicator and select
Palettes»Binary. This allows you to view a binary display of the image.
Testing
1. Display the front panel.
2. Run the VI.
The analysis results are displayed in an array. You can expand the array
to view all of the results. The rows represent individual particles. The
columns represent the X-coordinate, Y-coordinate, and Area of the
particle respectively.
End of Exercise 8-3
E. Using the Golden Template
Golden template comparison compares the pixel intensities of an image
under inspection to a golden template. A golden template is an image
containing an ideal representation of an object under inspection. A pixel in
an inspection image is returned as a defect if it does not match the
corresponding pixel in the golden template within a specified tolerance.
Inspection based on golden template comparison is a common vision
application. Use golden template comparison when you want to inspect for
defects and other methods of defect detection are not feasible. To use golden
template comparison, you must be able to acquire an image that represents
the ideal inspection image for your application.
Example applications in which golden template comparison would be
effective include validating a printed label or a logo stamped on a part.
Conceptually, inspection based on golden template comparison is simple:
Subtract an image of an ideal part and another image of a part under
inspection.
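At its core, the comparison is a per-pixel subtraction followed by a tolerance test. A minimal sketch of that idea (the tolerance of 20 gray levels and both images are invented for illustration; the NI implementation adds the corrections described below):

```python
import numpy as np

def defect_image(template, inspection, tolerance=20):
    """Mark a pixel as a defect when it differs from the corresponding
    golden-template pixel by more than `tolerance` gray levels."""
    diff = np.abs(template.astype(np.int16) - inspection.astype(np.int16))
    return (diff > tolerance).astype(np.uint8)

# Invented data: identical images except one "scratch" pixel on the part.
golden = np.full((8, 8), 180, dtype=np.uint8)
part = golden.copy()
part[3, 4] = 90                      # a dark scratch on the part
defects = defect_image(golden, part)
```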
Using simple subtraction to detect flaws does not take into account several
factors about the application that may affect the comparison result. The
following sections discuss these factors and explain how NI Vision
compensates for them during golden template comparison.
Alignment
In most applications, the location of the part in the golden template and the
location of the part in the inspection image differ.
Aligning the part in the template with the part in the inspection image is
necessary for an effective golden template comparison. To align the parts,
you must specify a location, angle, and scale at which to superimpose the
golden template on the inspection image. You can use the position, angle,
and scale defined by other NI Vision functions, such as pattern matching,
geometric matching, or edge detection. Figure 8-21 shows how differing
part locations affect inspection.
Figure 8-21. Misalignment of the Template and Inspection Image
Figure 8-21a shows the golden template. Figure 8-21b shows the inspection
image. The label in the inspection image is identical to the label in the
golden template. However, the part in the inspection image is located
slightly higher and to the right compared to the part in the golden template.
Figure 8-21c shows the resulting defect image. The top and right areas of the
label are detected as dark defects compared to their corresponding pixels in
the template, which are white background pixels. Similarly, the left and
bottom appear as bright defects. The text and logo inside the label also
appear as defects because of the part misalignment.
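The translation part of this alignment can be sketched as shifting the template by the offset that a pattern matching step would report before comparing. This sketch handles only translation (angle and scale corrections are omitted), and the template and offsets are invented:

```python
import numpy as np

def align_template(template, dx, dy):
    """Shift the template by (dx, dy) pixels so it superimposes on the part
    in the inspection image; pixels shifted in from outside become 0."""
    h, w = template.shape
    out = np.zeros_like(template)
    src = template[max(-dy, 0):min(h - dy, h), max(-dx, 0):min(w - dx, w)]
    out[max(dy, 0):min(h + dy, h), max(dx, 0):min(w + dx, w)] = src
    return out

# Invented template: a bright 2x2 "label" near the top-left corner.
template = np.zeros((8, 8), dtype=np.uint8)
template[1:3, 1:3] = 255
aligned = align_template(template, 1, 2)   # part found 1 px right, 2 px down
```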
Perspective Correction
The part under inspection may appear at a different perspective in the
inspection image than the perspective of the part in the golden template.
Golden template comparison corrects for perspective differences by
correlating the template and inspection image at several points. Not only
does this correlation compute a more accurate alignment, but it can also
correct for errors of up to two pixels in the input alignment. Figure 8-22
shows how differing image perspectives affect inspection.
Figure 8-22. Perspective Differences Between the Template and Inspection Image
Figure 8-22a shows the golden template. Figure 8-22b shows the inspection
image. The label in the inspection image is identical to the label in the
golden template. However, the left side of the part in the inspection image is
closer to the camera than the right side of the part, giving the part a warped
perspective appearance.
Figure 8-22c shows the resulting defect image. Although the angles and
scales of the labels are the same, the template is still misaligned because of
the perspective difference.
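The up-to-two-pixel correction can be sketched as a small exhaustive search: try every integer shift within ±2 pixels of the input alignment and keep the one with the smallest difference over the overlap. This is a toy stand-in for the point-wise correlation the text describes, with invented data:

```python
import numpy as np

def refine_shift(template, image, max_shift=2):
    """Search integer shifts within +/-max_shift pixels and return the
    (dx, dy) that minimizes the sum of squared differences on the overlap."""
    h, w = template.shape
    best, best_shift = None, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            t = template[max(-dy, 0):h - max(dy, 0), max(-dx, 0):w - max(dx, 0)]
            i = image[max(dy, 0):h - max(-dy, 0), max(dx, 0):w - max(-dx, 0)]
            err = np.sum((t.astype(np.int64) - i.astype(np.int64)) ** 2)
            if best is None or err < best:
                best, best_shift = err, (dx, dy)
    return best_shift

# Invented pattern; the "acquired" image is the template moved 2 right, 1 down.
template = np.arange(64, dtype=np.int32).reshape(8, 8)
image = np.zeros_like(template)
image[1:, 2:] = template[:7, :6]
best = refine_shift(template, image)   # best == (2, 1)
```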
Histogram Matching
The inspection images may be acquired under different lighting conditions
than the golden template. As a result, the intensities between a pixel in the
golden template and its corresponding pixel in an inspection image may
vary significantly. Figure 8-23 shows how differing pixel intensities affect
inspection.
Figure 8-23. Lighting Differences Between the Template and Inspection Image
Figure 8-23a shows the golden template. Figure 8-23b shows the inspection
image. The label in the inspection image is identical to the label in the
golden template. However, the inspection image was acquired under
dimmer lighting. Although the images are aligned and corrected for
perspective differences, the defect image, shown in Figure 8-23c, displays a
single, large, dark defect because of the shift in lighting intensity.
Golden template comparison normalizes the pixel intensities in the
inspection image using histogram matching. Figure 8-24a shows the
histogram of the golden template, which peaks in intensity near 110 and then
stays low until it saturates at 255. Figure 8-24b shows the histogram of the
inspection image, which peaks in intensity near 50 and peaks again near 200.
Using a histogram matching algorithm, golden template comparison
computes a lookup table to apply to the inspection image. After the lookup
table is applied, the histogram of the corrected image, shown in
Figure 8-24c, exhibits the same general characteristics as the template
histogram. Notice the peak near 110 and the saturation at 255.
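Histogram matching can be sketched as building a lookup table that maps the inspection image's cumulative distribution onto the template's. The toy images below mimic the peaks described above: a dim image with peaks near 50 and 200 is remapped toward 110 and 255. This is an illustration of the classic technique, not the NI algorithm:

```python
import numpy as np

def match_histogram(image, template):
    """Build a lookup table that maps the image's gray-level CDF onto the
    template's CDF, then apply it (classic histogram matching)."""
    i_hist = np.bincount(image.ravel(), minlength=256)
    t_hist = np.bincount(template.ravel(), minlength=256)
    i_cdf = np.cumsum(i_hist) / image.size
    t_cdf = np.cumsum(t_hist) / template.size
    lut = np.searchsorted(t_cdf, i_cdf).clip(0, 255).astype(np.uint8)
    return lut[image]

# Invented images mirroring the text: bright template, dim inspection image.
template = np.full((10, 10), 110, dtype=np.uint8)
template[:2, :] = 255                 # saturated region
inspection = np.full((10, 10), 50, dtype=np.uint8)
inspection[:2, :] = 200               # same region under dim lighting
matched = match_histogram(inspection, template)
```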
Figure 8-24. Histogram Matching in Golden Template Comparison
Ignoring Edges
Even after alignment, perspective correction, and histogram matching, the
defect image may return small defects even when the part under inspection
seems identical to the golden template. These small defects are usually
confined to edges, or sharp transitions in pixel intensities.
Figure 8-25a shows the golden template. Figure 8-25b shows the inspection
image. The label in the inspection image is almost identical to the label in
the golden template. Figure 8-25c shows insignificant defects resulting
from a small residual misalignment or from quantization errors in the image
acquisition. Although these minor variations do not affect the quality of the
inspected product, a similarly sized scratch or smudge not on an edge would
be a significant defect.
Figure 8-25. Small Edge Differences Between the Template and Inspection Image
To distinguish minor edge defects from significant defects, you can define
edge areas for golden template comparison to ignore using the NI Vision
Template Editor. Differences in areas you want to ignore are not returned as
defects. You can preview different edge thicknesses in the training interface,
and optionally change edge thickness during runtime.
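The effect of an ignore-edges mask can be sketched as building an edge band from the template's gradient, widening it by the chosen thickness, and clearing any defects that fall inside the band. The gradient threshold of 10 and both images below are invented for illustration; the NI Vision Template Editor handles this for you:

```python
import numpy as np

def ignore_edge_defects(defects, template, thickness=1):
    """Zero out defects that fall on (or near) template edges. The edge
    band is a simple gradient test dilated by `thickness` pixels."""
    gy, gx = np.gradient(template.astype(float))
    edges = (np.hypot(gx, gy) > 10).astype(np.uint8)
    for _ in range(thickness):          # crude dilation of the edge band
        grown = edges.copy()
        grown[1:, :] |= edges[:-1, :]
        grown[:-1, :] |= edges[1:, :]
        grown[:, 1:] |= edges[:, :-1]
        grown[:, :-1] |= edges[:, 1:]
        edges = grown
    return (defects.astype(bool) & ~edges.astype(bool)).astype(np.uint8)

# Invented template with a vertical step edge at column 4.
template = np.zeros((8, 8))
template[:, 4:] = 200
defects = np.zeros((8, 8), dtype=np.uint8)
defects[2, 4] = 1                     # on the edge: should be ignored
defects[6, 0] = 1                     # far from any edge: a real defect
cleaned = ignore_edge_defects(defects, template)
```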
Using Defect Information for Inspection
Golden template comparison isolates areas in the inspection image that
differ from the golden template. To use the defect information in a machine
vision application, you need to analyze and process the information using
other NI Vision functions. Examples of functions you can use to analyze and
process the defect information include particle filters, binary morphology,
particle analysis, and binary particle classification.
Notes
National Instruments Corporation 9-1 LabVIEW Machine Vision and Image Processing
9
Identifying Images
In addition to taking measurements of parts under inspection, you can also
identify parts using classification, optical character recognition (OCR), and
barcode reading. In this lesson, you will learn about classification, how to
read text and characters in an image using OCR, and about 2D barcodes in
images.
Topics
A. Binary Particle Classification
B. Optical Character Recognition
C. 2D Barcode Functions
Lesson 9 Identifying Images
LabVIEW Machine Vision and Image Processing 9-2 ni.com
A. Binary Particle Classification
Binary particle classification identifies an unknown binary sample
by comparing a set of its significant features to a set of features that
conceptually represent classes of known samples. Classification involves
two phases: training and classifying.
Training is a phase during which you teach the machine vision software the
types of samples you want to classify during the classifying phase. You can
train any number of samples to create a set of classes, which you later
compare to unknown samples during the classifying phase. You store the
classes in a classifier file. Training might be a one-time process, or it might
be an incremental process you repeat to add new samples to existing classes
or to create several classes, thus broadening the scope of samples you want
to classify.
Classifying is a phase during which your custom machine vision application
classifies an unknown sample in an inspection image into one of the classes
you trained. The classifying phase classifies a sample according to how
similar the sample features are to the same features of the trained samples.
Typical applications involving particle classification include the following:
Sorting: Sorts samples of varied shapes. For example, a particle
classifier can sort different mechanical parts on a conveyor belt into
different bins. Example outputs of a sorting or identification application
could be user-defined labels of certain classes.
Inspection: Inspects samples by assigning each sample an
identification score and then rejecting samples that do not closely match
members of the training set. Example outputs of a sample inspection
application could be Pass or Fail.
Ideal Images for Classification
Images of samples acquired in a backlit environment are ideal for particle
classification. Figures 9-1 and 9-2 show example images of backlit samples.
Figure 9-1. Mechanical Parts
Figure 9-2. Animal Crackers
Images that contain several unconnected parts or are grayscale and have an
internal pattern are not ideal for particle classification. Figure 9-3 and 9-4
show example images with unconnected parts and grayscale shapes with
internal patterns.
Figure 9-3. Binary Shapes Composed of Several Unconnected Parts
Figure 9-4. Grayscale Shapes with Internal Patterns
General Classification Procedure
Consider an example application whose purpose is to sort nuts and bolts.
The classes in this example are Nut and Bolt.
Before you can train a classification application, you must determine a set
of features, known as a feature vector, on which to base the comparison of
the unknown sample to the classes of known samples. Features in the feature
vector must uniquely describe the classes of known samples. An appropriate
feature vector for the example application would be {Heywood Circularity,
Elongation Factor}.
The following table shows good feature values for the nuts and bolts shown
in Figure 9-5. The closer the shape of a sample is to a circle, the closer its
Heywood circularity factor is to 1. The more elongated the shape of a
sample, the higher its elongation factor.
The class Nut is characterized by a strong circularity feature and a weak
elongation feature. The class Bolt is characterized by a weak circularity
feature and a strong elongation feature.
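The Heywood circularity factor divides the particle's perimeter by the circumference of a circle with the same area, so a perfect circle scores exactly 1 and elongated shapes score higher. A quick sketch using analytic perimeters and areas (the radius and rectangle dimensions are arbitrary examples):

```python
import math

def heywood_circularity(perimeter, area):
    """Heywood circularity factor: particle perimeter divided by the
    circumference of a circle with the same area. Exactly 1 for a circle."""
    return perimeter / (2.0 * math.sqrt(math.pi * area))

r = 10.0
circle = heywood_circularity(2 * math.pi * r, math.pi * r * r)  # ~1.0
rect = heywood_circularity(2 * (40 + 10), 40 * 10)   # 4:1 rectangle, > 1
```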
After you determine a feature vector, gather examples of the samples you
want to classify. A robust classification system contains many example
samples for each class. All the samples belonging to a class should have
similar feature vector values to prevent mismatches.
Table 9-1. Features of Nuts and Bolts
Class
Average Heywood
Circularity
Average Elongation
Factor
Nut 1.109 1.505
Bolt 1.914 3.380
After you have gathered the samples, train the classifier by computing the
feature vector values for all of the samples. Then, you can begin to classify
samples by calculating the same feature vector for the unknown sample and
comparing those values to the feature vector values of the known samples.
The classifier assigns the unknown sample a class name based on how
similar its feature values are to the values of a known sample.
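The comparison step can be sketched as a nearest-neighbor test in feature space, here using the average values from Table 9-1 as the trained feature vectors. The NI classifier supports more sophisticated methods and distance metrics; this is only the core idea:

```python
import math

# Trained feature vectors {Heywood circularity, elongation factor} per class,
# taken from the averages in Table 9-1.
CLASSES = {"Nut": (1.109, 1.505), "Bolt": (1.914, 3.380)}

def classify(sample):
    """Assign the class whose trained feature vector is nearest
    (Euclidean distance) to the sample's feature vector."""
    return min(CLASSES, key=lambda name: math.dist(sample, CLASSES[name]))

print(classify((1.2, 1.6)))   # round-ish, short sample: prints Nut
print(classify((1.8, 3.1)))   # elongated sample: prints Bolt
```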
Figure 9-5a shows a binary image of nuts and bolts. Figure 9-5b shows these
samples classified by circularity and elongation.
Figure 9-5. Classification of Nuts and Bolts
B. Optical Character Recognition
NI Vision provides machine vision functions you can use in an application
to perform optical character recognition (OCR). OCR is the process in
which the machine vision software reads text and/or characters in an image.
OCR consists of the following two parts:
An application for training characters
Tools such as the NI Vision Assistant or libraries of LabVIEW VIs. Use
these tools to create a machine vision application that analyzes an image
and compares objects in that image to the characters you trained to
determine if they match. The machine vision application returns the
matching characters that it read.
(Figure 9-5 callouts: 1 = Circularity, 2 = Elongation, 3 = Bolts, 4 = Nuts)
Training characters is the process in which you teach the machine vision
software the types of characters and/or patterns you want to read in the
image during the reading procedure. You can use OCR to train any number
of characters, creating a character set. The set of characters is later compared
with objects during the reading and verifying procedures. You store the
character set in a character set file. Training might be a one-time process, or
it might be a process you repeat several times, creating several character sets
to broaden the scope of characters you want to detect in an image.
Reading characters is the process by which the machine vision application
you create analyzes an image to determine if the objects match the
characters you trained. The machine vision application reads characters
in an image using the character set that you created when you trained
characters.
Verifying characters is a process in which the machine vision application
you create inspects an image to verify the quality of the characters it read.
The application verifies characters in an image using the reference
characters of the character set you created during the training process.
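Reading and verifying can be sketched with tiny bitmap templates: reading picks the trained character whose reference bitmap best matches a segmented object, and verifying checks how good that best match is. The 3 × 3 bitmaps below are invented stand-ins for a real trained character set:

```python
import numpy as np

# Toy 3x3 "bitmaps" standing in for trained reference characters.
CHARACTER_SET = {
    "T": np.array([[1, 1, 1], [0, 1, 0], [0, 1, 0]], dtype=np.uint8),
    "L": np.array([[1, 0, 0], [1, 0, 0], [1, 1, 1]], dtype=np.uint8),
}

def read_character(obj):
    """Return (best_char, score), where score is the fraction of matching
    pixels: reading picks the best class, verifying checks its quality."""
    scores = {c: np.mean(obj == ref) for c, ref in CHARACTER_SET.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]

# A noisy "T" (one flipped pixel) still reads as T, with a sub-1.0 score.
noisy_t = CHARACTER_SET["T"].copy()
noisy_t[2, 2] = 1
char, score = read_character(noisy_t)
```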
Using Optical Character Recognition
Typically, machine vision OCR is used in automated inspection applications
to identify or classify components. For example, you can use OCR to detect
and analyze the serial number on an automobile engine that is moving along
a production line. Using OCR in this instance helps you identify the part
quickly, which in turn helps you quickly select the appropriate inspection
process for the part.
You can use OCR in a wide variety of other machine vision applications,
such as the following:
Inspecting pill bottle labels and lot codes in pharmaceutical applications
Verifying wafers and IC package codes in semiconductor applications
Controlling the quality of stamped machine parts
Sorting and tracking mail packages and parcels
Reading alphanumeric characters on automotive parts
Exercise 9-1 Concept: Training an OCR Character Set
Goal
Use the NI OCR Training Interface to create a new character set file and use
the character set file to read a set of test labels.
Scenario
Optical character recognition (OCR) applications are prevalent in many
industries. NI provides a simple, powerful interface for configuring and
executing these applications.
Description
In this exercise you will develop a character set used to identify text in a set
of images. You will create a VI that refers to your character set file to read
text in new images.
Implementation
Part A: Creating a Character Set File
1. Launch the NI OCR Training Interface by clicking Start»
All Programs»National Instruments»Vision»OCR Training.
Figure 9-6. NI OCR Training Interface
2. Click File»Open Images. Navigate to <Program Files>\
National Instruments\Vision\Images\OCR Tutorial and
open the following image files:
NIOCRExample1.tif
NIOCRExample2.tif
NIOCRExample3.tif
NIOCRExample4.tif
NIOCRExample5.tif
Tip You can select multiple files by pressing the <Ctrl> key and clicking the files
you want to open.
3. Select the text in the image and configure the OCR Training Interface.
Draw an ROI around the text in NIOCRExample1.tif.
Tip Because the position of the text will vary from image to image, ensure that the ROI
you draw is large enough to encompass possible locations of characters in the other
images you analyze.
Figure 9-7. Drawing an ROI Around the Text in the Image
Notice the Text Read textbox. Text Read displays recognized
characters. Because you have not trained any characters, Text Read
displays a substitution character for each of the segmented objects in
the ROI.
In the Threshold tab, enable the Reject Particles Touching the ROI
checkbox. Enabling this parameter removes any sections of the
barcode encompassed in the ROI.
Set the Remove Particles (Erosions) control to 1.
Setting this control to 1 performs one erosion on the image. Recall
from Lesson 8 that an erosion decreases the size of the objects in the
image by removing a layer of pixels along the boundary of the
particle. Performing an erosion helps to separate the characters in the
image from the background of the image.
4. Train the correct characters.
Select the Train All Characters option button.
When you train all characters, every character is added to the
character set file. If you train more than one instance of the same
character, both instances will be added to the character set file.
Enter BACE30 in the Correct String textbox.
Click the Train button to train the correct characters. The characters
you entered in Correct String display in the Text Read textbox.
Figure 9-8. Characters Display in the Text Read Textbox
Click the Next Image button to navigate to NIOCRExample2.tif.
Enter B8F1E9 in Correct String and click Train.
Click the Next Image button to navigate to NIOCRExample3.tif.
Click the Train Incorrect Characters option button.
Train incorrect characters when the ROI you draw contains
characters you already trained or when the OCR Training Interface
displays the wrong character value for a segmented object. You
already analyzed images that contain the characters B, F, and 0. For
this exercise, you will not train any more instances of these
characters.
Although you must enter character values for all segmented objects
in the Correct String textbox, the OCR Training Interface trains
only objects that do not have a match or have an incorrect match in
Text Read.
Enter B70DF4 in Correct String and click Train.
Figure 9-9. Training Incorrect Characters
Click the Next Image button to navigate to NIOCRExample4.tif.
The two instances of the letter A are incorrectly segmented in
NIOCRExample4.tif. Because the character size and spacing in
this image is different from the character size and spacing in the
previous three images, the OCR Training Interface incorrectly
segments the two instances of the letter A.
To correct the character size and spacing, click the Results tab to
view statistics for the letters. The width of the fourth character,
which is the width of the combined As, is about 100 pixels. Also
take note of the height of the characters.
Figure 9-10. Viewing Character Information in the Results Tab
Click the Size & Spacing tab and enter 50 (100 divided by 2) as the
Max for the Bounding Rect Width. Both letter As will be correctly
segmented. Enter 50 as the Max for the Bounding Rect Height.
Enter B85AA6 in Correct String and click Train.
Click the Next Image button to navigate to NIOCRExample5.tif.
Enter B8CE72 in Correct String and click Train.
5. Examine the character set you created.
Click the Edit Character Set File tab to view your character set.
In this tab, you can refine your character set by deleting any
unwanted characters.
Figure 9-11. Editing the Character Set File
6. Save your character set file.
Click File»Save Character Set File and save your character set file
as My Character Set.abc in the C:\Exercises\LabVIEW
Machine Vision directory. You can now use the character set file
to recognize characters in this set.
7. Close the OCR Training Interface.
Part B: Create a VI to Read Text
1. Open a blank VI.
2. Save the VI as OCR.vi in the <Exercises>\LabVIEW Machine
Vision directory.
3. Acquire an image.
Place a Vision Acquisition Express VI (Vision and Motion»Vision
Express»Vision Acquisition) on the block diagram.
In the NI Vision Acquisition Express configuration window, select
Simulated Acquisition»Folder of Images in the left-hand pane.
Click Next.
Select Continuous Acquisition with inline processing for the
acquisition type.
Set Acquire Image Type to Acquire Most Recent Image.
Click Next.
Click the browse button next to the Image Path textbox.
Navigate to <Program Files>\National Instruments\
Vision\Images\OCR Tutorial\NIOCRExample1.tif and
click OK.
Enable the Cycle Through Folder of Images checkbox.
Click Test to test the acquisition. Click Stop when finished.
Click Finish to finish building the express VI.
On the front panel, right-click the Image Display indicator and select
Snapshot.
4. Use the character set file to read a set of test labels.
Place the Vision Assistant Express VI (Vision and Motion»Vision
Express»Vision Assistant) on the block diagram.
In the NI Vision Assistant configuration window, select File»Open
Image.
Select the image NIOCRExample1.tif, located in the <Program
Files>\National Instruments\Vision\Images\
OCR Tutorial directory and click Open.
Select Processing Functions: Identification»OCR/OCV.
5. Load your character set file.
In the OCR/OCV Setup window, click the browse button.
Navigate to your character set file, located at <Exercises>\
LabVIEW Machine Vision\My Character Set.abc.
Draw an ROI around the text in the image. The recognized text
should appear in the image and in the Text Read textbox.
Note The ROI you draw in Vision Assistant is the ROI that will be used on each image
in the VI you are about to create. Ensure that the ROI you draw is large enough to
encompass possible locations of characters in the other images you analyze. Refer to the
following image for an example of the ROI size.
Figure 9-12. Draw an ROI Large Enough to Encompass Possible Characters
Click OK.
Click Select Controls.
In the Indicators section, enable the OCR/OCV 1 - String
checkbox.
Click Finish.
6. Add image management and error handling code to the VI.
Right-click the String (OCR/OCV) output of the Vision Assistant
Express VI and select Create»Indicator.
Place the Unbundle By Name function (Programming»Cluster,
Class, & Variant»Unbundle By Name) on the block diagram
inside the While Loop.
Place the Or function (Programming»Boolean»Or) on the block
diagram inside the While Loop.
Place the IMAQ Dispose VI (Vision and Motion»Vision Utilities»
Image Management»IMAQ Dispose) on the block diagram to the
right of the While Loop.
Place the Simple Error Handler VI (Programming»Dialog and
User Interface»Simple Error Handler) on the block diagram to
the right of the IMAQ Dispose VI.
Wire the block diagram as shown in Figure 9-13.
Figure 9-13. OCR VI Block Diagram
7. Save the VI.
8. Close the VI when finished.
End of Exercise 9-1
C. 2D Barcode Functions
The term 2D barcode refers to both matrix barcodes and multi-row
barcodes. Matrix barcodes encode data based on the position of square,
hexagonal, or round cells within a matrix. Multi-row barcodes are barcodes
that consist of multiple stacked rows of barcode data. NI Vision currently
supports the PDF417, Data Matrix, QR Code, and Micro QR Code formats.
The process used to recognize 2D barcodes consists of two phases:
• A coarse locating phase, during which the user specifies an ROI in the image, which helps localize the region occupied by the 2D barcode. This phase is optional, but it can increase the performance of the second phase by reducing the size of the search region.
• A locating and decoding phase, during which the software searches the ROI for one or more 2D barcodes and decodes each located 2D barcode.
2D Barcode Algorithm Limits
The following factors can cause errors in the search and decoding phase:
• Very low resolution of the image.
• Very high horizontal or vertical light drift.
• Contrast along the bars of the image.
• High level of noise or blurring.
• Inconsistent printing or stamping techniques, such as misaligned barcode elements, inconsistent element size, or elements with inconsistent borders.
• In PDF417 barcodes, a quiet zone that is too small or contains too much noise.
Data Matrix
A Data Matrix code is a matrix built on a square or rectangular grid with a
finder pattern around the perimeter of the matrix. Each cell of the matrix
contains a single data cell. The cells can be either square or circular.
Locating and decoding Data Matrix codes requires a minimum cell size
of 2.5 pixels. Locating and decoding Data Matrix codes also requires a quiet
zone of at least one cell width around the perimeter of the code. However, a
larger quiet zone increases the likelihood of successful location. Each
symbol character value is encoded in a series of data cells called a code
word.
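The 2.5-pixel minimum translates directly into a simple resolution check on your imaging setup. The following Python sketch is illustrative only; the function names and the pixels-per-millimeter parameterization are our own, not part of NI Vision:

```python
def pixels_per_cell(cell_size_mm, resolution_px_per_mm):
    """Number of image pixels that cover one barcode cell."""
    return cell_size_mm * resolution_px_per_mm

def meets_data_matrix_minimum(cell_size_mm, resolution_px_per_mm,
                              minimum_px=2.5):
    """Check the 2.5-pixel minimum cell size required for locating
    and decoding Data Matrix codes."""
    return pixels_per_cell(cell_size_mm, resolution_px_per_mm) >= minimum_px

# A 0.25 mm cell imaged at 12 px/mm covers 3.0 pixels: decodable.
# The same cell at 8 px/mm covers only 2.0 pixels: below the minimum.
print(meets_data_matrix_minimum(0.25, 12))  # True
print(meets_data_matrix_minimum(0.25, 8))   # False
```

A check like this, run before deployment, tells you whether the camera and lens combination can resolve the smallest cell you expect to read.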
Data Matrix codes use one of two error checking and correction (ECC)
schemes. Data Matrix codes that use the ECC schemes 000 to 140 are based
on the original specification. These codes use a convolutional error correction
scheme and use a less efficient data packing mechanism that often requires
only encoding characters from a particular portion of the ASCII character
set. Data Matrix codes that use the ECC 200 scheme use a Reed-Solomon
error correction algorithm and a more efficient data packing mechanism.
The ECC 200 scheme also allows for the generation of multiple connected
matrices, which enables the encoding of larger data sets.
Figure 9-14. Data Matrix Code
Quality Grading
NI Vision can assess the quality of a Data Matrix barcode based on how well
the barcode meets certain parameters. For each parameter, NI Vision returns
one of the following letter grades: A, B, C, D, or F. An A indicates that the
barcode meets the highest standard for a particular parameter. An F
indicates that the barcode is of the lowest quality for that parameter.
Data Matrix barcodes are graded on the following parameters:
• Decode: Tests whether the Data Matrix features are correct enough to
be readable when the barcode is optimally imaged.
• Symbol Contrast: Tests whether the light and dark pixels in the image
are sufficiently and consistently distinct throughout the barcode.
Figure 9-15 shows a Data Matrix barcode with a symbol contrast value
of 8.87%, which returns a grade of F.
Figure 9-15. Data Matrix with Poor Contrast
Figure 9-14 legend: 1 Quiet Zone, 2 Finder Pattern, 3 Data Cell
• Print Growth: Tests the extent to which dark or light markings
appropriately fill their cell boundaries. Figure 9-16 shows a Data Matrix
barcode with a print growth value of 0.79, which returns a grade of C.
Figure 9-16. Data Matrix with Print Growth
• Axial Nonuniformity: Measures and grades the spacing of the cell
centers. Figure 9-17 shows a Data Matrix barcode with an axial
nonuniformity value of 0.2714, which returns a grade of F.
Figure 9-17. Data Matrix with Axial Nonuniformity
• Unused Error Correction: Tests the extent to which regional or spot
damage in the symbol has eroded the reading safety margin that the error
correction provides. Figure 9-18 shows a Data Matrix barcode with an
unused error correction value of 0.00, which returns a grade of F.
Figure 9-18. Data Matrix with Little Unused Error Correction
• Overall Symbol Grade: The lowest of the grades from the other symbol
parameters.
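Because the overall grade is simply the worst of the individual parameter grades, it can be computed with a single comparison over the A-F scale. This Python sketch is only illustrative; the names are our own:

```python
# Letter grades ordered from best to worst, matching the A-F scale
# used for Data Matrix quality grading (note there is no E grade).
GRADE_ORDER = "ABCDF"

def overall_symbol_grade(parameter_grades):
    """The overall grade is the lowest (worst) of the individual
    parameter grades."""
    return max(parameter_grades, key=GRADE_ORDER.index)

print(overall_symbol_grade(["A", "B", "C"]))       # C
print(overall_symbol_grade(["A", "F", "B", "A"]))  # F
```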
PDF417
A PDF417 code is a multi-row barcode in which each data element is
encoded in a code word. Each row consists of a start pattern, a left row
indicator code word, one to 30 data code words, a right row indicator code
word, and a stop pattern. Each code word consists of 17 cells and encodes
four bars and four spaces. Each bar and each space has a maximum width of
six cells.
Locating and decoding PDF417 codes requires a minimum cell size of
1.5 pixels and a minimum row height of 4.5 pixels. Locating and decoding
PDF417 codes also requires a quiet zone of at least one cell width around
the perimeter of the code. However, a larger quiet zone increases the
likelihood of successful location.
Figure 9-19. PDF417 Code
QR Code
A QR Code is a matrix built on a square grid with a set of finder patterns
located at three corners of the matrix. Finder patterns consist of alternating
black and white square rings. The size of the matrix can range from a
minimum size of 21 × 21 cells up to a maximum size of 177 × 177 cells. Each cell of
the matrix contains a single data cell. Matrix cells are square and represent
a single binary 0 or 1.
Locating and decoding QR Codes requires a minimum cell size of
2.5 pixels. Locating and decoding QR Codes also requires a quiet zone
of at least one cell width around the perimeter of the code. However, a larger
quiet zone increases the likelihood of successful location. Each symbol
character value is encoded in a unit called a code word consisting of 8 cells
or one byte of data.
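The 21 × 21 to 177 × 177 range corresponds to versions 1 through 40 of the symbology, with each version adding four cells per side. The version formula in this Python sketch comes from the QR Code specification rather than the course text:

```python
def qr_matrix_size(version):
    """Side length, in cells, of a standard QR Code symbol.

    Version 1 is 21 x 21 cells; each successive version adds
    4 cells per side, ending at 177 x 177 cells for version 40.
    """
    if not 1 <= version <= 40:
        raise ValueError("standard QR Code versions range from 1 to 40")
    return 17 + 4 * version

print(qr_matrix_size(1))   # 21
print(qr_matrix_size(40))  # 177
```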
Figure 9-19 legend: 1 Quiet Zone, 2 Start Pattern, 3 Left Row Indicator, 4 Data Code Words, 5 Right Row Indicator, 6 Stop Pattern
QR Codes have built-in error checking and correction (ECC) using the
standard Reed-Solomon scheme for error correction. The amount of error
correction capability of each code is selectable during the printing process.
In general, the QR Code can correct for anywhere from 7% to 30% of error
depending upon the selection made at print time.
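The 7% to 30% range maps onto the four standard correction levels defined by the QR Code specification (the levels themselves are not named in the course text). A Python sketch of the nominal capacities:

```python
# Nominal recovery capacity of the four QR Code error correction
# levels, from the QR Code specification: the course text cites
# only the overall 7%-30% range.
ECC_RECOVERY = {"L": 0.07, "M": 0.15, "Q": 0.25, "H": 0.30}

def correctable_codewords(total_codewords, level):
    """Rough estimate of how many damaged code words can still be
    corrected at the given error correction level."""
    return round(total_codewords * ECC_RECOVERY[level])

print(correctable_codewords(100, "L"))  # 7
print(correctable_codewords(100, "H"))  # 30
```

Choosing a higher level at print time trades data capacity for robustness against dirt, scratches, and printing defects.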
Figure 9-20. QR Code
Micro QR Code
A Micro QR Code is a smaller version of the standard QR Code. Micro QR
Codes have only one finder pattern located at one corner of the matrix. The
size of a Micro QR Code can range from a minimum size of 11 × 11 cells up to a
maximum size of 17 × 17 cells.
Figure 9-21. Micro QR Code
Figure 9-20 and Figure 9-21 legend: 1 Quiet Zone, 2 Finder Pattern, 3 Data Cell
A
Using Color Tools
This appendix teaches you the basics of color image representation and how
to use the color matching, color location, and color pattern matching tools
in Vision Assistant.
Topics
A. Introduction to Color
B. Using Color Tools
Appendix A Using Color Tools
LabVIEW Machine Vision and Image Processing A-2 ni.com
A. Introduction to Color
Color is the wavelength of the light we receive in our eyes when we look at
an object. In theory, the color spectrum is infinite. The human eye, however,
can see only a small portion of this spectrum: the portion that starts at the
red edge of infrared light (the longest wavelength) and ends at the blue edge
of ultraviolet light (the shortest wavelength). This continuous spectrum is
called the visible spectrum.
White light is a combination of all colors. The spectrum of white light is
continuous and goes from ultraviolet to infrared in a smooth transition. You
can represent a good approximation of white light by selecting a few
reference colors and weighting them appropriately. The most common way
to represent white light is to use three reference components, such as red,
green, and blue (R, G, and B primaries). You can simulate most colors of the
visible spectrum using these primaries. For example, video projectors use
red, green, and blue light generators, and RGB cameras use red, green, and
blue sensors.
The perception of a color depends on many factors, including the following:
• Hue, which is the perceived dominant color. Hue depends directly on the
wavelength of a color.
• Saturation, which is dependent on the amount of white light present in
a color. Pastels typically have a low saturation while very rich colors
have a high saturation. For example, pink typically has a red hue but a
low saturation.
• Luminance, which is the brightness information in the video picture.
The luminance signal amplitude varies in proportion to the brightness of
the video signal and corresponds exactly to the monochrome picture.
• Intensity, which is the brightness of a color and is usually expressed as
light or dark. For example, orange and brown may have the same hue
and saturation, but orange has a greater intensity than brown.
Image Representations
Color images can be represented in several different color spaces. These
color spaces can contain all color information from the image or they can
consist of just one aspect of the color information, such as hue or luminance.
The most common image representation is 32-bit RGB format. In this
representation, the three 8-bit color planes (red, green, and blue) are
packed into an array of 32-bit integers. This representation is useful for
displaying the image on your monitor. The 32-bit integer is organized as
[unused | RED | GREEN | BLUE],
where the high-order byte is not used, and blue is the low-order byte.
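The byte layout described above can be expressed with simple bit shifts. This Python sketch mirrors the packing, with the high-order byte left at zero:

```python
def pack_rgb(red, green, blue):
    """Pack 8-bit red, green, and blue values into one 32-bit integer.
    The high-order byte is unused and blue is the low-order byte."""
    return (red << 16) | (green << 8) | blue

def unpack_rgb(pixel):
    """Split a packed 32-bit RGB pixel back into its 8-bit planes."""
    return (pixel >> 16) & 0xFF, (pixel >> 8) & 0xFF, pixel & 0xFF

pixel = pack_rgb(0x12, 0x34, 0x56)
print(hex(pixel))         # 0x123456
print(unpack_rgb(pixel))  # (18, 52, 86)
```

Extracting a single plane, such as red, is just one of these shift-and-mask operations applied to every pixel in the array.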
HSL is a 32-bit color space based on hue, saturation, and luminance. This
color space is used in image processing applications.
Each color plane can be returned individually. The red, green, or blue plane
is extracted from the RGB image and represented as an array of 8-bit
integers. The hue, saturation, luminance, and intensity planes also can be
returned individually if you want to analyze the image.
B. Using Color Tools
Color Matching
Color matching can be used for applications such as color identification,
color inspection, color object location, and other applications that require the
comparison of color information to make decisions.
How Color Matching Works
Color matching is performed in two steps. In the first step, the machine
vision software learns a reference color distribution. In the second step,
the software compares color information from other images to the reference
image and returns a score as an indicator of similarity.
Color Identification
Color identification identifies an object by comparing the color information
in the image of the object to a database of reference colors that correspond
to predefined object types. The object is assigned a label corresponding to
the object type with the closest reference color in the database. Use color
matching to first learn the color information of all the predefined object
types. The color spectrums associated with each of the predefined object
types become the reference colors. Your machine vision application then
uses color matching to compare the color information in the image of the
object to the reference color spectrums. The object receives the label of the
color spectrum with the highest match score.
Several industries use color identification. One example is the automotive
industry, where color identification helps verify the presence of the correct
components during the assembly process.
Color Inspection
Color inspection detects simple flaws such as missing or misplaced color
components and defects on the surfaces of color objects. You can use color
matching for these applications if known regions of interest predefine the
objects or areas to be inspected in the image. You can define these regions, or
they can be the output of some other machine vision tool, such as pattern
matching used to locate the components to be inspected.
The layout of the fuses in junction boxes in automotive assemblies is easily
defined by regions of interest. Color matching determines if all of the fuses
are present and in the correct locations. Color matching compares the color
of the fuse in each region to the color that is expected to be in that region.
Learning Color Distribution
The machine vision software learns a color distribution by generating a
color spectrum. You provide the software with an image or regions in the
image containing the color information that you want to use as a reference
in your application. The machine vision software then generates a color
spectrum based on the information you provide. The color spectrum
becomes the basis of comparison during the matching phase.
Comparing Color Distributions
During the matching phase, the color spectrum obtained from the target
image or region is compared to the reference color spectrum taken during
the learning step. A match score is computed based on the similarity
between these two color spectrums using the Manhattan distance between
two vectors. The match score, ranging from 0 to 1,000, defines the similarity
between the color spectrums. A score of zero represents no similarity
between the color spectrums, while a score of 1,000 represents a perfect
match.
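NI Vision does not publish the exact scoring formula, so the following Python sketch only illustrates the idea described above: normalize the two color spectrums, take their Manhattan distance, and map it onto the 0 to 1,000 scale.

```python
def normalize(spectrum):
    """Scale a color spectrum (histogram of color bins) to unit sum."""
    total = float(sum(spectrum))
    return [bin_value / total for bin_value in spectrum]

def match_score(reference, target):
    """Convert the Manhattan distance between two normalized spectra
    into a 0-1000 similarity score. The distance between two unit-sum
    histograms is at most 2, so the score is scaled accordingly."""
    distance = sum(abs(r - t)
                   for r, t in zip(normalize(reference), normalize(target)))
    return round(1000 * (1 - distance / 2.0))

spectrum = [10, 20, 30, 40]
print(match_score(spectrum, spectrum))        # 1000 (perfect match)
print(match_score(spectrum, [0, 0, 0, 100]))  # 400
```

Because the spectra are normalized before comparison, a uniformly brighter or dimmer copy of the same color distribution still scores 1,000, which is one reason spectrum-based matching tolerates uniform lighting changes.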
Color Location
Use color location to quickly locate known color regions in an image. With
color location, you create a model or template that represents the colors that
you are searching for. Your machine vision application then searches for the
model in each acquired image, and calculates a score for each match. The
score indicates how closely the color information in the model matches the
color information in the found regions.
When to Use
Color can simplify a monochrome visual inspection problem by improving
contrast or separating the object from the background. Color location
algorithms provide a quick way to locate regions in an image with specific
colors.
Use color location when your application has the following characteristics:
• Requires the location and the number of regions in an image with their
specific color information
• Relies on the cumulative color information in the region, instead of how
the colors are arranged in the region
• Does not require the orientation of the region
• Does not require the location with subpixel accuracy
The color location tools in NI Vision measure the similarity between an
idealized representation of a feature, called a model, and a feature that may
be present in an image. A feature for color location is defined as a region in
an image with specific colors.
Color location is useful in many applications. Color location provides your
application with information about the number of instances and locations of
the template within an image. Use color location in the following general
applications: inspection, identification, and sorting.
Inspection
Inspection detects flaws such as missing components, incorrect printing,
and incorrect fibers on textiles. A common pharmaceutical inspection
application is inspecting a blister pack for the correct pills. Blister pack
inspection involves checking that all the pills are of the correct type, which
is easily performed by checking that all the pills have the same color
information. Because your task is to determine if there are a fixed number
of the correct pills in the pack, color location is a very effective tool.
Identification
Identification assigns a label to an object based on its features. In many
applications, the color-coded identification marks are placed on the objects.
In these applications, color matching locates the color code and identifies
the object. In a spring identification application, different types of
springs are identified by a collection of color marks painted on the coil.
If you know the different types of color patches that are used to mark the
springs, color location can find which color marks appear in the image.
You then can use this information to identify the type of spring.
Sorting
Sorting separates objects based on attributes such as color, size, and shape.
In many applications, especially in the pharmaceutical and plastic
industries, objects are sorted according to color, such as pills and plastic
pellets. For example, using color templates of the different candies in an
image, color location quickly locates the positions of the different candies.
What to Expect from a Color Location Tool
In automated machine vision applications, the visual appearance of
inspected materials or components changes because of factors such as
orientation of the part, scale changes, and lighting changes. The color
location tool maintains its ability to locate the reference patterns despite
these changes. The color location tool provides accurate results during the
following common situations: pattern orientation and multiple instances,
ambient lighting conditions, and blur and noise conditions.
Pattern Orientation and Multiple Instances
A color location tool locates the reference pattern in an image even if the
pattern in the image is rotated or scaled. When a pattern is rotated or slightly
scaled in the image, the color location tool can detect the following:
• The pattern in the image
• The position of the pattern in the image
• Multiple instances of the pattern in the image (if applicable)
Because color location only works on the color information of a region and
does not use any kind of shape information from the template, it does not
find the angle of the rotation of the match. It only locates the position of a
region in the image whose size matches a template containing similar color
information.
Ambient Lighting Conditions
The color location tool finds the reference pattern in an image under
conditions of uniform changes in the lighting across the image. Color
location also finds patterns under conditions of non-uniform light changes,
such as shadows.
Blur and Noise Conditions
Color location finds patterns that have undergone some transformation
because of blurring or noise. Blurring usually occurs because of incorrect
focus or depth of field changes.
Color Pattern Matching
Use color pattern matching to quickly locate known reference patterns or
fiducials in a color image. With color pattern matching, you create a model
or template that represents the object you are searching for. Your machine
vision application then searches for the model in each acquired image,
calculating a score for each match. The score indicates how closely the
model matches the color pattern found. Use color pattern matching to locate
reference patterns that are fully described by the color and spatial
information in the pattern.
Using Color Pattern Matching
Grayscale, or monochrome, pattern matching is a well-established tool for
alignment, gauging, and inspection applications. In all of these application
areas, color simplifies a monochrome problem by improving contrast or
separation of the object from the background. Color pattern matching
algorithms provide a quick way to locate objects when color is present.
Use color pattern matching in the following cases:
• The object you want to locate contains color information that is very
different from the background, and you want to find the location of the
object in the image very precisely. For these applications, color pattern
matching provides a more accurate solution than color location.
Because color location does not use shape information during
the search phase, finding the locations of the matches with pixel
accuracy is very difficult.
• The object you want to locate has grayscale properties that are very
difficult to characterize or that are very similar to other objects in the
search image. In such cases, grayscale pattern matching may not give
accurate results. If the object has some color information that
differentiates it from the other objects in the scene, color provides the
machine vision software with the additional information to locate the
object.
The color pattern matching tools in NI Vision measure the similarity
between an idealized representation of a feature, called a template, and the
feature that may be present in an image. A feature is defined as a specific
pattern of color pixels in an image.
Color pattern matching is the key to many applications. Color pattern
matching provides your application with information about the number of
instances and location of the template within an image. Use color pattern
matching in the following three general applications: gauging, inspection,
and alignment.
Gauging
Many gauging applications locate and then measure or gauge the distance
between objects. Searching and finding a feature is the key processing task
that determines the success of many gauging applications. If the
components you want to gauge are uniquely identified by their color, then
color pattern matching provides a fast way to locate the components.
Inspection
Inspection detects simple flaws, such as missing parts or unreadable
printing. A common application is inspecting the labels on consumer
product bottles for printing defects. Because most of the labels are in color,
color pattern matching is used to locate the labels in the image before a
detailed inspection of the label is performed.
Alignment
Alignment determines the position and orientation of a known object by
locating fiducials. Use the fiducials as points of reference on the object.
Grayscale pattern matching is sufficient for most applications, but some
alignment applications require color pattern matching for quick location of
specific color fiducials.
What to Expect from a Color Pattern Matching Tool
In automated machine vision applications, the visual appearance of
materials or components under inspection can change due to factors such
as orientation of the part, scale changes, and lighting changes. The color
pattern matching tool maintains its ability to locate the reference patterns
and gives accurate results despite these changes.
Pattern Orientation and Multiple Instances
A color pattern matching tool locates the reference pattern in an image even
when the pattern in the image is shifted, rotated, or slightly scaled. The color
pattern matching tool detects the following:
• The pattern in the image
• The position of the pattern in the image
• The orientation of the pattern
• Multiple instances of the pattern in the image (if applicable)
Ambient Lighting Conditions
The color pattern matching tool finds the reference pattern in an image
under conditions of uniform changes in the lighting across the image.
Color analysis is more robust when dealing with variations in lighting than
grayscale processing. Color pattern matching performs better under
conditions of non-uniform light changes, such as in the presence of
shadows, than grayscale pattern matching.
Blur and Noise Conditions
Color pattern matching finds patterns that have undergone some
transformation because of blurring or noise. Blurring usually occurs
because of incorrect focus or depth of field changes.
Exercise A-1 Concept: Color Pattern Matching
Goal
Use color pattern matching to search for a defined pattern in an image.
Scenario
Open a color image and learn how to configure a color pattern match in
Vision Assistant. Learn how to use the color pattern match functionality with
LabVIEW.
Implementation
1. Launch Vision Assistant.
Select Open Image. Navigate to the C:\Program Files\
National Instruments\Vision\Examples\
Images\PCBColor directory.
Enable the Select all Files checkbox, and then click Open. The
images now are loaded into the Vision Assistant browser.
Double-click the first image to make the image active.
2. Use Vision Assistant to perform color pattern matching.
Select Processing Functions: Color»Color Pattern Matching.
Click Create Template.
Draw an ROI around one of the yellow electrical components.
Click OK.
Save the template pattern to the desktop as colortemplate.png.
Notice how the template is found in the image.
Click the Settings tab. Set the parameters so that they are the same
as the parameters in the following illustration.
Notice that Vision Assistant found more instances of the template.
Click OK. The Color Pattern Matching step is added to your script.
3. Examine the efficiency of the algorithm.
Select Tools»Performance Meter. This meter illustrates how fast
the script operates under the present conditions.
Click OK.
4. Test the color pattern matching script on all of the images in your
browser.
Select Tools»Batch Processing. You can see your script with the
first and only step highlighted. There are several options from this
view. You can view the panel results and save the results to disk.
Enable the Open Results Panel checkbox.
Figure A-1. Batch Processing Window
Click Run.
Vision Assistant applies the script to all twelve images in the
browser one at a time. If the pattern is found correctly, click OK to
move on to the next image. If you have an image in which the pattern
is not found, try lowering the value of Minimum Score. As you
lower the score, Vision Assistant continues to find the pattern unless
it is not fully within the image.
When the batch processing is finished, click OK, and then click
Return.
5. Generate a LabVIEW VI and examine the VI.
Select Tools»Create LabVIEW VI. Follow the onscreen
instructions to generate the VI.
Save the VI as Color Pattern Matching.vi in the
C:\Exercises\LabVIEW Machine Vision directory.
Add a True constant to the All Images? (No) terminal of the IMAQ
Dispose VI.
On the front panel, right-click the Image Display control and select
Snapshot.
Examine the inputs to the IMAQ Setup Match Color Pattern VI and
the IMAQ Match Color Pattern VI.
6. Run the VI.
At the file prompt, select an image from the C:\Program Files\
National Instruments\Vision\Examples\Images\
PCBColor\ directory.
Modify the Color Pattern Match settings and see how the results
change.
Save and close the VI when you are finished.
Challenge
You may notice that the located component is not highlighted when the VI
runs as it was in Vision Assistant. You must add the functionality yourself
to the block diagram by adding Overlay Matches Position
Color.vi, which is located in the following directory: C:\National
Instruments\Program Files\LabVIEW 8.5\examples\Vision\
2. Functions\Color Pattern Matching\Color Pattern
Matching Example.llb.
End of Exercise A-1
B
Additional Information and Resources
This appendix contains additional information about National Instruments
technical support options and NI Vision resources.
National Instruments Technical Support Options
Visit the following sections of the National Instruments Web site at ni.com
for technical support and professional services.
• Support: Online technical support resources at ni.com/support
include the following:
– Self-Help Resources: For answers and solutions, visit the
award-winning National Instruments Web site for software drivers
and updates, a searchable KnowledgeBase, product manuals,
step-by-step troubleshooting wizards, thousands of example
programs, tutorials, application notes, instrument drivers, and so on.
– Free Technical Support: All registered users receive free Basic
Service, which includes access to hundreds of Application
Engineers worldwide in the NI Developer Exchange at
ni.com/exchange. National Instruments Application Engineers
make sure every question receives an answer.
For information about other technical support options in your area,
visit ni.com/services or contact your local office at
ni.com/contact.
• System Integration: If you have time constraints, limited in-house
technical resources, or other project challenges, National Instruments
Alliance Partner members can help. The NI Alliance Partner program joins
system integrators, consultants, and hardware vendors to provide
comprehensive service and expertise to customers. The program ensures
qualified, specialized assistance for application and system
development. To learn more, call your local NI office or visit
ni.com/alliance.
If you searched ni.com and could not find the answers you need, contact
your local office or NI corporate headquarters. Phone numbers for our
worldwide offices are listed at the front of this manual. You also can visit the
Worldwide Offices section of ni.com/niglobal to access the branch
office Web sites, which provide up-to-date contact information, support
phone numbers, email addresses, and current events.
Other National Instruments Training Courses
National Instruments offers several training courses for LabVIEW users.
These courses continue the training you received here and expand it to other
areas. Visit ni.com/training to purchase course materials or sign up for
instructor-led, hands-on courses at locations around the world.
National Instruments Certification
Earning an NI certification acknowledges your expertise in working with
NI products and technologies. The measurement and automation industry,
your employer, clients, and peers recognize your NI certification credential
as a symbol of the skills and knowledge you have gained through
experience. Visit ni.com/training for more information about the
NI certification program.
NI Vision Resources
The following documents contain information that you may find helpful:
NI Vision Concepts Manual: Describes the basic concepts of image
analysis, image processing, and machine vision. This document also
contains in-depth discussions about imaging functions for advanced
users.
NI Vision for LabVIEW VI Reference Help: Contains reference
information about NI Vision for LabVIEW palettes and VIs.
NI Vision Assistant Tutorial: Describes the NI Vision Assistant
software interface and guides you through creating example image
processing and machine vision applications.
NI Vision Assistant Help: Contains descriptions of the NI Vision
Assistant features and functions and provides instructions for using
them.
These documents can be downloaded at ni.com/manuals.
The following Web sites contain information that you may find helpful:
ni.com/vision
ni.com/camera
ni.com/labview
ni.com/dataacquisition
www.machinevisiononline.org
www.edmundoptics.com
www.graftek.com
Glossary
B
binary image An image in which the pixels have only one of two intensity values.
Objects in the image usually have a pixel intensity of 1 (or 255), and
the background has a pixel intensity of 0.
C
CCD Charge Coupled Device. A chip that converts light into electronic
signals.
closing A dilation followed by an erosion. Closing fills small holes in objects
and smooths the boundaries of objects.
color Wavelength of the light that is received by the human eye when looking
at an object.
connectivity Defines which of the surrounding pixels of a given pixel constitute its
neighborhood.
connectivity-4 Only pixels adjacent in the horizontal and vertical directions are
considered neighbors.
connectivity-8 All adjacent pixels are considered neighbors.
D
data line Parallel wires that carry digital signals corresponding to pixel values.
depth of field The maximum object depth that remains in focus.
dilation Increases the size of an object along its boundary and removes tiny
holes in the object.
E
edge A sharp change or transition in the pixel intensities in an image.
enable lines Lines that indicate when data lines are carrying valid values.
erosion Reduces the size of an object along its boundary and eliminates
isolated points in the image.
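The closing, dilation, and erosion entries above compose naturally: closing is a dilation followed by an erosion. A minimal plain-Python sketch of binary morphology with a 3x3 structuring element (connectivity-8) illustrates the idea; the helper names here are hypothetical, not NI Vision VIs:

```python
def _window(image, y, x):
    """Yield the 3x3 neighborhood of (y, x); out-of-bounds counts as background (0)."""
    h, w = len(image), len(image[0])
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            yy, xx = y + dy, x + dx
            yield image[yy][xx] if 0 <= yy < h and 0 <= xx < w else 0

def dilate(image):
    # A pixel becomes 1 if any pixel in its 3x3 neighborhood is 1.
    return [[1 if any(_window(image, y, x)) else 0
             for x in range(len(image[0]))] for y in range(len(image))]

def erode(image):
    # A pixel stays 1 only if every pixel in its 3x3 neighborhood is 1.
    return [[1 if all(_window(image, y, x)) else 0
             for x in range(len(image[0]))] for y in range(len(image))]

def closing(image):
    return erode(dilate(image))  # dilation followed by erosion

# A square object with a one-pixel hole; closing fills the hole.
holey = [[0, 0, 0, 0, 0],
         [0, 1, 1, 1, 0],
         [0, 1, 0, 1, 0],
         [0, 1, 1, 1, 0],
         [0, 0, 0, 0, 0]]
filled = closing(holey)
```

Swapping the order (a dilation after an erosion) gives the opening operation defined below.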
F
fiducial A reference pattern on a part that helps a machine vision application
find the location and orientation of the part in an image.
field of view The area of inspection that the camera can acquire.
G
grab Acquisition technique that loops continually on one buffer.
H
histogram Indicates the quantitative distribution of pixels in an image based on
their pixel intensities.
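For an 8-bit image, a histogram is simply a tally of pixels at each of the 256 possible intensities. A plain-Python sketch (illustrative only, not the NI Vision histogram VI):

```python
def histogram(image):
    """Count how many pixels fall at each 8-bit intensity (256 bins)."""
    counts = [0] * 256
    for row in image:
        for pixel in row:
            counts[pixel] += 1  # tally this pixel's intensity
    return counts

image = [[0, 0, 255],
         [128, 255, 255]]
counts = histogram(image)
# e.g. counts[255] is the number of fully white pixels
```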
histograph In LabVIEW, a histogram that can be wired directly into a graph.
HSL Color encoding scheme using hue, saturation, and luminance
information.
hue Represents the dominant color of a pixel.
I
image mask A binary image that isolates parts of a source image for further
processing.
interlaced video Acquisition method that splits a video into two images called fields.
One field contains the odd-numbered lines, while the other contains
the even-numbered lines. This acquisition method reduces visible
flicker during screen updates.
L
lens distortion Geometric aberration caused by optical errors in the camera lens.
linear filter An algorithm that calculates the value of a pixel from its own
value and the values of its neighboring pixels, each weighted by a
kernel element. The weighted sum is divided by the sum of the kernel
elements to obtain the new pixel value.
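A minimal sketch of this linear-filter idea in plain Python (an illustrative, hypothetical helper, not an NI Vision VI): each output pixel is the kernel-weighted sum of its neighborhood divided by the kernel sum.

```python
def linear_filter(image, kernel):
    """Apply a square kernel and divide by the kernel sum, as described above."""
    h, w = len(image), len(image[0])
    k = len(kernel) // 2
    norm = sum(sum(row) for row in kernel) or 1  # guard against a zero-sum kernel
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0
            for dy in range(-k, k + 1):
                for dx in range(-k, k + 1):
                    yy = min(max(y + dy, 0), h - 1)  # replicate border pixels
                    xx = min(max(x + dx, 0), w - 1)
                    acc += kernel[dy + k][dx + k] * image[yy][xx]
            out[y][x] = acc // norm
    return out

smooth = [[1, 1, 1],
          [1, 1, 1],
          [1, 1, 1]]  # 3x3 averaging kernel
flat = [[10] * 4 for _ in range(4)]
result = linear_filter(flat, smooth)
# A uniform image passes through a normalized smoothing kernel unchanged.
```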
luminance The brightness information in a video picture.
M
morphological operations Operations that extract and alter the structure of objects in an image.
N
neighbor A pixel whose value affects the value of a nearby pixel when an image
is processed. The neighbors of a pixel are usually defined by a kernel
or a structuring element.
neighborhood operations Operations on a point in an image that take into consideration the
values of the pixels neighboring that point.
O
opening An erosion followed by a dilation. Opening removes small objects and
smooths the boundaries of objects in the image.
P
particle A connected region or grouping of pixels in an image in which all
pixels have the same intensity level.
pattern matching A machine vision technique that locates regions of a grayscale image
that match a predefined template.
perspective error Changes in the magnification of an object caused by errors in the
imaging setup and determined by the distance between the object and
the lens.
pixel clock Divides the incoming horizontal video line into pixels.
R
rake An algorithm that finds edges along parallel, concentric, or radial
search lines inside a region of interest and computes the best-fit
line or circle from the detected edge coordinates.
region of interest An area of the image selected for further processing.
resolution The smallest feature size on your object that the imaging system can
distinguish.
RGB Color encoding scheme using red, green, and blue color information.
ring Acquisition technique that continually acquires images using a circular
buffer of images.
ROI See region of interest.
S
saturation The amount of white added to a pure color. Saturation relates to the
richness of a color.
sensor resolution The number of columns and rows of CCD pixels in the camera sensor.
sensor size The size of a sensor's active area.
sequence Acquisition technique that acquires a sequence of images.
snap Acquisition technique that acquires a single frame or field to a buffer.
spatial calibration Assigning physical dimensions to the area of a pixel in an image.
subpixel analysis Finds the location of the edge coordinates in terms of fractions of a
pixel.
T
tap A group of data lines that carry one pixel each. Also known as a
channel.
template image A color, shape, or pattern selected for matching in an image. A
template can be a region selected from a part of an image or an entire
image.
threshold Separates objects from the background by assigning all pixels whose
intensities fall within a specified range to the object and assigning the
rest of the pixels to the background. In the resulting binary image,
object pixels have a pixel intensity of 1 (or 255) and background pixels
have a pixel intensity of 0.
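The thresholding operation described above can be sketched in a few lines of plain Python (illustrative only, not the NI Vision threshold VI; the function name is hypothetical):

```python
def threshold(image, lo, hi, object_value=255):
    """Pixels whose intensity falls in [lo, hi] become the object; the rest become background."""
    return [[object_value if lo <= p <= hi else 0 for p in row]
            for row in image]

gray = [[12, 200, 90],
        [40, 180, 255]]
binary = threshold(gray, 100, 255)
# Pixels in the range 100-255 map to 255; all others map to 0.
```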
W
working distance The distance from the front of the camera lens to the object under
inspection.
Course Evaluation
Course _______________________________________________________________________________________
Location _____________________________________________________________________________________
Instructor _________________________________________ Date ____________________________________
Student Information (optional)
Name ________________________________________________________________________________________
Company _________________________________________ Phone ___________________________________
Instructor
Please evaluate the instructor by checking the appropriate circle. Unsatisfactory Poor Satisfactory Good Excellent
Instructor's ability to communicate course concepts
Instructor's knowledge of the subject matter
Instructor's presentation skills
Instructor's sensitivity to class needs
Instructor's preparation for the class
Course
Training facility quality
Training equipment quality
Was the hardware set up correctly? Yes No
The course length was Too long Just right Too short
The detail of topics covered in the course was Too much Just right Not enough
The course material was clear and easy to follow. Yes No Sometimes
Did the course cover material as advertised? Yes No
I had the skills or knowledge I needed to attend this course. Yes No If no, how could you have been
better prepared for the course? ____________________________________________________________________
_____________________________________________________________________________________________
What were the strong points of the course? __________________________________________________________
_____________________________________________________________________________________________
What topics would you add to the course? ___________________________________________________________
_____________________________________________________________________________________________
What part(s) of the course need to be condensed or removed? ____________________________________________
_____________________________________________________________________________________________
What needs to be added to the course to make it better? ________________________________________________
_____________________________________________________________________________________________
How did you benefit from taking this course? ________________________________________________________
_____________________________________________________________________________________________
Are there others at your company who have training needs? Please list. ____________________________________
_____________________________________________________________________________________________
_____________________________________________________________________________________________
Do you have other training needs that we could assist you with? _________________________________________
_____________________________________________________________________________________________
How did you hear about this course? NI Web site NI Sales Representative Mailing Co-worker
Other _____________________________________________________________________________________