1 Basics of Machine Vision
1.1 Digital Images

1.1.1 Grayscale Image
Practical Guide to Machine Vision Software: An Introduction with LabVIEW, First Edition.
Kye-Si Kwon and Steven Ready.
© 2015 Wiley-VCH Verlag GmbH & Co. KGaA. Published 2015 by Wiley-VCH Verlag GmbH & Co. KGaA.
Due to their simple representation as single pixel values, grayscale images are often used in machine vision applications as a starting point for measuring the length or size of an object and for finding a similar image pattern via pattern matching. Grayscale images can be acquired from digital monochrome or color cameras. When a color image is acquired, it can easily be converted to a grayscale image by using the color plane extraction function provided by the NI Vision Development Module.
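As a rough illustration of what such a conversion does, here is a Python sketch (the book itself uses LabVIEW VIs, so this is not the NI Vision function) that converts one RGB pixel to a grayscale value using the common ITU-R BT.601 luminance weights. Note that plane extraction can also simply take the R, G, or B plane directly; the weighted sum is just one typical choice.

```python
def rgb_to_gray(r, g, b):
    """Return an 8-bit grayscale value for one 8-bit RGB pixel,
    using the BT.601 luminance weighting (one common convention)."""
    gray = 0.299 * r + 0.587 * g + 0.114 * b
    return int(round(gray))

# A pure bright-red pixel maps to a fairly dark gray value.
print(rgb_to_gray(255, 0, 0))   # 76
```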
1.1.2 Binary Image
The most commonly used image format for determining the existence, location, and size of an object is the binary image. A binary image pixel has one of two values: in most cases the object has the value 1 and the background has the value 0. Since only two values are used, it is often called a 1 bit image (bit depth of 1, or 2^1 levels). A binary image is commonly created from a grayscale image by applying a threshold value. When the object of interest is bright against a dark background, any pixel whose value is larger than the chosen threshold is classified as the object (pixel value 1), and any pixel whose value is less than the threshold is classified as the background (pixel value 0). Note, however, that in some images the dark parts represent the object and the bright parts comprise the background, in which case the classification is inverted.
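The thresholding step can be sketched in a few lines of Python (the book's examples use LabVIEW; this is only a conceptual illustration, with the image represented as a list of rows):

```python
def threshold(gray_image, t):
    """Convert a grayscale image (list of rows of pixel values) to a
    binary image: pixels brighter than the threshold t become the
    object (1), all other pixels become background (0)."""
    return [[1 if px > t else 0 for px in row] for row in gray_image]

gray = [[10,  20, 200],
        [15, 220, 210],
        [12,  18,  25]]
print(threshold(gray, 128))
# [[0, 0, 1], [0, 1, 1], [0, 0, 0]]
```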
Once the grayscale image is converted to a binary image, various image processing functions can be used. For example, we can use the particle analysis function, from which the size, area, and center of the object can be easily obtained. Prior to particle analysis, morphology functions are often used to modify aspects of the image for better or more reliable results. For example, we may want to remove unnecessary parts from the binary image or repair broken parts of the object.
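To make the idea of particle analysis concrete, here is a minimal Python sketch (not the NI Vision function) that computes two of the measurements mentioned above, the area and the center of mass of the object pixels in a binary image:

```python
def particle_stats(binary_image):
    """Return the area (count of object pixels with value 1) and the
    center of mass (x, y) of the object in a binary image,
    given as a list of rows."""
    xs, ys = [], []
    for y, row in enumerate(binary_image):
        for x, px in enumerate(row):
            if px == 1:
                xs.append(x)
                ys.append(y)
    area = len(xs)
    cx = sum(xs) / area
    cy = sum(ys) / area
    return area, (cx, cy)

binary = [[0, 1, 1],
          [0, 1, 1],
          [0, 0, 0]]
print(particle_stats(binary))  # (4, (1.5, 0.5))
```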
1.1.3 Color Image
Digital color images from digital cameras are usually described by three color
values: R (red), G (green), and B (blue). The three color values that represent an
image pixel describe the color and brightness of the pixel. In other words, the
brightness and color of the pixels in an image obtained from a digital color cam-
era are generally defined by the combination of the R, G, and B values. All possi-
ble colors can be represented by these three primary colors. The digital color
image is often referred to as a 24 or 32 bit image. Figure 1.2 shows the basic concept of a 32 bit color image. Of the four 8 bit bytes in a 32 bit word, three are used for the R, G, and B components; the fourth byte is not used. This layout reflects the computer's natural representation of an integer as a 32 bit number.
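This 32 bit packing can be sketched with simple bit operations in Python (an illustration only; the byte order shown here, with R in the high byte and the top byte unused, is one common convention, and other formats place an alpha channel in the unused byte or order the bytes differently):

```python
def pack_rgb(r, g, b):
    """Pack 8-bit R, G, B values into one 32-bit integer;
    the top 8 bits are left unused."""
    return (r << 16) | (g << 8) | b

def unpack_rgb(pixel):
    """Recover the 8-bit R, G, B components from a packed 32-bit pixel."""
    return (pixel >> 16) & 0xFF, (pixel >> 8) & 0xFF, pixel & 0xFF

p = pack_rgb(196, 176, 187)    # the pixel value f(600, 203) from Figure 1.3
print(hex(p))                  # 0xc4b0bb
print(unpack_rgb(p))           # (196, 176, 187)
```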
Figure 1.3 shows an example of a color image. The total size of the image is
800 × 600. The X direction has 800 (0–799) pixels and Y direction has 600
(0–599) pixels. Each pixel has three component values representing R, G, and B.
For example, the image value at X = 600, Y = 203, f(600, 203), is R = 196, G = 176,
B = 187.
To illustrate, a USB camera was used to acquire images via a LabVIEW VI, as shown in Figure 1.4. As seen in the lower part of Figure 1.4, the total size of the image (the number of pixels) is 640 × 480. The pixel location is defined by its X and Y coordinates, where the upper left is (0, 0) and the lower right is (639, 479). Each of the RGB values in a pixel is an 8 bit value, which corresponds to an integer range of 0–255. When we move the mouse cursor over the acquired image, the RGB values of the pixel under the cursor are shown at the bottom of the window. In the example in Figure 1.4, the RGB values at the mouse X/Y image position (257, 72) are (255, 253, 35).
The color and brightness of each pixel is the combination of its RGB values. For example, R (red) has a range of values between 0 and 255. If the value is close to 0, the red component is so dark that it appears black. On the other hand, if the R value is 255, the red component is at its brightest and appears bright red. The green and blue pixel components have the same property. If R = 255, G = 0, and B = 0, the pixel appears bright red. If all three RGB values are 255, the pixel appears white (a bright pixel), whereas if all three RGB values are 0, the pixel is dark (black).
One alternative method for color image representation, HSL (hue, satura-
tion, and luminance), can be used instead of RGB (Table 1.1). The three HSL
values are also generally represented by 8 bit values for each component.
By using proper values of HSL, any color and brightness can be displayed in a
pixel.
Table 1.1 The HSL color components.

Hue defines the color of a pixel, such as red, yellow, green, and blue, or a combination of two of them. It is related to the wavelength of light.

Saturation refers to the amount of white added to the hue and represents the relative purity of a color. As saturation increases, the color becomes purer; as colors are mixed, saturation decreases. For example, red has a higher saturation than pink.

Luminance is closely related to the brightness of the image. Extracting the luminance values of an HSL color image results in a good conversion of a color image to a grayscale representation.
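The relationship between RGB and HSL can be illustrated with Python's standard colorsys module (a sketch only; colorsys works with floats in the range 0.0–1.0 and returns the components in hue, lightness, saturation order, so the 8 bit scaling below is an added assumption, not the NI Vision representation):

```python
import colorsys

def rgb8_to_hsl8(r, g, b):
    """Convert 8-bit RGB values to approximate 8-bit
    hue, saturation, luminance values."""
    h, l, s = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
    return round(h * 255), round(s * 255), round(l * 255)

# Pure bright red: hue 0, full saturation, mid-range luminance.
print(rgb8_to_hsl8(255, 0, 0))   # (0, 255, 128)
```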
1.2 Components of Imaging System
Figure 1.5 shows the basic components of an imaging system. Image acquisition hardware requires a camera, a lens, and a lighting source. To get an image from the camera to the computer, we need to select the most appropriate camera communication interface (bus), which connects the camera to the computer. Some cameras require specific types of standardized communication buses integrated into computer interface cards called frame grabbers. Examples of standardized frame grabber communication buses are Analog, Camera Link, and Gigabit Ethernet (GigE). Other cameras connect to the computer over more common communication interfaces such as USB, Ethernet, or FireWire that are provided as standard configurations in most computers.
Software is also needed to display and extract information from images.
In this book, image processing techniques will be described for the purpose of