
Image Processing

Lecture 1
Introduction and Application

Dharmendra Kumar
CSE Dept. AKGEC
SYLLABUS
REFERENCES:
1. Digital Image Processing, 2nd Edition, Rafael C. Gonzalez and Richard E.
Woods. Published by: Pearson Education.
2. Digital Image Processing and Computer Vision, R.J. Schalkoff. Published by:
John Wiley and Sons, NY.
3. Fundamentals of Digital Image Processing, A.K. Jain. Published by Prentice
Hall, Upper Saddle River, NJ.
4. Sonka, Digital Image Processing and Computer Vision, Cengage Learning
5. Gonzalez and Woods, Digital Image Processing, Addison Wesley.
6. B.Chanda and D. Dutta Majumder, Digital Image Processing and Analysis, PHI
DIGITAL IMAGE PROCESSING
Image processing is a method of performing operations on an image in
order to obtain an enhanced image or to extract useful information
from it.

Image processing basically includes the following three steps:

 Importing the image via image acquisition tools;
 Analysing and manipulating the image;
 Output, in which the result can be an altered image or a report
based on the image analysis.
TYPES OF IMAGES
Based on attributes

 Raster images are pixel-based. Quality depends
on the number of pixels, so operations such as
enlarging (blowing up) the image reduce its quality.
 Vector graphics use basic geometric
attributes, such as lines and circles, to
describe an image.
Types of Images Based on Colour

Grey scale images differ from binary images
in that they have many shades of grey between
black and white. They are also called
monochromatic since, as in binary images,
there is no colour component. Grey scale
refers to the range of shades between white
and black.
 In binary images, each pixel assumes a value of 0
or 1, so one bit is sufficient to represent the pixel
value. Binary images are also called bi-level
images.

 In true colour images, each pixel has a colour
obtained by mixing the primary colours red,
green, and blue. Each colour component is
represented, like a grey scale image, using eight
bits, so true colour images typically use 24 bits
per pixel.
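As a minimal sketch of the one-bit idea (assuming NumPy and a hypothetical 2×2 grey scale array), thresholding a grey scale image produces a binary, bi-level image:

```python
import numpy as np

# Hypothetical 8-bit grey scale image (values 0-255)
gray = np.array([[ 10, 200],
                 [130,  60]], dtype=np.uint8)

# Thresholding yields a binary image: one bit per pixel suffices
binary = (gray > 127).astype(np.uint8)
print(binary)  # [[0 1]
               #  [1 0]]
```

The threshold value 127 is an illustrative choice; any cut-off between the two populations of grey levels would produce a bi-level result.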
RGB image
 An RGB (red, green, blue) image is a three-dimensional
byte array that explicitly stores a color value for each
pixel.
 RGB image arrays are made up of width, height, and
three channels of color information.
 Scanned photographs are commonly stored as RGB
images.
 The color information is stored in three sections of a
third dimension of the image.
 These sections are known as color channels, color
bands, or color layers. One channel represents the
amount of red in the image (the red channel), one
channel represents the amount of green in the image
(the green channel), and one channel represents the
amount of blue in the image (the blue channel).
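A hedged sketch of this layout (assuming NumPy; the pixel values are made up for illustration): an RGB image is a height × width × 3 array, and each slice along the third dimension is one colour channel:

```python
import numpy as np

# A hypothetical 2x2 true-colour image: height x width x 3 channels, 8 bits each
rgb = np.zeros((2, 2, 3), dtype=np.uint8)
rgb[0, 0] = (255, 0, 0)    # pure red pixel
rgb[1, 1] = (0, 0, 255)    # pure blue pixel

red_channel = rgb[:, :, 0]   # amount of red at every pixel
blue_channel = rgb[:, :, 2]  # amount of blue at every pixel
print(red_channel[0, 0], blue_channel[1, 1])  # 255 255
```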
Indexed Image

 A special category of colour images is the
indexed image. Most images do not use the
full range of colours, so the number of bits
can be reduced by maintaining a colour map
(also called a gamut or palette) with the image.
Pseudocolour Image

 Like true colour images, pseudocolour
images are also widely used in image
processing. True colour images are called
three-band images; in remote sensing
applications, however, multi-band (multi-
spectral) images are generally used. These
images, captured by satellites, contain many
bands.
Types of Images based on
Dimensions

 Based on dimensions: 2D and 3D
 Based on data type: single, double, signed, or unsigned
To summarize
 Four types of images: binary, grayscale,
indexed, and RGB.
 Binary images have only two values, zero and
one.
 Grayscale images represent intensities and
use a normal grayscale color table.
 Indexed images use an associated color table.
 RGB images contain their own color
information in layers known as bands or
channels.
 An image consists of a two-dimensional array of
pixels.
 The value of each pixel represents the intensity
and/or color of that position in the scene.
 Images of this form are known as sampled or
raster images, because they consist of a discrete
grid of samples.
 Such images come from many different sources
and are a common form of representing
scientific and medical data.
 Once the image is loaded into memory, it
typically takes one of two forms: indexed or RGB.
An indexed image is a two-dimensional array,
usually stored as byte data.
Indexed image

 An indexed image does not explicitly contain any color


information.
 Its pixel values represent indices into a color Look-Up
Table (LUT).
 Colors are applied by using these indices to look up the
corresponding RGB triplet in the LUT.
 In some cases, the pixel values of an indexed image
reflect the relative intensity of each pixel.
 In other cases, each pixel value is simply an index, in
which case the image is usually intended to be
associated with a specific LUT.
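The LUT idea can be sketched as follows (assuming NumPy and a hypothetical 4-entry palette): each pixel stores an index, and array indexing applies the palette to recover the RGB image:

```python
import numpy as np

# Hypothetical 4-entry palette (LUT): each row is an RGB triplet
lut = np.array([[  0,   0,   0],    # index 0 -> black
                [255,   0,   0],    # index 1 -> red
                [  0, 255,   0],    # index 2 -> green
                [255, 255, 255]],   # index 3 -> white
               dtype=np.uint8)

# The indexed image stores palette indices, not colours
indexed = np.array([[0, 1],
                    [2, 3]], dtype=np.uint8)

# Looking up every index in the LUT yields the RGB image
rgb = lut[indexed]            # shape (2, 2, 3)
print(rgb[0, 1])  # [255 0 0]
```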
Image Processing Fields

 Computer Graphics: the creation of images
 Image Processing: enhancement or other manipulation of the image
 Computer Vision: analysis of the image content
Computerized Processes Types

 Low-Level Processes:
 Input and output are images

 Tasks: primitive operations, such as image
processing to reduce noise, contrast enhancement,
and image sharpening
 Mid-Level Processes:
 Inputs, generally, are images. Outputs are attributes
extracted from those images (edges, contours,
identity of individual objects)
 Tasks:
 Segmentation (partitioning an image into regions or
objects)
 Description of those objects to reduce them to a form
suitable for computer processing
 Classifications (recognition) of objects
 High-Level Processes:
 Image analysis of attributes and computer vision

 Object recognition
Digital Image Definition

 An image can be defined as a two-dimensional
function f(x,y), where
 x, y are spatial coordinates, and
 f is the amplitude at any pair of coordinates (x,y),
called the intensity or grey level of
the image at that point.
 In a digital image, x, y, and f are all finite and discrete quantities.
[Diagram: fundamental steps in digital image processing, including colour image processing and image compression]
Fundamental Steps in DIP:
(Description)
Step 1: Image Acquisition
The image is captured by a sensor (e.g. a
camera) and, if the output of the camera or
sensor is not already in digital form, digitized
using an analogue-to-digital converter.
Step 2: Image Enhancement
The process of manipulating an image so that
the result is more suitable than the original
for specific applications.

The idea behind enhancement techniques is
to bring out details that are hidden, or simply
to highlight certain features of interest in an
image.
Step 3: Image Restoration
- Improves the appearance of an image.
- Restoration techniques tend to be based on
mathematical or probabilistic models of image
degradation. Enhancement, on the other hand, is
based on human subjective preferences
regarding what constitutes a "good"
enhancement result.
Step 4: Morphological Processing
Tools for extracting image components that
are useful in the representation and
description of shape.

In this step there is a transition from
processes that output images to processes
that output image attributes.
Step 5: Image Segmentation
Segmentation procedures partition an image into
its constituent parts or objects.

Important tip: the more accurate the
segmentation, the more likely recognition is to
succeed.
Step 6: Representation and Description
- Representation: deciding whether the data
should be represented as a boundary or as a complete
region. Representation almost always follows the
output of a segmentation stage.
- Boundary representation: focuses on external shape
characteristics, such as corners and inflections.
- Region representation: focuses on internal properties,
such as texture or skeletal shape.
- Description: also called feature selection;
deals with extracting attributes that result in
some information of interest.
Step 7: Recognition and Interpretation
Recognition: the process that assigns a label to
an object based on the information provided
by its description.
Step 8: Knowledge Base
Knowledge about a problem domain is coded
into an image processing system in the form
of a knowledge database.
Applications : Office Automation

 Optical character recognition
 Document processing
 Cursive script recognition
 Logo and icon recognition
Industrial Automation

 Automatic Inspection system


 Robotics
 Process control applications
 Oil and natural gas exploration
Biomedical

 X-ray image analysis


 MRI scan
 Routine screening of plant samples
 Mass screening of medical images such as
chromosome slides for detection of various
diseases
Remote sensing

 Natural resources survey and management


 Environment and pollution control
 Monitoring traffic along roads, docks and
airfields
Criminology

 Fingerprint identification


 Human face registration and matching
 Forensic investigation
Astronomy and space
applications
 Restoration of images suffering from
geometric and photometric distortions
 Computing close-up pictures of planetary
surfaces
Meteorology

 Short term weather forecasting


 Long term climatic change detection from
satellite and other remote sensing data
 Cloud pattern analysis
Information technology

 Video conferencing
 Facsimile image transmission
Entertainment and consumer
electronics
 HDTV
 Multimedia and video editing
Military applications

 Missile guidance and detection


 Target identification
 Navigation of pilotless vehicles
Security measures

 Recognition of human objects and


verification of claimed identity based on
images of face, iris, palm, ear and fingerprint
Components of an Image
Processing System
[Diagram: a typical general-purpose DIP system — problem domain, image sensors, specialized image processing hardware, computer, image processing software, mass storage, image displays, hardcopy devices, network]
1. Image Sensors
Two elements are required to acquire digital
images. The first is the physical device that
is sensitive to the energy radiated by the
object we wish to image (Sensor). The
second, called a digitizer, is a device for
converting the output of the physical
sensing device into digital form.
2. Specialized Image Processing Hardware
Usually consists of the digitizer, mentioned before, plus
hardware that performs other primitive operations, such
as an arithmetic logic unit (ALU), which performs
arithmetic and logical operations in parallel on entire
images.

This type of hardware sometimes is called a front-end


subsystem, and its most distinguishing characteristic is
speed. In other words, this unit performs functions that
require fast data throughputs that the typical main
computer cannot handle.
3. Computer
The computer in an image processing system is a general-
purpose computer and can range from a PC to a
supercomputer. In dedicated applications, sometimes
specially designed computers are used to achieve a
required level of performance.
4. Image Processing Software
Software for image processing consists of specialized
modules that perform specific tasks. A well-designed
package also includes the capability for the user to write
code that, as a minimum, utilizes the specialized
modules.
5. Mass Storage Capability
Mass storage capability is a must in image processing
applications. An image of size 1024 × 1024 pixels, with
8 bits per pixel, requires one megabyte of storage space
if the image is not compressed.

Digital storage for image processing applications falls into
three principal categories:
1. Short-term storage for use during processing.
2. On-line storage for relatively fast recall.
3. Archival storage, characterized by infrequent access.
One method of providing short-term storage is computer memory.
Another is by specialized boards, called frame buffers, that store
one or more images and can be accessed rapidly.

The on-line storage method allows virtually instantaneous image
zoom, as well as scroll (vertical shifts) and pan (horizontal shifts).
On-line storage generally takes the form of magnetic disks and
optical-media storage. The key factor characterizing on-line
storage is frequent access to the stored data.

Finally, archival storage is characterized by massive storage
requirements but infrequent need for access.
6. Image Displays
The displays in use today are mainly color
(preferably flat screen) TV monitors.
Monitors are driven by the outputs of the
image and graphics display cards that are an
integral part of a computer system.
7. Hardcopy Devices
Used for recording images; they include laser
printers, film cameras, heat-sensitive
devices, inkjet units, and digital units such
as optical disks and CD-ROMs.
8. Networking
Networking is almost a default function in any computer
system in use today. Because of the large
amount of data inherent in image processing
applications, the key consideration in image
transmission is bandwidth.

In dedicated networks this typically is not a
problem, but communications with remote
sites via the Internet are not always as efficient.
Sampling and Quantization

To become suitable for digital processing, an image function f(x,y)
must be digitized both spatially and in amplitude. Typically, a frame
grabber or digitizer is used to sample and quantize the analogue
video signal. Hence, to create a digital image, we need to convert
continuous data into digital form. This is done in two steps:
•Sampling
•Quantization
The sampling rate determines the spatial resolution of the
digitized image, while the quantization level determines the
number of grey levels in the digitized image.
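The two steps can be sketched on a 1-D signal (assuming NumPy; the sampling rate and bit depth are illustrative choices, not prescribed values). Sampling discretizes the coordinate; quantization maps each amplitude to one of L = 2^k levels:

```python
import numpy as np

# Sketch: digitize a continuous 1-D signal f(t) = sin(2*pi*t)
fs = 8                                  # sampling rate (samples per unit time)
t = np.arange(0, 1, 1 / fs)             # sampling: discretize the coordinate
samples = np.sin(2 * np.pi * t)         # continuous amplitudes in [-1, 1]

k = 3                                   # bits per sample
L = 2 ** k                              # number of quantization levels (8)
# quantization: map each amplitude to one of L discrete grey levels
levels = np.round((samples + 1) / 2 * (L - 1)).astype(np.uint8)

print(levels.min(), levels.max())       # 0 7 -> values lie in [0, L-1]
```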
Representing Digital Images

 The representation of an M×N numerical array:

$$
f(x,y)=
\begin{bmatrix}
f(0,0) & f(0,1) & \cdots & f(0,N-1) \\
f(1,0) & f(1,1) & \cdots & f(1,N-1) \\
\vdots & \vdots & \ddots & \vdots \\
f(M-1,0) & f(M-1,1) & \cdots & f(M-1,N-1)
\end{bmatrix}
$$
Weeks 1 & 2 76
 Equivalently, in general matrix notation:

$$
A=
\begin{bmatrix}
a_{0,0} & a_{0,1} & \cdots & a_{0,N-1} \\
a_{1,0} & a_{1,1} & \cdots & a_{1,N-1} \\
\vdots & \vdots & \ddots & \vdots \\
a_{M-1,0} & a_{M-1,1} & \cdots & a_{M-1,N-1}
\end{bmatrix}
$$
 In MATLAB, where array indices start at 1, the same array is written:

$$
f(x,y)=
\begin{bmatrix}
f(1,1) & f(1,2) & \cdots & f(1,N) \\
f(2,1) & f(2,2) & \cdots & f(2,N) \\
\vdots & \vdots & \ddots & \vdots \\
f(M,1) & f(M,2) & \cdots & f(M,N)
\end{bmatrix}
$$
 Discrete intensity levels lie in the interval [0, L-1], where L = 2^k.
 The number of bits b required to store an M × N digitized image with
k bits per pixel is

b = M × N × k
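A small sketch of this formula (plain Python; the 1024 × 1024, k = 8 case echoes the mass-storage example):

```python
# b = M x N x k : bits needed to store an M x N image with k bits per pixel
def storage_bits(M, N, k):
    return M * N * k

# A 1024 x 1024 image with k = 8 (L = 256 grey levels)
b = storage_bits(1024, 1024, 8)
print(b // 8)  # 1048576 bytes = 1 megabyte, matching the mass-storage slide
```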

Spatial and gray-level resolution
 Sampling is the principal factor used to
determine the spatial resolution of an image.
Effect of spatial resolution
Aliasing and Moire Patterns
 The Shannon sampling theorem tells us that if
a function is sampled at a rate equal to or
greater than twice its highest frequency (fs ≥ 2fm),
it is possible to recover the original function
completely from its samples.
 If the function is undersampled, a
phenomenon called aliasing (distortion) corrupts
the sampled image (when two patterns or spectra
overlap, the overlapped portion is said to be
aliased). The corruption takes the form of
additional frequency components being
introduced into the sampled function.
These are called aliased frequencies.
 The principal approach to reducing the
aliasing effects on an image is to reduce its
high-frequency components by blurring the
image prior to sampling. However, some aliasing
is always present in a sampled image. Under the
right conditions, the effect of aliased frequencies
can be seen in the form of so-called Moiré patterns.
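A rough illustration of this idea (assuming NumPy; the 3×3 box filter is a stand-in for any low-pass blur): a fine checkerboard subsampled directly aliases away entirely, while blurring first preserves some of the average energy:

```python
import numpy as np

def box_blur(img):
    """3x3 mean filter: a simple low-pass step to attenuate high frequencies."""
    padded = np.pad(img, 1, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy : 1 + dy + img.shape[0],
                          1 + dx : 1 + dx + img.shape[1]]
    return out / 9.0

# A high-frequency checkerboard: subsampling it directly aliases badly
img = np.indices((8, 8)).sum(axis=0) % 2 * 255.0

direct = img[::2, ::2]              # undersampling without filtering
filtered = box_blur(img)[::2, ::2]  # blur first, then sample

print(direct.max())        # 0.0 -> the checkerboard aliased away to solid black
print(filtered.min() > 0)  # True -> pre-filtering kept part of the signal
```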
Moire pattern

A moiré pattern is a secondary, visually evident superimposed pattern
created, for example, when two identical (usually transparent) patterns on a
flat or curved surface (such as closely spaced straight lines drawn radiating
from a point, or taking the form of a grid) are overlaid while displaced or rotated
a small amount from one another.
Zooming and shrinking digital
images
 Zooming may be viewed as oversampling, and
shrinking may be viewed as undersampling.
 Zooming is a method of increasing the size of
a given image.
 Zooming requires two steps: the creation of
new pixel locations, and the assignment of new
grey level values to those locations.
Nearest neighbor interpolation
 Nearest neighbor interpolation is the simplest
method and basically makes the pixels bigger.
 The intensity of a pixel in the new image is
the intensity of the nearest pixel of the
original image.
 If you enlarge 200%, one pixel will be enlarged
to a 2 x 2 area of 4 pixels with the same color
as the original pixel.
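A minimal sketch (assuming NumPy and a hypothetical 2×2 image): a 200% nearest-neighbour zoom turns each pixel into a 2×2 block of the same value:

```python
import numpy as np

gray = np.array([[10, 20],
                 [30, 40]], dtype=np.uint8)

# 200% zoom by nearest-neighbour: each pixel becomes a 2x2 block of its value
zoomed = np.repeat(np.repeat(gray, 2, axis=0), 2, axis=1)
print(zoomed)
# [[10 10 20 20]
#  [10 10 20 20]
#  [30 30 40 40]
#  [30 30 40 40]]
```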
Bilinear interpolation:

 Bilinear interpolation considers the closest
2×2 neighbourhood of known pixel values
surrounding the unknown pixel.
 It then takes a weighted average of these 4
pixels to arrive at its final interpolated value.
This results in much smoother-looking
images than nearest neighbour.
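A hedged sketch of one interpolation step (plain NumPy; `bilinear_sample` is an illustrative helper, not a library function):

```python
import numpy as np

def bilinear_sample(img, y, x):
    """Interpolate img at fractional coordinates (y, x) from its 2x2 neighbourhood."""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1 = min(y0 + 1, img.shape[0] - 1)
    x1 = min(x0 + 1, img.shape[1] - 1)
    wy, wx = y - y0, x - x0
    # weighted average of the four surrounding known pixels
    return ((1 - wy) * (1 - wx) * img[y0, x0] + (1 - wy) * wx * img[y0, x1]
            + wy * (1 - wx) * img[y1, x0] + wy * wx * img[y1, x1])

img = np.array([[  0.0, 100.0],
                [100.0, 200.0]])
print(bilinear_sample(img, 0.5, 0.5))  # 100.0: centre of the 2x2 neighbourhood
```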
Image shrinking

 Image shrinking is done in a manner similar
to zooming. The equivalent of pixel replication
is row-column deletion: for example, to shrink
an image by one-half, we delete every other row
and column.
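Row-column deletion is just strided slicing (assuming NumPy):

```python
import numpy as np

img = np.arange(16, dtype=np.uint8).reshape(4, 4)

# Shrink by one-half via row-column deletion: keep every other row and column
half = img[::2, ::2]
print(half)
# [[ 0  2]
#  [ 8 10]]
```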
