Difference Between Computer Vision, Image Processing and Computer Graphics

Computer vision

is a field of computer science that aims to enable computers to process and identify images
and videos in the same way that human vision does.
It aims to mimic the human visual system.
Its objective is to build artificial systems that can extract information from images, i.e. to make
computers understand images and videos.
Difference between computer vision, image processing and computer graphics
In Computer Vision (image analysis, image interpretation, scene understanding), the
input is an image and the output is an interpretation of the scene. Image analysis is concerned with
making quantitative measurements from an image to give a description of the image.
In Image Processing (image recovery, reconstruction, filtering, compression,
visualization), the input is an image and the output is also an image.
In Computer Graphics, the input is a model of a real-world scene and the output is an image.
Computer vision builds a model from images (analysis), whereas computer graphics takes a
model as input and converts it into an image (synthesis).
The image formation process can be mathematically represented as:
Image = PSF ∗ Object function + Noise
where ∗ denotes convolution. The object function is the object or scene that is being imaged.
The point spread function (PSF) is the impulse response of the imaging system when the input
and output are light intensities.
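To make the model concrete, the following sketch simulates image formation by convolving a synthetic object function with a Gaussian PSF and adding noise; the Gaussian PSF, the synthetic scene, and the noise level are illustrative assumptions.

```python
# Minimal sketch of Image = PSF * Object function + Noise (NumPy/SciPy).
import numpy as np
from scipy.signal import convolve2d

def gaussian_psf(size=9, sigma=1.5):
    """Build a normalized 2-D Gaussian point spread function."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

# Object function: a synthetic scene (a bright square on a dark background).
obj = np.zeros((64, 64))
obj[24:40, 24:40] = 1.0

psf = gaussian_psf()
blurred = convolve2d(obj, psf, mode="same", boundary="symm")  # PSF * Object function
noise = np.random.normal(scale=0.02, size=obj.shape)          # additive noise
image = blurred + noise                                        # recorded image
```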
Signal processing is a discipline in electrical engineering and mathematics that deals with the
analysis and processing of analog and digital signals, including storing, filtering, and other
operations on signals.
Image processing is the field that deals with signals for which the input is an image and the
output is also an image. As its name suggests, it deals with the processing of images.
Image processing basically includes the following three steps (see the sketch after the list):

1. Importing the image via image acquisition tools;
2. Analyzing and manipulating the image;
3. Output, in which the result can be an altered image or a report based on the image analysis.
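As an illustration of these three steps, here is a minimal sketch assuming the Pillow library is available; the file names are hypothetical.

```python
# Minimal sketch of the three steps using Pillow (assumed to be installed).
from PIL import Image, ImageFilter

img = Image.open("input.jpg")                       # 1. import via an acquisition tool
gray = img.convert("L")                             # 2. analyze/manipulate: grayscale
smoothed = gray.filter(ImageFilter.GaussianBlur(radius=2))
smoothed.save("output.png")                         # 3. output: an altered image ...
mean = sum(gray.getdata()) / (gray.width * gray.height)
print("mean intensity:", mean)                      # ... or a simple report
```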

Two methods are used for image processing, namely analog image processing and digital image
processing.
Analog image processing is done on analog signals, i.e. on two-dimensional analog signals:
images are manipulated by electrical means, by varying the electrical signal. Examples are
television images and hard copies such as printouts and photographs.
Digital image processing means developing a digital system that performs operations on a
digital image; it helps in manipulating digital images using computers. The three general phases
that all types of data undergo in the digital technique are pre-processing, enhancement and
display, and information extraction.
Fundamental steps in image processing (see the sketch after the list):

1. Image acquisition: to acquire a digital image.
2. Image preprocessing: to improve the image in ways that increase the chances for success of
the other processes.
3. Image segmentation: to partition an input image into its constituent parts or objects.
4. Image representation: to convert the input data to a form suitable for computer processing.
5. Image description: to extract features that result in some quantitative information of interest
or features that are basic for differentiating one class of objects from another.
6. Image recognition: to assign a label to an object based on the information provided by its
descriptors.
7. Image interpretation: to assign meaning to an ensemble of recognized objects.
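The sketch below walks through these steps on a grayscale image using Pillow, NumPy, and SciPy; the file name, the global threshold, and the toy area-based recognition rule are illustrative assumptions, not a prescribed method.

```python
# Minimal end-to-end sketch of the fundamental steps (Pillow/NumPy/SciPy assumed).
import numpy as np
from PIL import Image
from scipy import ndimage

# 1. Image acquisition: load a digital image (hypothetical file name).
img = np.asarray(Image.open("scene.png").convert("L"), dtype=float)

# 2. Image preprocessing: smooth to suppress noise before segmentation.
smooth = ndimage.gaussian_filter(img, sigma=1.0)

# 3. Image segmentation: a simple global threshold separates objects from background.
mask = smooth > smooth.mean()

# 4. Image representation: label each connected component as a separate object.
labels, n_objects = ndimage.label(mask)

# 5. Image description: extract a quantitative feature (area in pixels) per object.
areas = ndimage.sum(mask, labels, index=np.arange(1, n_objects + 1))

# 6. Image recognition: assign a label based on the descriptor (toy size rule).
classes = ["large" if a > 500 else "small" for a in areas]

# 7. Image interpretation: summarize the ensemble of recognized objects.
print(f"{n_objects} objects found:", dict(zip(range(1, n_objects + 1), classes)))
```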

An image is nothing more than a two-dimensional signal: a function f(x, y) whose value at the
spatial coordinates (x, y) is the intensity of the image at that point.


Applications of Digital Image Processing
Digital image processing is widely used in image sharpening and restoration, the medical field,
remote sensing, transmission and encoding, machine/robot vision, color processing, pattern
recognition, and video processing.
Components of Image Processing System
An image processing system is the combination of the different elements involved in digital
image processing. Digital image processing is the processing of an image by means of a digital
computer.

1. Image Sensors: sense the intensity, amplitude, coordinates and other features of the
images and pass the result to the image processing hardware. This component includes the
problem domain.
2. Image Processing Hardware: the dedicated hardware that processes the instructions
obtained from the image sensors and passes the result to the general-purpose computer.
3. Computer: the general-purpose computer used in the image processing system, of the same
kind we use in our daily life.
4. Image Processing Software: the software that includes all the mechanisms and algorithms
used in the image processing system.
5. Mass Storage: stores the pixels of the images during processing.
6. Hard Copy Device: once the image is processed, it is stored in a hard copy device, which
can be a pen drive or any external ROM device.
7. Image Display: the monitor or display screen that displays the processed images.
8. Network: the connection of all the above elements of the image processing system.

Image enhancement involves different techniques that are used to improve the visual quality or
appearance of an image. Image enhancement is the improvement of image quality without
knowledge of the source of degradation.

If the source of degradation is known, then the process of image quality improvement is called
“image restoration.”
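As one illustration (the choice of technique here is an assumption of this sketch, not something stated above), histogram equalization is a common enhancement that improves global contrast without any model of the degradation:

```python
# Minimal NumPy sketch of histogram equalization for an 8-bit grayscale array.
import numpy as np

def equalize_histogram(img):
    """Spread the intensity histogram of a uint8 image to improve global contrast."""
    hist, _ = np.histogram(img.ravel(), bins=256, range=(0, 256))
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())   # normalized cumulative histogram
    lut = np.round(cdf * 255).astype(np.uint8)          # look-up table: old level -> new level
    return lut[img]                                     # remap every pixel
```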

Image filtering is a process that modifies the pixels of an image based on some function of a local
neighborhood of the pixel (a neighborhood operation).

Some generally used image filtering techniques are listed below (a minimal sketch follows the list):


• Smoothing filters: These filters are sometimes called averaging filters or low-pass filters. In this
method, each pixel is replaced by the average of the pixels contained in the neighborhood of the
filter mask.

• Sharpening filters: These are used to highlight fine details in an image. Sharpening can be done
by differentiation: the Laplacian (a second-order derivative) is computed at each pixel and
combined with the original image.

• Unsharp masking and high-boost filters: These can be obtained by subtracting a blurred version
of an image from the image itself.

• Median filters: These are used to remove salt-and-pepper noise from an image.
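The sketch below applies each of these filters to a grayscale array with scipy.ndimage; the kernel sizes, the high-boost gain, and the placeholder image are illustrative choices.

```python
# Minimal sketch of the filtering techniques above (scipy.ndimage assumed).
import numpy as np
from scipy import ndimage

img = np.random.default_rng(0).random((128, 128))     # placeholder grayscale image

smoothed = ndimage.uniform_filter(img, size=3)         # smoothing / averaging (low pass)
sharpened = img - ndimage.laplace(img)                 # sharpening with the Laplacian
blurred = ndimage.gaussian_filter(img, sigma=2)
unsharp = img + 1.5 * (img - blurred)                  # unsharp masking / high boost (gain 1.5)
denoised = ndimage.median_filter(img, size=3)          # median filter for salt-and-pepper noise
```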

Spatial domain filtering
The neighborhood of a point (x, y) can be defined by using a square area centered at (x, y); this
square/rectangular area is called a “mask” or “filter.”

The Fourier transform is an image processing tool used to decompose an image into its sine
and cosine components. Important applications of the Fourier transform are image analysis,
image filtering, image reconstruction and image compression.
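As a minimal illustration, the sketch below uses NumPy's FFT to decompose an image into its frequency components and apply an ideal low-pass filter; the cutoff radius and the placeholder image are assumptions.

```python
# Minimal sketch of frequency-domain filtering with the Fourier transform (NumPy).
import numpy as np

img = np.random.default_rng(1).random((128, 128))      # placeholder grayscale image

F = np.fft.fftshift(np.fft.fft2(img))                  # decompose into frequency components
rows, cols = img.shape
y, x = np.ogrid[:rows, :cols]
dist = np.hypot(y - rows / 2, x - cols / 2)            # distance from the spectrum center
lowpass = dist <= 20                                   # ideal low-pass mask (cutoff radius 20)
filtered = np.real(np.fft.ifft2(np.fft.ifftshift(F * lowpass)))  # back to the spatial domain
```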

Homomorphic filtering is used to improve the appearance of an image by simultaneous intensity
range compression and contrast enhancement.

Digital images are formed from an optical image. An optical image has two primary components:
the illumination component, which corresponds to the lighting condition of the scene (incident
illumination), and the reflectance component, which corresponds to the way the objects in the
scene reflect light.
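Because the recorded image can be modeled as the product of these two components, taking the logarithm turns the product into a sum whose low-frequency (illumination) and high-frequency (reflectance) parts can be weighted separately. The sketch below is a minimal homomorphic filter along those lines; the gain values and cutoff are illustrative assumptions, and img is assumed to be a positive grayscale array.

```python
# Minimal sketch of homomorphic filtering (NumPy); gains and cutoff are illustrative.
import numpy as np

def homomorphic_filter(img, gamma_low=0.5, gamma_high=2.0, cutoff=30.0):
    rows, cols = img.shape
    log_img = np.log1p(img)                            # log turns illumination*reflectance into a sum
    F = np.fft.fftshift(np.fft.fft2(log_img))

    y, x = np.ogrid[:rows, :cols]
    dist2 = (y - rows / 2) ** 2 + (x - cols / 2) ** 2
    # Gaussian high-frequency emphasis: compress low frequencies (illumination)
    # and boost high frequencies (reflectance), i.e. intensity range compression
    # plus contrast enhancement.
    H = gamma_low + (gamma_high - gamma_low) * (1 - np.exp(-dist2 / (2 * cutoff**2)))

    filtered = np.real(np.fft.ifft2(np.fft.ifftshift(F * H)))
    return np.expm1(filtered)                          # undo the log
```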
