
PORT CITY INTERNATIONAL UNIVERSITY

MID ASSIGNMENT

Course Code: CSE 453

Course Title: Digital Image Processing

Submitted to:
Ms. Taslima Binte Hossain

Lecturer, Computer Science & Engineering

Port City International University

Submitted by:
Jahedul Islam CSE 01506411

B.Sc. in CSE, CSE 15-A (Day)


Question No: 1

Question Name: Explain brightness adaptation and discrimination.

Solution:

Brightness adaptation: Brightness adaptation means that the human visual system can operate only over a limited portion of the range from the scotopic threshold to the glare limit at any one time; it cannot cover the entire range simultaneously. It accomplishes this large variation by changing its overall sensitivity.

Brightness Discrimination: The ability to differentiate between levels of brightness. Discriminating degrees of brightness, however, depends on the visual acuity of the observer, the wavelength of the light, and the observer's perceived sensitivity to light and color.
Question No: 2

Question Name: Explain sampling and quantization.


Solution: An image may be continuous with respect to the x- and y-coordinates and also in amplitude. To convert it to digital form, we have to sample the function in both coordinates and in amplitude.

Sampling: The process of digitizing the coordinate values is called sampling.

• A continuous image f(x, y) is normally approximated by equally spaced samples arranged in the form of an N x M array, where each element of the array is a discrete quantity.
• The sampling rate of the digitizer determines the spatial resolution of the digitized image.
• The finer the sampling (i.e., the larger M and N), the better the approximation of the continuous image function f(x, y), as the sketch below illustrates.
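A minimal sketch of the idea, assuming NumPy and a grayscale image stored as a 2-D array (the helper name subsample is illustrative): coarser sampling can be simulated by keeping only every k-th pixel in each coordinate.

```python
import numpy as np

def subsample(img: np.ndarray, k: int) -> np.ndarray:
    # Keep every k-th pixel in both coordinates: an N x M image
    # becomes a coarser approximation of roughly (N/k) x (M/k).
    return img[::k, ::k]

# Example: a synthetic 256 x 256 gradient image sampled at 1/4 resolution.
img = np.tile(np.arange(256, dtype=np.uint8), (256, 1))
coarse = subsample(img, 4)
print(img.shape, coarse.shape)   # (256, 256) (64, 64)
```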

Quantization: The process of digitizing the amplitude values is called quantization.

• The magnitude of the sampled image is expressed as digital values in image processing.
• The number of quantization levels should be high enough for human perception of fine details in the image.
• Most digital image processing devices quantize into k equal intervals.
• If b bits are used, the number of quantization levels is k = 2^b (see the sketch below).
• 8 bits per pixel is commonly used.
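A minimal sketch of requantizing an 8-bit grayscale image to k = 2^b equal intervals, assuming NumPy (the function name quantize is illustrative):

```python
import numpy as np

def quantize(img: np.ndarray, b: int) -> np.ndarray:
    # Requantize an 8-bit grayscale image to k = 2**b equal intervals.
    k = 2 ** b                   # number of quantization levels
    step = 256 // k              # width of each interval
    return (img // step) * step  # map each pixel to its interval's base value

img = np.tile(np.arange(256, dtype=np.uint8), (256, 1))
print(np.unique(quantize(img, 3)).size)   # 8 distinct levels for b = 3
```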

Question No: 3

Question Name: Explain the Mach band effect.


Solution: The Mach band effect describes how the human visual system subconsciously increases the contrast between two surfaces with different luminance.

Consider a test image in which the luminance of a series of strips increases in a stepwise fashion. Although the luminance within each strip is constant, the apparent lightness of each strip seems to vary across its width: close to its left edge a strip appears lighter than at its centre, and close to its right edge it appears darker than at its centre. The visual system exaggerates the difference in luminance (contrast) at each edge in order to detect it. If you hold a pencil over an edge in such an image, the apparent difference in lightness between the two adjacent strips changes, and they become much harder to tell apart. The edge detection is working to enhance object separation.
Mach banding is caused by lateral inhibition of the receptors in the eye. As receptors receive light, they draw light-sensitive chemical compounds from adjacent regions, thus inhibiting the response of receptors in those regions. Receptors directly on the lighter side of a boundary can pull in unused chemicals from the darker side and thus produce a stronger response, while receptors on the darker side of the boundary produce a weaker response because of that same migration.
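As a rough illustration (a toy model, not a physiological one), lateral inhibition can be imitated by applying a centre-surround kernel to a 1-D step-edge luminance profile, assuming NumPy and SciPy are available. The overshoot and undershoot in the output correspond to the perceived Mach bands:

```python
import numpy as np
from scipy.ndimage import convolve1d

# A 1-D luminance profile with a step edge (dark -> light).
profile = np.concatenate([np.full(50, 80.0), np.full(50, 160.0)])

# Crude lateral-inhibition model: each receptor's response is its own
# input minus a fraction of its neighbours' input (centre-surround kernel,
# weights sum to 1 so flat regions are unchanged).
kernel = np.array([-0.25, -0.25, 2.0, -0.25, -0.25])
response = convolve1d(profile, kernel, mode='nearest')

# The response undershoots just on the dark side of the edge and
# overshoots on the light side -- the Mach band effect.
print(response[45:55].round(1))
```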

Question No: 4
Question Name: Describe the fundamental steps in image
processing.
Solution: In computer science, digital image processing uses algorithms to perform image processing on digital images and extract useful information. Digital image processing has many advantages over analog image processing: a wide range of algorithms can be applied to the input data, avoiding problems such as noise build-up and signal distortion during processing. Since images are defined in two dimensions, DIP can be modeled with multidimensional systems.

Purpose of Image processing


The main purposes of DIP fall into the following five groups:

1. Visualization: Objects that are not directly visible are made observable.
2. Image sharpening and restoration: Used to produce a better-quality image.
3. Image retrieval: An image of interest can be searched for and viewed.
4. Measurement of pattern: The objects in an image are measured.
5. Image recognition: Each object in an image can be distinguished.

Following are Fundamental Steps of Digital Image Processing:


1. Image Acquisition
Image acquisition is the first of the fundamental steps of DIP. In this stage, the image is obtained in digital form. Generally, pre-processing such as scaling is also done in this stage.
2. Image Enhancement
Image enhancement is the simplest and most appealing area of DIP. In this stage, details that are obscured, or simply interesting features of an image, are highlighted, such as brightness, contrast, etc.

3. Image Restoration
Image restoration is the stage in which the appearance of an image is improved; unlike enhancement, it tends to be based on mathematical or probabilistic models of image degradation.

4. Color Image Processing


Color image processing has become an important area because of the increased use of digital images on the internet. It includes color modeling, processing in a digital domain, etc.

5. Wavelets and Multi-Resolution Processing


In this stage, an image is represented at various degrees of resolution. The image is divided into smaller regions for data compression and for pyramidal representation.

6. Compression
Compression is a technique used to reduce the storage required for an image. It is a very important stage because compressing data is essential for internet use.

7. Morphological Processing
This stage deals with tools for extracting image components that are useful in the representation and description of shape.
8. Segmentation
In this stage, an image is partitioned into its constituent objects. Segmentation is one of the most difficult tasks in DIP, and an accurate segmentation goes a long way toward the successful solution of imaging problems that require objects to be identified individually.

9. Representation and Description


Representation and description follow the output of the segmentation stage. That output is raw pixel data, comprising all the points of the region itself. Representation transforms the raw data into a form suitable for further processing, whereas description extracts information that differentiates one class of objects from another.

10. Object recognition


In this stage, a label is assigned to an object based on its descriptors.

11. Knowledge Base

The knowledge base is the last element of DIP. It encodes prior information about the image, which limits the search processes. The knowledge base can be quite complex, for example when the image database contains high-resolution satellite images.

Question No: 5
Question Name: Explain the basic Elements of digital image
processing.
Solution:

Elements of digital image processing systems:

The basic operations performed in a digital image processing systems include

(1) acquisition, (2) storage, (3) processing, (4) communication and (5) display.
Fig: Basic Elements of digital image processing.

Question No: 6
Question Name: What is Image Transform? What are the
applications of transform?
Solution:

Image Transform: A function or operator that takes an image as its input and produces an image as its output. Depending on the transform chosen, the input and output images may appear entirely different and have different interpretations. Fourier transforms, principal component analysis (also called Karhunen-Loeve analysis), and various spatial filters are examples of frequently used image transformation procedures.
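As a small sketch of a transform pair, assuming NumPy: the 2-D Fourier transform takes an image to a frequency-domain representation that looks entirely different, and the inverse transform recovers the original.

```python
import numpy as np

# A transform maps an image to another (possibly very different looking)
# representation; its inverse recovers the original image.
img = np.tile(np.arange(256, dtype=np.float64), (256, 1))

spectrum = np.fft.fftshift(np.fft.fft2(img))              # frequency-domain image
restored = np.fft.ifft2(np.fft.ifftshift(spectrum)).real  # back to spatial domain

print(np.allclose(img, restored))   # True: the Fourier transform pair is lossless
```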

Applications of image transform:


1. Image Correction, Sharpening, and Resolution Correction
We often wish we could make old images better, and nowadays that is possible. Zooming, sharpening, edge detection, and high-dynamic-range edits all fall under this category. All of these steps help enhance the image, and most editing software and image-correction code can do them easily.
2. Filters on Editing Apps and Social Media
Most editing apps and social media apps provide filters these days.

Comparing an original image with its filtered version shows that filters make the image more visually appealing. Filters are usually a set of functions that change the colors and other aspects of an image so that it looks different. They are an interesting application of image processing.

3. Medical Technology:
In the medical field, image processing is used for tasks such as PET scans, X-ray imaging, medical CT, UV imaging, cancer-cell image processing, and much more. The introduction of image processing to medical technology has greatly improved the diagnostic process: a processed image is often far clearer than the original and can be used for better diagnostics.

4. Computer / Machine Vision:


One of the most interesting and useful applications of image processing is in computer vision. Computer vision is used to make a computer see, identify things, and process its entire environment. Important uses of computer vision include self-driving cars and drones, where CV helps with obstacle detection, path recognition, and understanding the environment. In a typical car-autopilot pipeline, the computer takes in live footage and analyses other cars, the road, and other obstacles.
5. Pattern recognition:
Pattern recognition is a part of image processing that involves AI and machine learning. Image processing is used to find patterns and features in images, and pattern recognition is used for handwriting analysis, image recognition, computer-aided medical diagnosis, and much more.

6. Video Processing:
Video is basically a fast sequence of images, so various image processing techniques are used in video processing. Some methods of video processing are noise removal, image stabilization, frame-rate conversion, detail enhancement, and much more.

Question No: 7
Question Name: Explain Histogram processing.
Solution:

Histogram: A histogram is a graph that shows the frequency of occurrence of something. Usually a histogram has bars that represent how often each value occurs in the whole data set. A histogram has two axes, the x-axis and the y-axis: the x-axis contains the events whose frequency you want to count, and the y-axis contains the frequencies. The different heights of the bars show the different frequencies of occurrence of the data.

Histogram Processing Techniques:


Histogram Sliding
In histogram sliding, the complete histogram is shifted to the right or to the left. When a histogram is shifted, clear changes are seen in the brightness of the image; the brightness of an image is determined by the overall intensity of its pixel values.
Histogram Stretching
In histogram stretching, the contrast of an image is increased. The contrast of an image is defined by the difference between the maximum and minimum pixel intensity values.

If we want to increase the contrast of an image, its histogram is stretched so that it covers the full dynamic range.

From the histogram of an image, we can check whether the image has low or high contrast.
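A minimal sketch of contrast stretching, assuming NumPy (the helper name stretch is illustrative): pixel intensities are linearly rescaled so the minimum maps to 0 and the maximum to 255.

```python
import numpy as np

def stretch(img: np.ndarray) -> np.ndarray:
    # Linearly stretch pixel intensities to cover the full 0..255 range.
    lo, hi = img.min(), img.max()
    return ((img - lo) * 255.0 / (hi - lo)).astype(np.uint8)

# A low-contrast image whose values occupy only the 100..150 range.
low = np.random.randint(100, 151, size=(64, 64)).astype(np.uint8)
print(low.min(), low.max(), stretch(low).min(), stretch(low).max())
# typically: 100 150 0 255
```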

Histogram Equalization
Histogram equalization redistributes the pixel values of an image. The transformation is done in such a way that an approximately uniform, flattened histogram is produced.

Histogram equalization increases the dynamic range of the pixel values and spreads the pixel counts across the levels, which produces a near-flat histogram and a high-contrast image.

While stretching a histogram preserves its shape, histogram equalization changes the shape of the histogram, and, since the mapping is fully determined by the image itself, it generates only one possible output image.
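A minimal sketch of histogram equalization via the normalized cumulative histogram (CDF), assuming NumPy; this simplified mapping omits the masking of empty bins that some textbook variants apply.

```python
import numpy as np

def equalize(img: np.ndarray) -> np.ndarray:
    # Map each intensity through the normalized cumulative histogram (CDF),
    # spreading the pixel values over the full 0..255 range.
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    cdf = hist.cumsum()
    cdf_norm = (cdf - cdf.min()) * 255.0 / (cdf.max() - cdf.min())
    return cdf_norm[img].astype(np.uint8)   # look up each pixel's new value

low = np.random.randint(100, 151, size=(64, 64)).astype(np.uint8)
print(equalize(low).min(), equalize(low).max())   # roughly 0..255 after equalization
```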

Question No: 8
Question Name: Differentiate scotopic and photopic vision.
Solution:

Scotopic vision uses only rods, meaning that objects are visible but appear in black and white, whereas photopic vision uses cones and provides colour. Mesopic vision is the combination of the two and is used in most scenarios.

Photopic Vision:
Photopic vision typically dominates under normal lighting conditions, for instance during daytime. It is based on three types of cones, which are sensitive to the short, middle, and long wavelength ranges that generally appear blue, green, and red, respectively, to the human eye.

Cones are limited in terms of light sensitivity. Vision above 3 cd/m² is based on photopic vision, which allows for good color discrimination. In 1924, the Commission Internationale de l'Eclairage (CIE) defined a general photopic spectral sensitivity function of the average human eye on the basis of several experiments.

The photopic sensitivity function, called ȳ(λ) or V(λ), covers the midrange of the visual spectrum and is the basis of the response of light meters. Its graph shows that the human eye is not equally sensitive to light over the whole visual spectrum: the peak sensitivity is concentrated around 555 nm.

Fig: Photopic vision curve.


Scotopic Vision:
Rods are more sensitive to light than cones. However, rods are not sensitive to different colors, as there is only one kind of rod; for this reason, human vision cannot distinguish colors under low-light conditions. Rods are, however, very effective under low-light conditions below 0.001 cd/m².

This type of vision is referred to as scotopic vision, whose relative sensitivity was defined by the CIE in 1951 as V′(λ). The highest sensitivity of scotopic vision is found at a wavelength of about 507 nm. The Purkinje effect is the shift in peak sensitivity when switching between scotopic and photopic vision. Light levels between the photopic and scotopic ranges are mediated by a combination of cones and rods, which is called mesopic vision.

Fig: Scotopic vision curve.


Question No: 9
Question Name: There are two broad categories of image enhancement
techniques; Spatial domain techniques and Frequency domain techniques.
Distinguish between these two techniques
Solution: Image enhancement is basically improving the interpretability or perception of the information in images for human viewers and providing better input for other automated image processing techniques. The principal objective of image enhancement is to modify the attributes of an image to make it more suitable for a given task and a specific observer. During this process, one or more attributes of the image are modified.

There exist many techniques that can enhance a digital image without spoiling it.
The enhancement methods can broadly be divided into the following two categories:
I. Spatial Domain Techniques
II. Frequency Domain Techniques
Spatial Domain Techniques: In spatial domain techniques, we deal directly with the image pixels. The pixel values are manipulated to achieve the desired enhancement: the value of a pixel with coordinates (x, y) in the enhanced image F is the result of performing some operation on the pixels in the neighborhood of (x, y) in the input image f.
Frequency Domain Techniques: In frequency domain techniques, the image is first transformed into the frequency domain; that is, the Fourier transform of the image is computed first. All the enhancement operations are performed on the Fourier transform of the image, and then the inverse Fourier transform is applied to obtain the resultant image.
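The contrast between the two approaches can be sketched as follows, assuming NumPy and SciPy. A 3x3 mean filter operates directly on pixel neighborhoods (spatial domain), while an ideal low-pass filter operates on the Fourier transform (frequency domain); both smooth the image, but they work in different representations. The filter sizes and cutoff radius here are illustrative choices.

```python
import numpy as np
from scipy.ndimage import uniform_filter

img = np.random.rand(128, 128)   # a stand-in grayscale image

# Spatial domain: operate directly on pixel neighborhoods (3x3 mean filter).
smooth_spatial = uniform_filter(img, size=3)

# Frequency domain: transform, suppress high frequencies, transform back.
F = np.fft.fftshift(np.fft.fft2(img))
rows, cols = img.shape
r, c = np.ogrid[:rows, :cols]
mask = (r - rows // 2) ** 2 + (c - cols // 2) ** 2 <= 20 ** 2  # ideal low-pass
smooth_freq = np.fft.ifft2(np.fft.ifftshift(F * mask)).real

print(smooth_spatial.shape, smooth_freq.shape)   # both (128, 128), both smoothed
```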

The End
