11000120032-Soham Chakraborty IP
IMAGE
The mapping of a 3D world object onto a 2D digital image plane is called imaging. To do so, each point on the 3D object must correspond to a point on the image plane.
COLOUR AND PIXELATION:
In digital imaging, a frame grabber, which acts like a sensor, is placed at the image plane. Light reflected by the 3D object is focused onto it, and the continuous image is pixelated. The light focused on the sensor generates an electronic signal.
Each pixel that is formed may be coloured or grey, depending on the sampling and quantization of the reflected light and the electronic signal it generates.
Together, these pixels form a digital image. The density of the pixels determines the image quality: the higher the density, the clearer and higher-resolution the image we get.
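The idea that a digital image is just a dense grid of pixel intensities can be sketched in a few lines of NumPy. The values below are arbitrary illustration data, not from any real sensor; the upsampling here only duplicates samples to show how pixel count grows with density, it does not add real detail.

```python
import numpy as np

# A digital image is a 2D grid of pixel intensities.
# A hypothetical 4x4 grayscale "image" with values in 0..255:
low_res = np.array([
    [  0,  64, 128, 192],
    [ 64, 128, 192, 255],
    [128, 192, 255, 192],
    [192, 255, 192, 128],
], dtype=np.uint8)

# Doubling the pixel density over the same scene area (8x8 instead
# of 4x4) means four times as many samples, hence a sharper image.
high_res = np.repeat(np.repeat(low_res, 2, axis=0), 2, axis=1)

print(low_res.shape)   # (4, 4)  -> 16 pixels
print(high_res.shape)  # (8, 8)  -> 64 pixels
```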
A simple model of image formation:
The scene is illuminated by a single source.
The scene reflects radiation towards the camera.
The camera senses it via solid-state cells (CCD cameras).
GEOMETRIC TRANSFORMATION
A set of image transformations in which the geometry of the image is changed without altering its actual pixel values is commonly referred to as a "geometric" transformation. In general, you can apply multiple such operations, but the pixel values themselves remain unchanged; only the positions of the pixel values change.
An affine transformation is a transformation that preserves collinearity and the ratio of distances (for example, the midpoint of a line segment is still the midpoint after the transformation).
Parallel lines in the original image will remain parallel in the output image.
In general, an affine transformation is a composition of translations, rotations, shears, and
magnifications.
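These properties can be checked numerically. Below is a minimal NumPy sketch: the rotation angle, shear/magnification matrix, and translation vector are arbitrary choices for illustration, and the check verifies that the midpoint of a segment maps to the midpoint of the mapped segment.

```python
import numpy as np

# Affine transform: p' = A @ p + t  (rotation composed with
# shear + magnification, followed by a translation).
theta = np.pi / 6                                   # 30-degree rotation
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
S = np.array([[2.0, 0.5],                           # magnify x, shear
              [0.0, 1.0]])
A = R @ S
t = np.array([3.0, -1.0])                           # translation

def affine(p):
    return A @ p + t

p1, p2 = np.array([0.0, 0.0]), np.array([4.0, 2.0])
mid = (p1 + p2) / 2

# Ratio of distances is preserved: the midpoint stays the midpoint.
assert np.allclose(affine(mid), (affine(p1) + affine(p2)) / 2)

# Collinearity is preserved: the three mapped points still lie on one line.
q1, qm, q2 = affine(p1), affine(mid), affine(p2)
assert abs(np.cross(q2 - q1, qm - q1)) < 1e-9
```

Because the map is linear plus a constant offset, both checks hold exactly, not just approximately.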
Digitization
In digital image processing, signals captured from the physical world need to be translated into digital form by the "digitization" process.
To become suitable for digital processing, an image function f(x,y) must be digitized both spatially and in amplitude. This digitization involves two main processes:
Sampling: Digitizing the coordinate values is called sampling.
Quantization: Digitizing the amplitude values is called quantization.
Typically, a frame grabber or digitizer is used to sample and quantize.
Sampling
Since an analogue image is continuous not just in its coordinates (x axis) but also in its amplitude (y axis), the part that deals with digitizing the coordinates is known as sampling.
In digitizing, sampling is done on the independent variable. For the equation y = sin(x), it is done on the x variable.
Clearly, the more samples we take, the better the image quality and the more noise is removed, and vice versa.
However, sampling on the x axis alone does not convert the signal to digital form; you must also sample the y axis, which is known as quantization.
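The y = sin(x) example above can be made concrete: take N evenly spaced samples of one period and measure how far a crude sample-and-hold reconstruction strays from the true signal. The sample counts and the hold-style reconstruction are illustrative choices, not part of the original notes.

```python
import numpy as np

# Sampling: digitize the independent variable x of y = sin(x)
# by taking n evenly spaced samples over one period.
def sample_sine(n_samples):
    x = np.linspace(0, 2 * np.pi, n_samples, endpoint=False)
    return x, np.sin(x)

x_fine = np.linspace(0, 2 * np.pi, 1000)   # dense grid standing in for "continuous"

def reconstruction_error(n_samples):
    x, y = sample_sine(n_samples)
    # Zero-order hold: each fine point takes the most recent sample's value.
    y_hat = y[np.searchsorted(x, x_fine, side='right') - 1]
    return np.max(np.abs(y_hat - np.sin(x_fine)))

# More samples -> smaller error, i.e. a more faithful digitization.
for n in (8, 32, 128):
    print(n, reconstruction_error(n))
```

The printed error shrinks as the sample count grows, which is the "more samples, better quality" claim in numbers.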
Image Sampling - Example: original image; sampled by a factor of 2; sampled by a factor of 4; sampled by a factor of 8. (Images have been resized for easier comparison.)
Uniform Sampling vs Non-uniform Sampling
Quantization:
Quantization is the opposite of sampling: it is done on the y axis, while sampling is done on the x axis.
Quantization is the process of transforming a real-valued sampled image into one taking only a finite number of distinct values.
Under quantization, the amplitude values of the image are digitized. In simple words, when you quantize an image, you divide the signal into quanta (partitions).
Image Quantization - Example: 256 gray levels (8 bits/pixel); 32 gray levels (5 bits/pixel); 16 gray levels (4 bits/pixel).