EE4235 - 2k18 - L03 - Stages of Image Processing


EE 4235
Digital Image Processing: Introduction - History

Mohiuddin Ahmad
Dept. of EEE, KUET
Date: 21.08.2023
Key Stages in Digital Image Processing
Analog Image: An analog image is a continuous representation of visual
information in which the intensity or color values vary smoothly and continuously
across the image. Analog images are captured using traditional optical
processes and can take on an infinite number of values within a certain range.
They are typically captured on film or recorded on light-sensitive materials.
Example of an Analog Image: A photograph captured on traditional photographic
film is an example of an analog image. The image is formed by exposing the film
to light, and the resulting image contains a continuous range of tones and
colors.
Digital Image: A digital image is a discrete representation of visual information in
which the intensity or color values are represented as numerical values at
specific grid points (pixels). Digital images are created through sampling and
quantization processes, and they can be stored, manipulated, and transmitted
using digital technology.
Example of a Digital Image: A photograph captured with a digital camera is an
example of a digital image. In this case, the image is captured using a sensor
that converts light into digital data. The image is made up of discrete pixels,
each represented by numerical values that indicate color and intensity.
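Conceptually, a digital image is just an array of numbers. The following minimal Python sketch (assuming NumPy and Pillow are installed and that a file named photo.jpg exists) shows what those sampled, quantized pixel values look like:

```python
# A digital image as a grid of quantized numbers; "photo.jpg" is an assumed filename.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("photo.jpg"))  # decode the file into a pixel array
print(img.shape)   # e.g. (height, width, 3) for an RGB image
print(img.dtype)   # typically uint8: each sample quantized to the range 0-255
print(img[0, 0])   # the numerical colour values of the top-left pixel
```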
Key Stages in Digital Image Processing
Analog Image Examples:
1. Photographic Film Prints: Traditional photographs developed from film negatives are analog
images. These images are created through chemical processes that capture light on the film's
emulsion.
2. Paintings and Drawings: Handcrafted artwork using paints, pencils, or other artistic media are
analog representations of visual scenes.
3. Printed Art: Images found in magazines, newspapers, books, and other printed materials that are
produced through traditional printing methods.
4. Analog Television Broadcasts: Analog TV signals that were transmitted over the airwaves
before the transition to digital broadcasting.
5. Analog Oscilloscope Displays: Graphs and waveforms displayed on analog oscilloscopes,
which use cathode ray tubes to visualize electronic signals.
6. Analog Photographs in Old Books: Older books that contain photographs, engravings, or other
printed images produced before the digital era.
7. Film Projector Slides: Images displayed using traditional film projectors in settings such as
presentations or home movie screenings.
8. Vinyl Record Covers: Album covers for vinyl records, which often feature analog artwork and
designs.
9. Pinhole Camera Images: Images captured with pinhole cameras, which have no digital sensor and record the
scene directly onto light-sensitive material.
10. Analog Art Photography: Photography produced using alternative processes such as
cyanotype, tintype, or daguerreotype, which are considered analog due to their chemical nature.
Key Stages in Digital Image Processing
Digital image processing involves a series of key stages or steps to manipulate and analyze images using
digital techniques. These stages collectively form a comprehensive process for working with digital
images. The typical stages in digital image processing are as follows:
1. Image Acquisition: This stage involves capturing or obtaining the raw image data using devices like cameras, scanners,
satellites, or sensors. The acquired image may be in grayscale or color and could be captured in various environments or
conditions.
2. Image Preprocessing: Preprocessing aims to improve the quality and suitability of the raw image for subsequent processing. It
involves operations like noise reduction, contrast enhancement, and image normalization. Preprocessing helps ensure that the
input data is in a form that is easier to work with and more suitable for analysis.
3. Image Enhancement: Enhancement techniques modify pixel values to improve the visual quality of an image. Processes like
adjusting brightness, contrast, and sharpness, as well as applying color correction, are used to highlight specific features and
make the image more appealing.
4. Image Restoration: Restoration techniques focus on recovering lost or degraded image details caused by factors like noise, blur,
or compression artifacts. These techniques aim to restore the original image quality and improve its perceptual fidelity.
5. Image Segmentation: Segmentation involves dividing an image into meaningful regions or objects based on certain criteria like
color, intensity, or texture. This stage is important for isolating and identifying specific areas of interest.
6. Object Detection and Recognition: This stage involves identifying and locating objects or patterns of interest within an image.
Techniques like template matching, feature detection, and machine learning are commonly used for object detection and
recognition tasks.
7. Image Representation and Description: In this stage, the processed image is represented and described using specific features
or descriptors that capture its important characteristics. This representation can be used for further analysis or comparison.
8. Image Analysis and Interpretation: Analysis involves extracting meaningful information from the image representation. This
stage may include object counting, measuring object properties, and identifying relationships between objects.
9. Image Understanding: At this stage, the results of analysis and interpretation are used to draw conclusions, make
decisions, or take actions based on the information extracted from the image.
10. Color Image Processing: Color image processing deals with the analysis and manipulation of color information in images. It
includes color space conversion, color filtering, color correction, and various other techniques specific to working with color
images.
11. Image Compression: Compression methods are used to reduce the size of images for efficient storage, transmission, and
processing. Lossless and lossy compression techniques are applied to preserve image quality while reducing data size.
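As a rough illustration of how a few of these stages chain together in code, the sketch below (using OpenCV; the filename and parameter values are illustrative, not part of the lecture) reads an image, reduces noise, enhances contrast, and produces a simple segmentation mask:

```python
# A minimal pipeline sketch: acquisition -> preprocessing -> enhancement -> segmentation.
# "input.png" and the filter sizes are assumptions for illustration.
import cv2

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)          # 1. image acquisition (from file)
denoised = cv2.GaussianBlur(img, (5, 5), 0)                   # 2. preprocessing: noise reduction
enhanced = cv2.equalizeHist(denoised)                         # 3. enhancement: contrast
_, mask = cv2.threshold(enhanced, 0, 255,
                        cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # 5. segmentation: Otsu threshold
cv2.imwrite("mask.png", mask)                                 # later stages would analyse this mask
```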
Key Stages in Digital Image Processing

[Figure: block diagram of the key stages in digital image processing - Problem Domain, Image Acquisition, Image Enhancement, Image Restoration, Morphological Processing, Segmentation, Representation & Description, Object Recognition, Colour Image Processing, and Image Compression]
Key Stages in Digital Image Processing
Image acquisition refers to the process of capturing visual data from the real world and converting it into a digital format that can be
stored and processed by computers or other digital devices. This process involves using various sensors, cameras, scanners, or other
devices to capture the visual information from the scene.

Here are some examples of image acquisition:


1. Digital Cameras: Cameras, whether on smartphones or dedicated digital cameras, capture images using sensors that convert
incoming light into digital signals. These images can be stored as image files on memory cards or internal storage.
2. Webcams: Webcams are commonly used for video calls, live streaming, and capturing images. They are often integrated into
laptops, computers, or external devices.
3. Scanners: Flatbed scanners are used to convert physical documents, photographs, and printed images into digital form by capturing
the image line by line.
4. Satellite and Aerial Imagery: Satellites and aerial vehicles capture images of Earth's surface for applications such as mapping,
environmental monitoring, and urban planning.
5. Medical Imaging Devices: Modalities like X-rays, MRI (Magnetic Resonance Imaging), CT (Computed Tomography), and ultrasound
produce medical images used for diagnosis and treatment planning.
6. Microscopes: Microscopes equipped with digital cameras allow scientists to capture images of microscopic structures for research
and analysis.
7. Security Cameras: Surveillance cameras capture images and videos in various environments, such as public spaces, buildings, and
homes, for security and monitoring purposes.
8. Document Cameras: Used in classrooms and presentations, document cameras capture real-time images of documents, objects, or
drawings and display them on a screen or projector.
9. Industrial Cameras: In manufacturing and quality control, cameras are used to inspect products for defects and ensure consistent
quality.
10. Automotive Cameras: Cameras in vehicles capture images for driver assistance systems, such as lane departure warnings, parking
assistance, and collision avoidance.
11. Smartphones: Built-in cameras in smartphones capture images and videos, enabling users to document moments and share visual
content.
12. Drones: Drones equipped with cameras capture aerial images and videos for applications like aerial photography, mapping,
agriculture, and surveillance.
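In code, acquisition amounts to asking a device for a frame and receiving an array of pixel values. A minimal sketch with OpenCV, assuming a webcam is reachable as device index 0:

```python
# Grab one frame from the default camera; the device index 0 is an assumption.
import cv2

cap = cv2.VideoCapture(0)
ok, frame = cap.read()       # frame is a NumPy array of pixel values if ok is True
cap.release()

if ok:
    cv2.imwrite("captured.png", frame)
    print("Captured a frame of shape", frame.shape)   # (rows, columns, channels)
else:
    print("No frame captured - check that a camera is connected")
```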
Key Stages in Digital Image Processing
Image enhancement refers to the process of improving the visual quality of an image to make it more suitable for
human perception or for analysis by automated systems. It involves techniques that modify the image's properties
such as brightness, contrast, sharpness, color balance, and more, with the aim of highlighting certain features or
improving overall visual appeal.
Here are a few examples of image enhancement:
1. Brightness and Contrast Adjustment: This involves changing the overall brightness and contrast levels of an
image to make it clearer. For instance, enhancing the visibility of details in a photograph taken in low-light
conditions.
2. Histogram Equalization: This method redistributes the pixel intensity values across the entire range to enhance
the image's contrast and improve its visual appearance.
3. Sharpening: Image sharpening techniques emphasize edges and details, making the image look crisper and
more defined. This is useful for enhancing the clarity of photographs.
4. Color Correction: Adjusting the color balance, saturation, and hue of an image can make it more vibrant and
true-to-life. For instance, correcting the color cast in a photo taken under unusual lighting conditions.
5. Noise Reduction: Removing or reducing the unwanted noise (random variations in brightness or color) in an
image, which can result from factors like low-light conditions or sensor limitations.
6. Spatial Filtering: Applying filters to an image to enhance certain features. For example, a high-pass filter can
enhance edges, while a low-pass filter can reduce noise.
7. Contrast Stretching: This method increases the contrast in an image by expanding the range of intensity values,
making dark areas darker and light areas lighter.
8. Saturation Adjustment: Enhancing or reducing the intensity of colors in an image to make it more visually
appealing or to highlight certain aspects.
9. Gamma Correction: Adjusting the gamma value of an image can change its brightness levels, often used to
correct the display characteristics to match human perception.
10. Super-Resolution: Using algorithms to enhance the resolution of an image, improving its clarity and detail.
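Two of the techniques above, contrast stretching (item 7) and gamma correction (item 9), can be expressed directly on the pixel array. A minimal sketch, assuming a grayscale input file and an illustrative gamma value:

```python
import cv2
import numpy as np

img = cv2.imread("dark_photo.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)  # assumed filename

# Contrast stretching: map the observed min..max range onto the full 0..255 range.
stretched = (img - img.min()) / max(float(img.max() - img.min()), 1e-6) * 255.0

# Gamma correction: gamma < 1 brightens mid-tones, gamma > 1 darkens them.
gamma = 0.7
corrected = 255.0 * (stretched / 255.0) ** gamma

cv2.imwrite("enhanced.png", corrected.astype(np.uint8))
```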
Key Stages in Digital Image Processing
Image restoration is the process of improving the quality of a degraded or damaged image to recover its original or
better visual appearance. This involves the removal or reduction of various types of degradations such as
blurriness, noise, and other imperfections that can occur during image acquisition, transmission, or storage. The
goal of image restoration is to enhance the image's clarity, detail, and overall quality.
Here are some examples of image restoration:
1. Deblurring: Restoring an image that has been blurred due to camera shake, motion, or other factors. Deblurring
techniques aim to reverse the blurring effects and recover sharpness and detail.
2. Denoising: Reducing unwanted noise in an image caused by factors like low-light conditions, sensor limitations, or
compression artifacts. The goal is to improve the image quality by preserving important details while reducing noise.
3. Removing Artifacts: Restoring images that have compression artifacts, scratches, or other unwanted elements that
occur during image transmission or storage.
4. Super-Resolution: Enhancing the resolution of an image to generate a higher-quality version with more details than
the original. This is often used in situations where higher-resolution images are required from lower-resolution sources.
5. Color Restoration: Restoring the true colors of an image that may have faded or been distorted due to aging,
degradation, or improper storage.
6. Inpainting: Filling in missing or damaged parts of an image with plausible content. This is useful for restoring old
photographs or images with damaged areas.
7. Removal of Motion Blur: Restoring an image that has been blurred due to camera or object motion, aiming to recover
the sharpness of the original scene.
8. Image Dehazing: Reducing the effects of haze or fog in images captured in challenging weather conditions to improve
visibility and clarity.
9. Image Retouching: Enhancing the overall aesthetic quality of an image by improving skin tones, removing
imperfections, and adjusting lighting and color balance.
10. Document Restoration: Restoring old or damaged documents, manuscripts, and historical records to improve their
readability and preserve their content.
11. Cultural Heritage Restoration: Enhancing and preserving artworks, artifacts, and cultural heritage items that may
have deteriorated over time.
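As an example of denoising (item 2), the sketch below applies two standard filters with OpenCV; the filenames and parameter values are illustrative:

```python
import cv2

noisy = cv2.imread("noisy.png", cv2.IMREAD_GRAYSCALE)   # assumed input file

median = cv2.medianBlur(noisy, 5)                # effective against salt-and-pepper noise
nlm = cv2.fastNlMeansDenoising(noisy, h=10)      # non-local means, tends to preserve edges

cv2.imwrite("denoised_median.png", median)
cv2.imwrite("denoised_nlm.png", nlm)
```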
Key Stages in Digital Image Processing
Image morphological processing is a branch of image processing that focuses on analyzing and manipulating the
shapes and structures within an image using mathematical operations based on set theory and geometry. It
involves operations that modify the spatial arrangement of pixels in an image to highlight certain features, remove
noise, or perform other specific tasks. Morphological operations are particularly useful for processing binary (black
and white) images, but they can also be applied to grayscale images.
Morphological operations work by using a structuring element, which is a small binary pattern or shape, to interact
with the pixels in the image. The structuring element defines how neighboring pixels are considered during the
operation. These operations are widely used in tasks such as noise removal, edge detection, object recognition,
and image segmentation.
Here are some examples of morphological operations:
1. Dilation: Dilation increases the size of bright regions and enhances the connectivity of objects. It expands object boundaries by
moving the structuring element over the image and setting the output pixel to white (1) if any part of the structuring element
overlaps with a white pixel in the input image.
2. Erosion: Erosion reduces the size of bright regions and removes small objects. It moves the structuring element over the image
and sets the output pixel to white (1) only if all the pixels under the structuring element in the input image are white.
3. Opening: Opening is a combination of erosion followed by dilation. It is used to remove small noise and fine details from the
image while preserving the larger structures.
4. Closing: Closing is a combination of dilation followed by erosion. It is used to close small gaps in object boundaries and connect
broken structures.
5. Gradient: The gradient operation computes the difference between dilation and erosion. It highlights edges and boundaries in the
image.
6. Top Hat: The top hat operation computes the difference between the input image and its opening. It can be used to enhance
localized structures.
7. Black Hat: The black hat operation computes the difference between the closing of the input image and the input image itself. It
highlights dark structures against a bright background.
8. Hit-or-Miss Transform: This operation identifies specific patterns in the image based on the structuring element. It's useful for
detecting complex shapes or features.
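These operations map directly onto library calls. A minimal sketch with OpenCV, using a 3x3 square structuring element (one common choice) on an assumed binary input image:

```python
import cv2
import numpy as np

binary = cv2.imread("binary_mask.png", cv2.IMREAD_GRAYSCALE)   # assumed binary (0/255) image
kernel = np.ones((3, 3), np.uint8)                             # the structuring element

dilated  = cv2.dilate(binary, kernel)                            # 1. dilation
eroded   = cv2.erode(binary, kernel)                             # 2. erosion
opened   = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)      # 3. erosion then dilation
closed   = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)     # 4. dilation then erosion
gradient = cv2.morphologyEx(binary, cv2.MORPH_GRADIENT, kernel)  # 5. dilation minus erosion
tophat   = cv2.morphologyEx(binary, cv2.MORPH_TOPHAT, kernel)    # 6. input minus opening
blackhat = cv2.morphologyEx(binary, cv2.MORPH_BLACKHAT, kernel)  # 7. closing minus input
```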
Key Stages in Digital Image Processing
Image segmentation is the process of dividing an image into meaningful and distinct regions or segments based on certain
criteria, such as color, intensity, texture, or object boundaries. The goal of image segmentation is to simplify the
representation of an image and make it easier to analyze, interpret, and manipulate. Each segment represents a specific
region of interest within the image.
Image segmentation is a fundamental step in various image processing and computer vision tasks, including object
recognition, tracking, medical image analysis, and scene understanding.
Here are some examples of image segmentation:
1. Medical Image Segmentation: In medical imaging, such as MRI or CT scans, segmentation is used to isolate specific
organs, tissues, tumors, or anomalies within the image. For instance, segmenting the various structures within the brain or
identifying cancerous regions.
2. Semantic Segmentation: In computer vision, semantic segmentation assigns each pixel in an image to a specific class
label, such as road, car, tree, or person. This is used in applications like autonomous driving and scene understanding.
3. Object Detection and Tracking: Object detection involves segmenting and identifying objects of interest in an image, often
by enclosing them with bounding boxes. Object tracking involves following the segmented objects over time in a sequence of
images.
4. Image Editing and Manipulation: Segmentation can help isolate specific elements in an image for editing or manipulation.
For example, selecting a person in an image to apply different effects or backgrounds.
5. Satellite and Aerial Imagery: Segmenting land cover types, buildings, roads, and vegetation in satellite and aerial images
aids in land-use classification and urban planning.
6. Biomedical Imaging: In microscopy and cell biology, image segmentation is used to identify individual cells, nuclei, and
subcellular structures for quantitative analysis.
7. Natural Scene Segmentation: Dividing a scene into regions based on color, texture, or intensity can aid in understanding
the different components of the scene.
8. Panorama Stitching: In creating panoramic images, segmenting overlapping regions in multiple images is a key step to
stitch them together seamlessly.
9. Gesture Recognition: Segmenting and analyzing hand gestures or body poses in images or videos can enable gesture-
based interaction systems.
10. Forensics and Security: Segmenting objects or individuals in surveillance images for forensic analysis and security
applications.
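A minimal sketch of intensity-based segmentation: Otsu thresholding separates foreground from background, and connected-component labelling then splits the foreground into distinct regions. The filename is an assumption:

```python
import cv2

gray = cv2.imread("cells.png", cv2.IMREAD_GRAYSCALE)
_, fg = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

num_labels, labels = cv2.connectedComponents(fg)   # labels: one integer per segmented region
print("Segmented", num_labels - 1, "regions (label 0 is the background)")
```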
Key Stages in Digital Image Processing
Image object recognition, also known as object detection or object identification, is a computer vision task that
involves identifying and localizing specific objects or classes within an image or a video sequence. The goal of
object recognition is to automatically detect and label objects of interest within visual data, enabling computers to
understand and interact with the visual world in a way similar to human perception.
Object recognition has applications in various fields, including autonomous vehicles, surveillance, robotics, retail,
medical imaging, and more.
Here are some examples of image object recognition:

1. Face Recognition: Identifying and locating human faces within images or videos. This is used in social media tagging,
security systems, and photo organization applications.
2. Object Detection in Autonomous Vehicles: Recognizing pedestrians, vehicles, traffic signs, and obstacles in the
environment to enable safe navigation and decision-making for self-driving cars.
3. Retail Inventory Management: Identifying products on store shelves using cameras and tracking stock levels to manage
inventory.
4. Security and Surveillance: Detecting and tracking people, vehicles, and suspicious activities in surveillance footage for
security purposes.
5. Animal Tracking and Conservation: Recognizing and monitoring endangered species in their natural habitats using
camera traps and drones.
6. Industrial Automation: Identifying defective or faulty products on assembly lines to ensure quality control.
7. Medical Image Analysis: Detecting tumors, lesions, and anatomical structures in medical images such as X-rays, MRIs,
and CT scans.
8. Augmented Reality: Recognizing real-world objects or markers to overlay digital information or virtual objects on a user's
view.
9. Agriculture: Identifying and monitoring crop health, disease, and pests in agricultural fields using aerial imagery.
10. Retail Checkout Automation: Automatically identifying and tallying items in a shopping cart using camera-based systems.
11. Artificial Intelligence Assistants: Recognizing objects in a user's environment to provide context-aware information and
assistance.
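One of the classical techniques mentioned earlier, template matching, can be sketched in a few lines; the image and template filenames are assumptions:

```python
import cv2

scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)

# Slide the template over the scene and score the match at every position.
result = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)

h, w = template.shape
print(f"Best match at {max_loc} with score {max_val:.2f}")
# The detection's bounding box runs from max_loc to (max_loc[0] + w, max_loc[1] + h).
```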
Key Stages in Digital Image Processing
Image representation and description refer to the process of capturing the essential characteristics
of an image in a structured and informative way. Image representation involves transforming the
raw pixel values of an image into a more compact and meaningful format, while image description
involves describing the content or features of an image using natural language or structured data.

Here are some examples of image representation and description:


Image Representation:

1. Color Histogram: Representing an image by counting the occurrences of different color values
and creating a histogram. This can help in understanding the color distribution of the image.
2. Bag of Visual Words: This technique involves creating a vocabulary of visual words (features)
from a collection of images and then representing each image as a histogram of the frequencies
of these visual words.
3. Feature Vectors: Extracting features from images using techniques like edge detection, texture
analysis, or local descriptors. These features are then combined into a vector that represents the
image's visual characteristics.
4. Convolutional Neural Networks (CNNs): CNNs learn hierarchical features from images by using
convolutional layers. These learned features can be used as a compact representation of the
image's content.
5. Principal Component Analysis (PCA): Reducing the dimensionality of image data while
retaining as much of the original information as possible. This can help in creating a more concise
representation of images.
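As a concrete example of item 1, a colour histogram turns a whole image into a fixed-length feature vector; the filename and bin count below are illustrative:

```python
import cv2
import numpy as np

img = cv2.imread("photo.png")    # assumed input; OpenCV loads it in BGR order

# 32 bins per channel -> a 96-dimensional descriptor of the image's colour content.
hist = [cv2.calcHist([img], [c], None, [32], [0, 256]).flatten() for c in range(3)]
feature = np.concatenate(hist)
feature /= feature.sum()         # normalise so images of different sizes are comparable
print(feature.shape)             # (96,)
```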
Key Stages in Digital Image Processing
Image Description:

1. Captioning: Generating descriptive captions for images using natural language. For example,
generating a caption like "A sunny day at the beach with people enjoying the ocean and sand."
2. Image Metadata: Associating metadata with an image, such as location, date, and relevant
keywords, to provide contextual information.
3. Semantic Segmentation Masks: Describing the objects in an image using pixel-level masks that
indicate the boundaries and locations of different objects or regions.
4. Structured Annotations: Describing images with structured data formats, such as bounding
boxes around objects or keypoints on human poses.
5. Visual Question Answering (VQA): Generating textual descriptions in response to questions
about the content of an image. For example, answering a question like "What color is the car in
the image?"
6. Image Tags and Labels: Assigning descriptive tags or labels to images, which can be used for
search and categorization.
7. Attribute-based Description: Describing images based on specific attributes or characteristics,
such as "red dress," "green trees," etc.

Both image representation and description are essential for various applications, including image
retrieval, content-based search, image understanding, and more. These techniques help bridge
the gap between the visual nature of images and the textual or structured representations that are
more suitable for machine analysis and human interaction.
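A structured description (items 2, 4 and 6 above) is often just plain data attached to the image. A minimal sketch with entirely illustrative values:

```python
# Hypothetical annotation record: metadata, bounding boxes and tags for one image.
annotation = {
    "file": "beach.jpg",
    "metadata": {"date": "2023-08-21", "location": "unknown"},
    "objects": [
        {"label": "person", "bbox": [120, 45, 80, 200]},     # x, y, width, height
        {"label": "umbrella", "bbox": [300, 20, 150, 150]},
    ],
    "tags": ["beach", "sunny", "ocean"],
}
print(len(annotation["objects"]), "annotated objects")
```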
Key Stages in Digital Image Processing
Image compression is the process of reducing the amount of data required to represent an image
while attempting to maintain the quality and perceptual integrity of the original image. The goal of
image compression is to efficiently store or transmit images using less storage space or
bandwidth, making them easier to manage, share, and transmit over networks.

There are two main types of image compression:

1. Lossless Compression: In lossless compression, the original image can be perfectly reconstructed from the
compressed data. No information is lost during compression, making it suitable for situations where preserving
image quality is critical. Examples of lossless compression techniques include:
1. Run-Length Encoding (RLE): Replacing sequences of repeated pixel values with a single value and a count.
2. Huffman Coding: Assigning shorter codes to frequently occurring pixel values and longer codes to less
frequent values.
3. Lempel-Ziv-Welch (LZW): A dictionary-based compression method that replaces repeated patterns with
shorter codes.
2. Lossy Compression: In lossy compression, some amount of image data is discarded to achieve higher
compression ratios. While there is a loss of quality, the compression methods are designed to minimize perceptual
differences. Lossy compression is often used for applications where some loss of quality is acceptable, such as
multimedia streaming and storage. Examples of lossy compression techniques include:
1. JPEG (Joint Photographic Experts Group): A widely used lossy compression format for photographs and
natural images. It achieves compression by quantizing color and spatial information.
2. WebP: A modern lossy and lossless image format developed by Google that aims to provide better
compression efficiency than JPEG and PNG.
3. MPEG (Moving Picture Experts Group): A suite of standards for video and audio compression that includes
lossy compression methods for images within videos.
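Run-length encoding, the simplest of the lossless schemes above, is short enough to sketch in full; note that the original data is recovered exactly:

```python
def rle_encode(values):
    """Store each run of identical values as a [value, count] pair."""
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return runs

def rle_decode(runs):
    return [v for v, count in runs for _ in range(count)]

row = [0, 0, 0, 255, 255, 0, 0, 0, 0]        # one row of a binary image
encoded = rle_encode(row)                    # [[0, 3], [255, 2], [0, 4]]
assert rle_decode(encoded) == row            # lossless: the row is recovered exactly
```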
Key Stages in Digital Image Processing
Image compression

Examples of Image Compression in Practice:


1. Digital Photography: Images captured by digital cameras are often compressed using formats
like JPEG to reduce file sizes for storage and sharing online.
2. Web Content: Websites use compressed images to load quickly and reduce bandwidth usage for
users. Formats like WebP and JPEG are commonly used.
3. Video Streaming: Video compression techniques are employed to transmit videos over the
internet efficiently. Video codecs like H.264 and H.265 use image compression methods.
4. Medical Imaging: Medical images, such as X-rays and MRIs, are often compressed to reduce
storage requirements and facilitate sharing among healthcare professionals.
5. Satellite Imagery: Compressing satellite images allows for efficient transmission and storage of
large amounts of geospatial data.
6. Remote Sensing: Images captured by remote sensing platforms, like drones or satellites, are
often compressed to manage data transfer and storage challenges.
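The size/quality trade-off of lossy compression is easy to observe by saving the same image at different JPEG quality settings; the input filename below is an assumption:

```python
import os
import cv2

img = cv2.imread("photo.png")
for quality in (95, 75, 40):
    out = f"photo_q{quality}.jpg"
    cv2.imwrite(out, img, [cv2.IMWRITE_JPEG_QUALITY, quality])   # lower quality -> smaller file
    print(out, os.path.getsize(out), "bytes")
```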
Key Stages in Digital Image Processing
Color image processing refers to the manipulation and analysis of images that contain color information. Unlike grayscale
images, which only have intensity values, color images have additional color channels that capture different color
components, such as red, green, and blue (RGB), or other color models like hue, saturation, and value (HSV).
Color image processing involves various techniques to enhance, analyze, and transform color images for different
applications.
Here are some examples of color image processing:
1. Color Enhancement: Adjusting the color balance, contrast, and saturation of an image to make it more visually appealing or
to emphasize certain features. For example, enhancing the colors of a photograph to make it look more vibrant.
2. Color Filtering: Applying filters to specific color channels to highlight or isolate certain color ranges in an image. For
instance, enhancing the red color of a flower in a garden photograph.
3. Color Correction: Adjusting the color balance to correct any color casts caused by lighting conditions or camera settings.
For example, correcting the bluish tint in an image taken under tungsten lighting.
4. Color Segmentation: Dividing an image into regions based on color information. This can be useful for object tracking,
image analysis, and object recognition.
5. Color Space Conversion: Converting images from one color space to another, such as RGB to HSV or CMYK, to perform
specific tasks like color-based analysis or printing.
6. Color Histogram Analysis: Analyzing the distribution of color values in an image's histogram to understand its color content
and make informed adjustments.
7. Color Quantization: Reducing the number of colors in an image while attempting to preserve its overall appearance. This is
often used for efficient storage and transmission.
8. Color Image Compression: Compressing color images using methods specifically designed to handle multiple color
channels efficiently.
9. Color-Based Object Recognition: Identifying and categorizing objects in an image based on their color characteristics.
10. Color Restoration: Restoring the true colors of an image that may have faded or deteriorated due to aging or degradation.
11. Color Image Fusion: Combining information from different color images to create a single image that contains the best
features of each source image.
12. Medical Imaging: In medical applications, analyzing color images obtained from diagnostic tools like endoscopes, MRIs, or
microscopy for accurate diagnosis.
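Two of the operations above, colour-space conversion (item 5) and colour filtering (item 2), combine naturally: convert to HSV, then keep only pixels whose hue falls in a chosen range. The filename and the red hue bounds are illustrative:

```python
import cv2
import numpy as np

img = cv2.imread("garden.png")                  # OpenCV loads colour images as BGR
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)      # colour space conversion

lower_red = np.array([0, 80, 80])               # rough lower bound for red hues
upper_red = np.array([10, 255, 255])
mask = cv2.inRange(hsv, lower_red, upper_red)   # colour filtering: binary mask of red pixels

red_only = cv2.bitwise_and(img, img, mask=mask)
cv2.imwrite("red_regions.png", red_only)
```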
Key Stages in Digital Image Processing: Image Acquisition

[Figure: example images illustrating image acquisition, taken from Gonzalez & Woods, Digital Image Processing (2002)]
Key Stages in Digital Image Processing: Image Enhancement

[Figure: example images illustrating image enhancement, taken from Gonzalez & Woods, Digital Image Processing (2002)]
Key Stages in Digital Image Processing: Image Restoration

[Figure: example images illustrating image restoration, taken from Gonzalez & Woods, Digital Image Processing (2002)]
Key Stages in Digital Image Processing: Morphological Processing

[Figure: example images illustrating morphological processing, taken from Gonzalez & Woods, Digital Image Processing (2002)]
Key Stages in Digital Image Processing: Segmentation

[Figure: example images illustrating segmentation, taken from Gonzalez & Woods, Digital Image Processing (2002)]
Key Stages in Digital Image Processing: Object Recognition

[Figure: example images illustrating object recognition, taken from Gonzalez & Woods, Digital Image Processing (2002)]
Key Stages in Digital Image Processing: Representation & Description

[Figure: example images illustrating representation and description, taken from Gonzalez & Woods, Digital Image Processing (2002)]
Key Stages in Digital Image Processing: Image Compression

[Figure: key-stages block diagram, repeated for Image Compression]
Key Stages in Digital Image Processing: Colour Image Processing

[Figure: key-stages block diagram, repeated for Colour Image Processing]
