
UNIVERSITY OF MYSORE

Project Presented on
“CLASSIFICATION OF NATURAL SCENE IMAGE USING TEXTURE FEATURES”

Presented by:
Anusha Afreen A (18MS008)
Final Year M.Sc, DOS in Computer Science, Mysuru

Under the guidance of:
Dr. G Hemantha Kumar, Vice Chancellor & Professor, University of Mysore

Co-guide:
Dr. Naveena M, System Coordinator, ICD, University of Mysore, Mysuru
OUTLINE
• Overview
• Introduction
• Motivation
• Proposed system
• Methodology
• Feature extraction methods
• Experimental Result
• Conclusion & Future enhancement
OVERVIEW

• Natural scene images play a vital role in artificial intelligence.
• This project deals with encoding natural scenes in order to extract semantic information.
• The classification is based on texture features.
• The extracted set of features is used as input to train a KNN classifier.
INTRODUCTION

• What is texture feature extraction?


 Texture is an important feature of many image types.
 A feature is an image characteristic.
 Texture features are used in many different applications.
 Feature extraction is a key function in various image processing applications.

• The most common approaches use the Gray-Level Co-occurrence Matrix (GLCM) and the Local Binary Pattern (LBP).
MOTIVATION
PROPOSED SYSTEM

• In this project, the proposed approach to image classification makes essential use of machine learning methods.
• We propose a system that classifies scenery images into different groups.
• We build a confusion matrix for each method and compare their accuracies; the confusion matrices reveal which method classifies best.
• The proposed method has a training phase and a classification phase.
BENCHMARKING DATASETS

Classes: Grasslands, Green Forest, Ocean, Sky


METHODOLOGIES

 Collecting standard scene image database

 Converting RGB image into grayscale image

 Extracting features using GLCM/LBP

 Performing classification on extracted data using KNN


TEXTURE FEATURE EXTRACTION METHODS

a) Gray level co-occurrence matrix


 It is the most classical second-order statistical method for texture analysis.
 An image is composed of pixels, each with an intensity (a specific gray level); the GLCM is a tabulation of how often different combinations of gray levels co-occur in an image.
b) Local binary pattern
 It is a simple yet very efficient texture operator.
 It labels the pixels of an image by thresholding the neighborhood of each pixel and considers the result as a binary number.
Algorithm GLCM:
1. Quantize the image data.
2. Create the GLCM.
3. Make the GLCM symmetric:
i. Make a transposed copy of the GLCM.
ii. Add this copy to the GLCM itself.
4. Normalize the GLCM:
i. Divide each element by the sum of all elements.
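The four steps above can be sketched in Python as follows; the 4x4 example image, the four gray levels, and the horizontal offset (0, 1) are illustrative choices, not values from this project:

```python
import numpy as np

def glcm(image, levels=4, offset=(0, 1)):
    """Symmetric, normalized gray-level co-occurrence matrix.

    image  : 2-D array already quantized to values in [0, levels) (step 1).
    offset : (row, col) displacement between pixel pairs; (0, 1)
             counts horizontally adjacent pairs.
    """
    counts = np.zeros((levels, levels), dtype=float)
    dr, dc = offset
    rows, cols = image.shape
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                counts[image[r, c], image[r2, c2]] += 1  # step 2: tally the pair
    counts += counts.T            # step 3: add the transposed copy
    return counts / counts.sum()  # step 4: normalize to probabilities

# Toy 4x4 image already quantized to 4 gray levels.
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
P = glcm(img)  # P is symmetric and sums to 1
```

In practice the entries of P feed texture descriptors such as contrast, energy, and homogeneity.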
Algorithm LBP:
1. Convert the image into grayscale.
2. Select a neighborhood of size r surrounding the center pixel.
3. An LBP value is then calculated for the center pixel:
4. If the intensity of a neighbor is greater than or equal to that of the center pixel, set its bit to 1; otherwise 0.
5. Convert the binary pattern to decimal.
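As a minimal sketch for a 3x3 neighborhood, the steps above might look like this in Python; the bit order (clockwise from the top-left, most significant bit first) and the sample patch are assumptions for illustration, since the convention varies between implementations:

```python
import numpy as np

def lbp_value(patch):
    """LBP code for the center pixel of a 3x3 grayscale patch.

    Each of the 8 neighbors contributes one bit: 1 if its intensity is
    >= the center's, else 0 (step 4), read clockwise from the top-left.
    """
    center = patch[1, 1]
    neighbors = [patch[0, 0], patch[0, 1], patch[0, 2],   # top row
                 patch[1, 2],                             # right
                 patch[2, 2], patch[2, 1], patch[2, 0],   # bottom row
                 patch[1, 0]]                             # left
    code = 0
    for n in neighbors:
        code = (code << 1) | int(n >= center)  # threshold, append one bit
    return code  # step 5: the accumulated bits read as a decimal number

patch = np.array([[6, 5, 2],
                  [7, 6, 1],
                  [9, 8, 7]])
v = lbp_value(patch)  # bits 10001111 -> 143
```

Applying this at every pixel and histogramming the codes yields the LBP texture feature vector.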
CLASSIFIER

Algorithm KNN:
1. Load the data.
2. Initialize the value of k.
3. Calculate the distance between the test data and each row of training data.
4. Sort the calculated distances in ascending order.
5. Take the majority vote among the labels of the k nearest neighbors.
6. Return the predicted class.
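The classifier above can be sketched in a few lines of Python; the 2-D feature vectors and the class labels are toy values chosen for illustration (real inputs would be GLCM/LBP feature vectors):

```python
from collections import Counter
import numpy as np

def knn_predict(train_X, train_y, test_x, k=3):
    """Majority-vote prediction among the k nearest training samples."""
    dists = np.linalg.norm(train_X - test_x, axis=1)  # step 3: Euclidean distances
    nearest = np.argsort(dists)[:k]                   # step 4: k closest indices
    votes = Counter(train_y[i] for i in nearest)      # step 5: majority vote
    return votes.most_common(1)[0][0]                 # step 6: predicted class

# Toy 2-D feature vectors with illustrative class labels.
X = np.array([[1.0, 1.0], [1.2, 0.8], [8.0, 8.0], [7.5, 8.2]])
y = ["grassland", "grassland", "ocean", "ocean"]
label = knn_predict(X, y, np.array([1.1, 0.9]))  # -> "grassland"
```

Choosing k odd avoids ties when there are two classes.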
ADVANTAGES

• This algorithm can correctly separate regions that share the properties we define.
• For original images with clear edges, this method can provide good segmentation results.
EXPERIMENTAL RESULTS

• The experiment was carried out for the classification of natural scene images.

[Figures: result using LBP; result as a confusion matrix; training and testing feature values]
CONCLUSION

• In the present work, texture feature extraction methods and classifiers have been studied and proposed for natural scene classification.
• Based on the proposed approach, an efficient, simple, inexpensive, and reliable system can be developed for the classification of natural scenes.
FUTURE ENHANCEMENT

• Combining other feature extraction methods may produce better classification accuracy for highlighted features.
• To increase classification accuracy, an artificial neural network could be used as the classifier.
REFERENCES

• Gonzalez, Rafael C., and Richard E. Woods. Digital Image Processing (2002).
• ieeexplore.ieee.org
• https://www.ijcseonline.org
• www.researchgate.net
