
CHAPTER 1

INTRODUCTION

1.1 URBAN TREE IDENTIFICATION

Urban trees, which are a dominant component of the urban natural landscape,
play an important role in improving the urban ecological environment, through air
filtration, microclimate regulation, noise reduction, and water quality amelioration.
In addition, there is evidence that urban trees can help to enhance public health and
lessen criminal behavior. Therefore, inventorying the spatial distribution and
detailed information (e.g., species and habitat types) of urban trees is imperative in
decision-making about natural landscape management and planning.

In general, the detailed classification of urban trees can be conducted via
ground surveys, aerial photography, or remote sensing interpretation. However,
conventional ground surveys can be cost- and time-intensive due to the urban
scene complexity, landscape dynamics, and accessibility constraints for private
areas, while the direct observation of land cover through aerial photography or
satellite data can enable cost-effective tree classification. Aerial photographs have
been the primary data source for the detailed classification of urban trees in
previous research. However, remote sensing satellites can now acquire data that
span temporal and spatial scales with more spectral information in a more
convenient way.

1.2 NEED OF URBAN TREE IDENTIFICATION

An urban forest assessment is essential for developing a baseline from which to measure changes and trends. The most precise way to assess urban forests is to measure and record every tree on a site; although this may work well for relatively small populations (e.g., street trees, small parks), it is prohibitively expensive for large tree populations. Thus, random sampling offers a cost-effective way to assess urban forest structure and the associated ecosystem services for large-scale assessments. The methodology applied to assess ecosystem services in this study can also be used to assess the ecosystem services provided by vacant land in other urban contexts and to improve urban forest policies, planning, and the management of vacant land. The study’s findings support the inclusion of trees on vacant land and contribute to a new vision of vacant land as a valuable ecological resource by demonstrating how green infrastructure can be used to enhance ecosystem health and promote a better quality of life for city residents.

1.3 THE ROLE OF REMOTE SENSING IN URBAN TREE EXTRACTION

With the constant increase in the geometrical resolution of Earth Observation sensors over the last decades, spatial information makes an increasing contribution to the understanding of remote sensing imagery, since it characterizes the sensed landscape in a way that is complementary to the spectral signatures of the vegetation areas. In the past, the processing of low resolution images was mainly performed with pixel-based approaches because of their direct application to the image. Although the results produced can be satisfactory for low resolutions, where little correlation exists between neighboring pixels, the performance of such approaches drops drastically when they are applied to VHR images. This effect is related not only to the neglect of any spatial measure but also to the finer geometrical resolution itself: in a classification task, the increased spectral co-variance produced by a greater variability within a class reduces the overall spectral separability of the classes. Furthermore, when dealing with VHR images, the interpretation of the scene can benefit greatly from the analysis of the spatial domain. In addition, a fine spatial resolution is usually obtained at the detriment of the spectral resolution; in general, most VHR sensors can acquire only one or a few spectral bands. Earlier reports have underlined the retrieval and characterization of the spatial information in the image. Due to the wide range of features related to the spatial domain, there are several ways of characterizing this information source. From a general survey of techniques modelling the spatial information in remote sensing, one can notice that there are different approaches for extracting the spatial information and corresponding ways (with different levels of abstraction) of including the extracted information in a processing chain aimed at understanding the image.

1.4 OBJECTIVE
• To implement algorithms for the automated extraction of features from remotely sensed imagery.
• To identify urban trees in a very high resolution satellite image using pixel based (NDVI, EVI and VEVI) and object based (GLCM, NDVI and Nearest Neighbor (NN) classification) approaches.

CHAPTER 2

LITERATURE REVIEW

[1] R. M. Haralick, “Statistical and structural approaches to texture,” Proc. IEEE, vol. 67, no. 5, pp. 786–804, May 1979.

This paper reviews the image processing literature on the various approaches and models that have been used for texture. These include statistical approaches based on the autocorrelation function, optical transforms, digital transforms, textural edgeness, structural elements, run lengths and autoregressive models. Structural approaches to texture, based on more complex primitives than gray tone, are also discussed. Texture characterization can be further improved by applying statistical techniques to the structural primitives.

[2] K. H. Riitters et al., “A factor analysis of landscape pattern and structure metrics,” Landscape Ecol., vol. 10, no. 1, pp. 23–39, 1995.

This study examines the metrics needed to measure landscape pattern and structure in remotely sensed images. Fifty-five metrics were initially computed, and a multivariate factor analysis was used to identify the common axes (or dimensions) of pattern and structure, which were measured by a reduced set of 26 metrics. The first six factors explained about 87% of the variation in the 26 landscape metrics. These factors were interpreted as composite measures of average patch compaction, overall image texture, average patch shape, patch perimeter-area scaling, number of attribute classes, and large-patch density-area scaling. The authors suggest that these factors can be represented in a simpler way by six univariate metrics: average perimeter-area ratio, contagion, standardized patch shape, patch perimeter-area scaling, number of attribute classes, and large-patch density-area scaling.

[3] K. McGarigal and B. J. Marks, “FRAGSTATS: Spatial pattern analysis program for quantifying landscape structure,” Oregon State Univ., Corvallis, OR, USA, Tech. Rep. 2.0, 1995.

This report describes the FRAGSTATS program, which quantifies landscape structure. It is designed to be as versatile as possible, although it requires some technical training. FRAGSTATS comes in raster and vector versions, and both versions generate the same array of metrics, including a variety of area metrics, patch density, size and variability metrics, edge metrics, shape metrics, core area metrics, diversity metrics, and contagion and interspersion metrics.
[4] F. E. Kuo and W. C. Sullivan, “Environment and crime in the inner city: does vegetation reduce crime?” Environment Behav., vol. 33, no. 3, pp. 343–367, 2001.

This study describes the relationship between urban vegetation and the people living near it. Residents of greener surroundings exhibited less violent behavior, and crime was reduced; this is one of the motivations for the semantic classification of urban trees. Furthermore, this pattern held for both property crimes and violent crimes, and the relationship of vegetation to crime held after the number of apartments per building, building height, vacancy rate, and number of occupied units per building were accounted for.

[5] K. T. Ward and G. R. Johnson, “Geospatial methods provide timely and comprehensive urban forest information,” Urban Forestry Urban Greening, vol. 6, no. 1, pp. 15–22, 2007.

This study explains the need for the semantic classification of urban trees by describing their importance: urban trees help to reduce air pollution, crime and disease. Geospatial tools, such as geographic information systems (GIS), global positioning systems (GPS) and remote sensing, work extremely well together for gathering, analyzing, and
reporting information. Many urban forest management questions could be quickly
and effectively addressed using geospatial methods and tools. The geospatial tools
can provide timely and extensive spatial data from which urban forest attributes
can be derived, such as land cover, forest structure, species composition and
condition, heat island effects, and carbon storage. Emerging geospatial tools that
could be adapted for urban forest applications include data fusion, virtual reality,
three-dimensional visualization, Internet delivery, modeling, and emergency
response.
[6] Z. Jiang, A. R. Huete, K. Didan, and T. Miura, “Development of a two-band enhanced vegetation index without a blue band,” Remote Sens. Environ., vol. 112, pp. 3833–3845, Oct. 2008.

The purpose of this study is to produce a two-band EVI (EVI2) that uses only the red and near-infrared bands, without the blue band. The enhanced vegetation index (EVI) was developed as a standard satellite vegetation product for the Terra and Aqua Moderate Resolution Imaging Spectroradiometers (MODIS). EVI provides improved sensitivity in high biomass regions while minimizing soil and atmosphere influences; however, it is limited to sensor systems designed with a blue band in addition to the red and near-infrared bands. EVI2 produces results that remain acceptably close to the original EVI, and the atmospheric effect on EVI2 is insignificant. This method paves the way for the development of VEVI for the semantic classification of urban trees.
[7] J. P. Ardila, W. Bijker, V. A. Tolpekin, and A. Stein, “Quantification of
crown changes and change uncertainty of trees in an urban environment,”
ISPRS J. Photogramm. Remote Sens., vol. 74, pp. 41–55, 2012.

This study describes the quantification of tree crowns by using remote
sensing. Local authorities require a detailed report of the state of green resources in
cities to quantify the benefits of urban trees and determine urban forestry
interventions. This study uses bitemporal remote sensing data to monitor changes
of urban trees over time. It presents a fuzzy approach to recognize the fuzziness of
tree crowns from high resolution images in urban areas. The method identifies tree
crown elliptical objects after iterative fitting of a Gaussian function to crown
membership images of two dates. Gradual and abrupt changes are obtained, as well
as a measure of change uncertainty for the retrieved objects. The method works at both the pixel and the object level: the pixel level recognizes vegetation, while the object level masks out the non-vegetation sources.
[8] M. Alonzo, B. Bookhagen, and D. A. Roberts, “Urban tree species mapping
using hyperspectral and lidar data fusion,” Remote Sens. Environ.,vol. 148, pp.
70–83, 2014.
In this study, the authors fused high spatial resolution (3.7 m) hyperspectral imagery with 22 pulse/m2 lidar data at the individual crown object scale to map 29 common tree species in Santa Barbara, California, USA. They first adapted and parallelized a watershed segmentation algorithm to delineate individual crowns from a gridded canopy maxima model. The value of the lidar structural metrics for urban species discrimination became particularly evident when mapping crowns that were either small or morphologically unique. For instance, the accuracy with which the tall palm species Washingtonia robusta was mapped increased from 29% using spectral bands alone to 71% with the fused dataset. This fusion technique reduces the classification error.

CHAPTER 3

STUDY AREA

In this approach, we have taken Madurai city in the South Indian state of Tamil Nadu as the study area. According to the 2011 census data, Madurai district has a population of 18.5 lakh and is known to be the 5th most populous urban district. The total geographical area of the urban part of this city is 317 km². The population density of the urban part of Madurai district is 5818 persons per km², and the literacy rate of Madurai is 89%; these figures are more than enough to depict it as an urban area.
Table 3.1 Geographical location of the datasets

Dataset                           Coverage area (m²)   Latitude                      Longitude
Dataset 1 (Anna Nagar, Madurai)   291,200              9°55'9.85" – 9°55'35.57"      78°6'40.40" – 78°6'14"
Dataset 2 (Thathanery, Madurai)   291,200              9°55'10.96" – 9°55'36.69"     78°8'50.67" – 78°8'24.27"

Fig 3.1 Overall view of the study area

The images were acquired by the WorldView-2 satellite sensor, which has a resolution of about 0.46 m for the panchromatic band and 1.84 m for the multispectral bands. In this project, two WorldView-2 datasets are considered, as shown in Figure 3.2; the details of the datasets are given in Table 3.1.

(a) Worldview-2 image of dataset 1        (b) Worldview-2 image of dataset 2

Fig 3.2 The two Worldview-2 datasets (scale bars: 0–0.8 km)

CHAPTER 4

METHODOLOGY

4.1 INTRODUCTION
The following flow chart explains the process involved in urban tree identification, in which textural and spectral methods are integrated to extract the trees. In this chapter the conventional methodology (pixel based approach) and the proposed methodology (object based approach) are discussed in detail. The overall methodology of the proposed work is shown in fig 4.1.

Fig 4.1 Proposed methodology. The input image is processed at two levels. At the pixel level, the NDVI, EVI and VEVI indices are computed directly. At the object level, the image is first divided by multi-resolution segmentation (scale 20, compactness 0.5) and then classified using object-specific features (GLCM, NN, NDVI). The two classified outputs are compared and an accuracy assessment is performed.

4.2 CONVENTIONAL METHODS

In this section, pixel based approaches such as NDVI, EVI and VEVI are discussed for identifying the urban trees.

4.2.1 NDVI classification

The normalized difference vegetation index (NDVI) is designed to extract the vegetation part of the sample image precisely using index values. To calculate the index values for the vegetation extraction, the image must contain the NIR and Red bands. Using these bands, NDVI is calculated with the formula:

NDVI = (Mean NIR − Mean Red) / (Mean NIR + Mean Red)
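As an illustration only (outside the eCognition workflow used in this project), the index can be computed band-wise with numpy; the band arrays below are hypothetical reflectance values, not taken from the datasets.

# Illustrative sketch: pixel-wise NDVI from hypothetical NIR and Red band arrays.
import numpy as np

def ndvi(nir, red):
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + 1e-12)   # epsilon avoids division by zero

# Values near 1 indicate vegetation; values near or below 0 indicate non-vegetation.
nir = np.array([[0.45, 0.20]])
red = np.array([[0.10, 0.25]])
print(ndvi(nir, red))    # approximately [[ 0.636  -0.111]]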

4.2.2 EVI classification

The enhanced vegetation index (EVI) is an optimized vegetation index used to enhance the vegetation part of the image; it improves the sensitivity in high biomass regions and reduces soil and atmospheric influences through the introduction of the Blue band (B). The EVI is calculated from the Blue (B), Red (R) and Near Infrared (NIR) bands using the formula:

EVI = 2.5 × (NIR − R) / (NIR + 6R − 7.5B + 1)

4.2.3 VEVI classification

The Verified Enhanced Vegetation Index (VEVI) is designed to overcome the disadvantage of the EVI, namely the false identification of mixed cyan roofs as vegetation. The VEVI is implemented by including the green band (G), which extracts the vegetation even more precisely than the previous methods. The VEVI is calculated using the formula:

VEVI = 2.5 × (NIR − R) / (NIR + 6R − 3.5B − 4G + 1)
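A minimal numpy sketch of the two indices above, under the same assumption of hypothetical band arrays in reflectance units (the coefficients are those given in the formulas):

# Illustrative sketch: pixel-wise EVI and VEVI from hypothetical band arrays.
import numpy as np

def evi(nir, red, blue):
    return 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)

def vevi(nir, red, blue, green):
    # The green band is added to suppress the cyan-roof response of the EVI.
    return 2.5 * (nir - red) / (nir + 6.0 * red - 3.5 * blue - 4.0 * green + 1.0)

nir, red = np.array([0.45]), np.array([0.10])
blue, green = np.array([0.08]), np.array([0.12])
print(evi(nir, red, blue), vevi(nir, red, blue, green))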
4.3 PROPOSED METHODS
In this section, object based approaches such as NDVI, GLCM and NN are discussed for identifying the urban trees.

4.3.1 Multi Resolution Segmentation

The multi resolution segmentation available in e-Cognition is a heuristic algorithm based on the Fractal Net Evolution Approach (FNEA). FNEA is a bottom-up merging technique that starts with one-pixel objects and a pairwise comparison of their neighbors in order to merge smaller image objects into larger ones. The procedure stops when no more merges are possible. The parameters that control the segmentation outcome are as follows:

(1) Scale parameter - Controls the amount of spectral variation within objects and therefore their resultant size. It has no unit.

(2) Shape - A weighting between an object's shape and its spectral colour: if 0, only the colour is considered, whereas if greater than 0, the object's shape is considered along with the colour, which produces less fractal boundaries. The higher the value, the more the shape is considered.

(3) Compactness - A weighting representing the compactness of the objects formed during the segmentation.

The scale parameter determines the size of the segmented objects: a smaller scale value generates objects of small size, whereas a higher scale value generates objects of large size. After experimenting with various scale values in increments of 10, it was found that the best result is obtained for a scale value of 20.
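The multi resolution segmentation itself is run in eCognition, which is proprietary. Purely as a hedged, open-source stand-in for experimenting with the effect of the scale parameter, a bottom-up graph-based segmentation such as Felzenszwalb's (available in scikit-image) can be used; its scale parameter plays a loosely analogous role, but it is not the FNEA algorithm used in this project.

# Rough stand-in only: Felzenszwalb graph-based segmentation from scikit-image,
# used here to illustrate how the number of image objects shrinks as the scale
# parameter grows. This is not eCognition's FNEA multi-resolution segmentation.
import numpy as np
from skimage.segmentation import felzenszwalb

band = np.random.rand(256, 256)            # hypothetical single-band image in [0, 1]

for scale in range(10, 51, 10):            # try scale values in increments of 10
    segments = felzenszwalb(band, scale=scale, sigma=0.8, min_size=20)
    print(scale, segments.max() + 1)       # scale value, number of objects produced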

4.3.2 Object Based Feature Extraction

4.3.2.1 Object Based Textures Feature (GLCM) Classification

GLCM is a tabulation of how often different combinations of gray levels co-occur in an image or image section. Layer values such as the mean, maximum difference and brightness are also calculated. However, we usually do not want a single measure for a whole image: the texture measure is calculated on a GLCM derived from a small area of the image, and we then look at a different small area and record its texture measure, covering the whole image, to find quantitatively how the pixel relationships differ in different places.

To obtain the output, the scale value is set to 20, the compactness to 0.5 and the shape to 0.1. Haralick originally proposed 14 texture measures calculated from the GLCM, the selection of which should be case- and class-specific. Specifically, in our study homogeneity and entropy, which measure contrast and orderliness respectively, are used.
Image texture measures the variation in image tone (brightness values) in a variable-sized, contiguous matrix of pixels in the image, and identifies repeating patterns of local variation in intensity. The gray-level co-occurrence matrix (GLCM) is computed as a first step in the texture measures. The GLCM is a two-dimensional array, P, in which both rows and columns represent the set of all possible brightness values. It is defined by specifying a displacement vector d = (dx, dy) and counting the pairs of pixels separated by d that have the specific gray levels i and j, such that

𝑃𝑑(𝑖, 𝑗) = 𝑛𝑖𝑗

where 𝑛𝑖𝑗 is the number of occurrences of the pixel value pair (i, j) at displacement d in the image, and the matrix P has dimension n × n, where n is 256 for the panchromatic band. In this research, a displacement vector of one pixel at 45° was used, and a 3×3 window was systematically shifted over the image to calculate the value for the central pixel.

The normalized co-occurrence matrix is:

N(i, j) = P(i, j) / Σ P(i, j),  where the sum runs over i, j = 0, …, n − 1.

This normalizes the co-occurrence values so they lie between 0 and 1 as joint
probabilities. Once the GLCM is computed, the rest of the texture measures can be
created.
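As a sketch of the computation just described (again outside the eCognition workflow), the normalised GLCM of a small window can be built with scikit-image; the one-pixel displacement at 45° and the 256 gray levels follow the text above, while the window itself is assumed to be an 8-bit patch of the panchromatic band.

# Minimal sketch: normalised GLCM of one image window (scikit-image >= 0.19 spelling).
import numpy as np
from skimage.feature import graycomatrix

def normalised_glcm(window, levels=256):
    """GLCM of a small uint8 window of the panchromatic band, normalised so
    that its entries N(i, j) are joint probabilities summing to 1."""
    window = np.asarray(window, dtype=np.uint8)
    glcm = graycomatrix(window,
                        distances=[1],        # displacement d = 1 pixel
                        angles=[np.pi / 4],   # 45 degree direction
                        levels=levels,
                        normed=True)          # divide by the sum of all counts
    return glcm[:, :, 0, 0]                   # the 2-D matrix for this (d, angle)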

The measure of contrast was chosen because it characterizes local variation. Contrast is expected to be high in informal areas due to the inconsistency of building materials, smaller structures, and the general haphazardness of features. Correlation was also chosen to determine the linear dependence of gray levels on neighbouring pixels; it is expected to be lower there.

The entropy value from the GLCM is a measure of the information content resulting from the randomness of the brightness values. High entropy means that there are no preferred gray level value pairs for the displacement vector d. Informal settlements were expected to have high entropy values relative to their planned and more affluent counterparts, as anticipated from results found in prior research (Haralick, 1973).

Table 4.1 Texture Measures of GLCM

GLCM Contrast (expected value for informal areas: lower than formal)
High contrast corresponds to a large difference in intensity of neighboring pixels (or high local variation) and a greater expected variety of surface materials, shapes, and sizes.
Contrast = Σ P(i, j) (i − j)²,  summed over i, j = 0, …, n − 1

Entropy (expected value for informal areas: lower than formal)
Entropy quantitatively measures the randomness of the gray-level distribution.
Entropy = Σ P(i, j) (−ln P(i, j))
P(i, j) is a probability measure with 0 ≤ P(i, j) ≤ 1, so ln P(i, j) is always zero or negative; the minus sign gives a positive value.

GLCM Energy (expected value for informal areas: higher than formal)
This statistic is also called Uniformity or Angular Second Moment. It measures textural uniformity, that is, pixel pair repetitions, and detects disorders in texture.
Energy = Σ P(i, j)²
Energy reaches a maximum value equal to one. High energy values occur when the gray level distribution has a constant or periodic form.

Homogeneity (expected value for informal areas: lower than buildings)
Homogeneity assumes larger values for smaller gray tone differences in pair elements and is more sensitive to the presence of near-diagonal elements in the GLCM. It measures the smoothness of the gray level distribution of the image and is (approximately) inversely correlated with contrast.
Homogeneity = Σ P(i, j) / (1 + (i − j)²)

GLCM Correlation (expected value for informal areas: lower than buildings)
GLCM Correlation measures the linear dependence of gray levels on neighbouring pixels. It is independent of the other texture measures and is calculated from the mean and variance of neighbouring pixel pairs.
Correlation = Σ P(i, j) (i − μᵢ)(j − μⱼ) / (σᵢ σⱼ)

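A small numpy sketch evaluating the measures of Table 4.1 from a normalised GLCM (for instance one returned by the sketch in Section 4.3.2.1); it follows the formulas in the table and is for illustration only.

# Texture measures of Table 4.1 computed from a normalised GLCM p.
import numpy as np

def texture_measures(p):
    n = p.shape[0]
    i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    eps = 1e-12                                   # avoids log(0) and division by zero
    mu_i, mu_j = np.sum(i * p), np.sum(j * p)
    sigma_i = np.sqrt(np.sum(p * (i - mu_i) ** 2))
    sigma_j = np.sqrt(np.sum(p * (j - mu_j) ** 2))
    return {
        "contrast":    np.sum(p * (i - j) ** 2),
        "entropy":     np.sum(p * -np.log(p + eps)),
        "energy":      np.sum(p ** 2),
        "homogeneity": np.sum(p / (1.0 + (i - j) ** 2)),
        "correlation": np.sum(p * (i - mu_i) * (j - mu_j)) / (sigma_i * sigma_j + eps),
    }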
4.3.2.2 Object Based Spectral Feature (NDVI) Classification


The normalized difference vegetation index (NDVI) is a simple graphical indicator that can be used to analyze remote sensing measurements, typically, but not necessarily, from a space platform, and to assess whether the target being observed contains live green vegetation or not. The NDVI uses the NIR and red channels in its formula. Healthy vegetation reflects more near-infrared (NIR) and green light than other wavelengths, but it absorbs more red and blue light.

4.3.3 Classification Techniques
4.3.3.1 Assign Class
The classification algorithms analyze image objects according to defined criteria and assign them to the class that best meets those criteria. The assign class algorithm assigns all objects of the image object domain to the class specified by the class parameter. The range of values for the classification is given in the condition table and the desired color is chosen for the class assignment; a new class is also created for the assignment.
4.3.3.2 Nearest Neighbor Classification
This is one type of classification used in the object level approach. It classifies an image object according to two or more of its nearest neighbors among the training samples and does not require any explicit rulesets or threshold calculations. The input samples selected for each class are used to classify all the objects in the output image. It uses parameters such as the maximum difference, brightness, standard deviation, mean, GLCM entropy, homogeneity, and area in pixels. It is a semi-automatic classification technique based on samples.
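In the project the nearest neighbor classification is run inside eCognition on the selected samples. The sketch below uses scikit-learn's KNeighborsClassifier as a stand-in to illustrate the idea; the feature columns and their values are hypothetical, merely mirroring the object features listed above.

# Stand-in sketch of nearest-neighbour classification of image objects.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Columns (illustrative only): mean, brightness, standard deviation,
# GLCM entropy, GLCM homogeneity, area in pixels.
X_train = np.array([[320.0, 310.0, 12.1, 4.2, 0.31,  850],    # a vegetation sample
                    [520.0, 505.0, 30.5, 5.1, 0.08, 1200]])   # a building sample
y_train = np.array(["vegetation", "building"])

knn = KNeighborsClassifier(n_neighbors=1)      # classic nearest neighbour
knn.fit(X_train, y_train)

X_objects = np.array([[315.0, 300.0, 11.8, 4.0, 0.30, 900]])  # an unclassified object
print(knn.predict(X_objects))                  # -> ['vegetation']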

4.4 Accuracy Assessment

A classification is not complete until its accuracy is assessed. One of the most common means of expressing classification accuracy is the preparation of a classification error matrix, also called a confusion matrix or a contingency table. Error matrices compare, on a category-by-category basis, the relationship between known reference data (ground truth) and the corresponding results of an automated classification. Various accuracy indices, such as the overall accuracy, producer accuracy and user accuracy, are used in determining the accuracy of the classification. Using the field visit data, the classification accuracy can be computed and the performance of the classification algorithm can be analyzed. The accuracy indices are defined below.

Overall Accuracy

It is obtained by dividing the total number of correctly classified pixels by the total number of pixels:

Overall Accuracy = Total number of correctly classified pixels / Total number of pixels

Producer Accuracy

It is obtained by dividing the correctly classified pixels in each category by the number of training set pixels used for that category:

Producer Accuracy = Correctly classified pixels in a category / Number of training set pixels for that category

User Accuracy

It is computed by dividing the number of correctly classified pixels in each category by the total number of pixels that were classified in that category:

User Accuracy = Correctly classified pixels in a category / Total number of pixels classified in that category
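A minimal numpy sketch of these indices, assuming a confusion matrix whose rows are the ground truth (reference) classes and whose columns are the classified classes; the orientation must of course match the table being evaluated.

# Accuracy indices from a confusion matrix (rows = reference, columns = classified).
import numpy as np

def accuracy_indices(cm):
    cm = np.asarray(cm, dtype=float)
    correct = np.diag(cm)
    overall = correct.sum() / cm.sum()        # overall accuracy
    producer = correct / cm.sum(axis=1)       # correct / reference totals (rows)
    user = correct / cm.sum(axis=0)           # correct / classified totals (columns)
    return overall, producer, user

# Example with the pixel counts of Table 5.3 (GLCM, dataset 1);
# classes are [vegetation, building, road, others].
cm = [[125, 7, 0, 18],
      [5, 36, 0, 1],
      [0, 15, 4, 14],
      [8, 5, 0, 3]]
overall, producer, user = accuracy_indices(cm)
print(round(overall * 100, 1))                # about 69.7, i.e. the ~69% of Table 5.3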

Fig 4.2 Ground Truth image of dataset 1 and dataset 2

The accuracy of the classified output is obtained using the ground truth data shown in fig 4.2. By comparing samples of the ground truth data with our classified output, we can tabulate the accuracy assessment, which is used to calculate the overall accuracy of the methods used.

CHAPTER 5

RESULTS AND DISCUSSIONS

5.1 RESULTS OF CONVENTIONAL METHODS FOR URBAN TREE IDENTIFICATION

For the given input image, the pixel based methods NDVI and EVI are computed. However, the results produced by these methods wrongly classify the cyan roofs as vegetation. In order to overcome this disadvantage, VEVI is implemented, which gives better results by incorporating the green band. These pixel based methods differentiate vegetation from non-vegetation.
5.1.1. NDVI Classification
The NDVI classification at the pixel level is used to separate vegetation and non-vegetation in urban areas. The differentiation of vegetation and non-vegetation areas is made from the index values: values close to 1 represent vegetation areas and values close to 0 represent non-vegetation areas. The NDVI classification on dataset 1 is shown in fig 5.1.

Fig.5.1 (a) Dataset 1 Fig.5.1 (b) NDVI on Dataset 1

5.1.2 EVI Classification

The EVI method is used to enhance the vegetation using a formula containing the blue, red and NIR bands. The VEVI method is more efficient than the EVI method. The EVI classification is shown in fig 5.2.

Fig.5.2 (a) Dataset 1 Fig.5.2 (b) EVI on Dataset 1

5.1.3 VEVI Classification


We propose the verified EVI (VEVI) to remove the mixed cyan roofs by incorporating the spectral information of the green band. The inclusion of the G band (band 3) in the VEVI can suppress the signal of cyan roofs and simultaneously enhance the vegetation signal, thereby performing better than the traditional EVI. The VEVI classification is shown in fig 5.3.

Fig.5.3 (a) Dataset 1 Fig.5.3 (b) VEVI on dataset 1

5.2 RESULTS OF PROPOSED METHODS FOR URBAN TREE IDENTIFICATION

For the given input image, the object based methods NDVI and GLCM are computed. In these methods, the image is segmented using multi-resolution segmentation and then classified. In this approach, segmentation is done with the important parameters of a scale value of 20, a compactness of 0.5 and a shape of 0.1. However, some of the samples are not identified correctly by the NDVI approach using the assign class algorithm. To overcome this disadvantage, the GLCM method has been used with additional parameters such as the standard deviation, maximum difference and brightness. The results obtained from the GLCM approach give a better outcome than the NDVI method.
5.2.1 GLCM Classification
In this approach, the GLCM classification is done using the GLCM homogeneity parameter for the two datasets. The ruleset for classifying the image is shown in Table 5.1 and Table 5.2 for datasets 1 and 2 respectively. The classification results obtained by these rules are shown in fig 5.4. Finally, the accuracy is calculated by comparing the classified data with the ground truth data and is shown in Table 5.3 and Table 5.4 for datasets 1 and 2 respectively.

(a) Dataset 1    (b) GLCM on dataset 1
(c) Dataset 2    (d) GLCM on dataset 2

Fig 5.4 GLCM classified images with scale value 20
Table 5.1 Ruleset for GLCM classification for scale value 20 for dataset 1
Classification Rule set
Vegetation GLCM homogeneity>= (0.1) && <=(0.349)

Building GLCM homogeneity>= (0.02) && < (0.1)


Road GLCM homogeneity>= (0.3561) && <(0.35615)

Table 5.2 Ruleset for GLCM classification for scale value 20 for dataset 2
Classification Rule set
Vegetation GLCM homogeneity>= (0.089)

Building GLCM homogeneity>= (0.01) && < (0.069)


Road GLCM homogeneity>= (0.077) && <(0.088)

Table 5.3 Accuracy assessment for GLCM classification of dataset1


Ground truth class   Vegetation (pixel)   Building (pixel)   Road (pixel)   Others (pixel)   Row total   User Accuracy (%)   Producer Accuracy (%)
Vegetation 125 7 0 18 150 51.8 83
Building 5 36 0 1 42 14.9 85
Road 0 15 4 14 33 1.6 12
Others 8 5 0 3 16 1.2 18
Column total 138 63 4 36 241

Overall Accuracy = 69%

Table 5.4 Accuracy assessment of GLCM Classification of dataset 2

Ground truth class   Vegetation (pixel)   Building (pixel)   Road (pixel)   Others (pixel)   Row total   User Accuracy (%)   Producer Accuracy (%)
Vegetation 102 8 2 0 112 91 85
Building 6 55 4 2 67 82 76
Road 4 6 2 0 12 16 25
Others 8 3 0 1 12 3.1 33
Column total 120 72 8 3 203

Overall Accuracy = 79%

5.2.2 Nearest Neighbor Classification

Though the GLCM based object level classification identifies vegetation correctly, its accuracy is low. The NN classification assigns each image object to the class of its nearest neighbors among the training samples and does not require any rulesets. This method extracts the vegetation more precisely than the GLCM method, and it is a simple, semi-automatic method. The standard features need to be added to each class; the samples for the active classes are then selected and the process is executed with the basic classification. The classification results are shown in fig 5.5. Finally, the accuracy is calculated by comparing the classified data with the ground truth data and is shown in Table 5.5 and Table 5.6 for datasets 1 and 2 respectively.

(a) Dataset 1    (b) NN on dataset 1
(c) Dataset 2    (d) NN on dataset 2

Fig 5.5 Nearest Neighbor Classification with scale value 20

Table 5.5 Accuracy assessment of NN classification of dataset 1

Ground truth class   Vegetation (pixel)   Building (pixel)   Road (pixel)   Others (pixel)   Row total   User Accuracy (%)   Producer Accuracy (%)
Vegetation 129 14 7 0 150 84 96.2
Building 3 38 1 0 42 90.4 59.3
Road 2 3 28 0 33 84.8 68.2
Others 0 9 5 2 16 12.5 100
Column total 134 64 41 2 241

Overall Accuracy = 81.7%


Table 5.6 Accuracy assessment for NN classification of dataset 2

Ground truth class   Vegetation (pixel)   Building (pixel)   Road (pixel)   Others (pixel)   Row total   User Accuracy (%)   Producer Accuracy (%)
Vegetation 106 4 2 0 112 94 89
Building 4 60 2 1 67 89 84
Road 2 3 7 0 12 18 75
Others 6 4 1 1 12 6.25 0
Column total 118 71 12 0 203

Overall Accuracy = 84%

5.2.3 NDVI Classification

The problem with the GLCM based object level textural classification and the NN classification technique is that they classify some buildings and roads as vegetation. To overcome this, an NDVI based object level spectral classification is applied. The NDVI classification in the object method is more efficient than the pixel level NDVI method and the GLCM method. It extracts the vegetation precisely, provided the image contains the NIR and Red bands, and it classifies vegetation, buildings and roads more precisely than the other methods above, with an accuracy of about 90%. For the urban tree classification, the NDVI value is greater than 0.1 for the two datasets. The ruleset for classifying the image is shown in Table 5.7 and Table 5.8 for datasets 1 and 2 respectively. The classification results obtained by these rules are shown in fig 5.6. Finally, the accuracy is calculated by comparing the classified data with the ground truth data and is shown in Table 5.9 and Table 5.10 for datasets 1 and 2 respectively.

(a) Dataset 1    (b) NDVI on dataset 1
(c) Dataset 2    (d) NDVI on dataset 2

Fig 5.6 NDVI classified images with scale value 20
Table 5.7 Ruleset for NDVI classification for scale value 20 for dataset 1

Classification Rule set


Vegetation NDVI >= (0.1)
Building NDVI <= (-0.1) && NDVI > (-0.4)
Road NDVI <= (-0.4)

Table 5.8 Ruleset for NDVI classification for scale value 20 for dataset 2

Classification   Rule set
Vegetation       NDVI >= (0.1)
Building         NDVI <= (-0.04) && NDVI >= (-0.4)
Road             NDVI <= (-0.5) && NDVI >= (-0.0993)
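As a small illustration of applying such a ruleset outside eCognition, the sketch below evaluates the dataset 1 thresholds of Table 5.7 on per-object mean band values; the thresholds are those of the table and are not universal constants.

# Sketch: NDVI ruleset of Table 5.7 applied to one image object's mean band values.
def ndvi(mean_nir, mean_red):
    return (mean_nir - mean_red) / (mean_nir + mean_red + 1e-12)

def classify_object(mean_nir, mean_red):
    v = ndvi(mean_nir, mean_red)
    if v >= 0.1:
        return "vegetation"
    elif -0.4 < v <= -0.1:
        return "building"
    elif v <= -0.4:
        return "road"
    return "unclassified"            # objects falling outside the ruleset ranges

print(classify_object(mean_nir=0.45, mean_red=0.20))   # -> 'vegetation'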

Table 5.9 Accuracy assessment of NDVI Classification of dataset 1

Ground truth class   Vegetation (pixel)   Building (pixel)   Road (pixel)   Others (pixel)   Row total   User Accuracy (%)   Producer Accuracy (%)
Vegetation 146 1 3 0 150 60.5 97
Building 1 37 4 0 42 15.3 88
Road 4 4 29 0 33 12 87
Others 0 0 7 5 16 23 31
Column total 151 42 43 5 241

Overall Accuracy = 90%

Table 5.10 Accuracy assessment of NDVI Classification of dataset 2


Ground truth class   Vegetation (pixel)   Building (pixel)   Road (pixel)   Others (pixel)   Row total   User Accuracy (%)   Producer Accuracy (%)

Vegetation 108 1 1 2 112 96 97


Building 1 63 2 2 67 94 94
Road 0 1 30 1 32 93 88
Others 2 2 1 7 12 58 58
Column total 111 67 34 12 223

Overall Accuracy = 93%

The accuracy assessment for the two datasets using the GLCM method at the object level has been formulated. For both datasets, this method identifies the vegetation more poorly than the other two methods and wrongly identifies buildings and roads, so it gives a lower accuracy than the other two methods. The accuracies for the two datasets are formulated and shown in Tables 5.3 and 5.4.

The accuracy assessment of the Nearest Neighbor classification gives better results than the GLCM method at the object level for the two datasets, but not better than the NDVI method; the higher accuracy of NN compared with GLCM is shown in Tables 5.5 and 5.6. After that, the NDVI method at the object level is formulated; this method gives better accuracy than the above methods, showing that NDVI is the best method for our datasets, as calculated in Tables 5.9 and 5.10.

The overall accuracy has been calculated for the spectral and textural features for the two datasets. The GLCM accuracy is in the range of 69-79%, the Nearest Neighbor classification is in the range of 81.7-84%, and the NDVI accuracy is 90% or above.

Table 5.11 Overall accuracy assessment for all methods of dataset 1

                       GLCM                              Nearest Neighbour                 NDVI
Ground truth class     User Acc. (%)   Prod. Acc. (%)    User Acc. (%)   Prod. Acc. (%)    User Acc. (%)   Prod. Acc. (%)
Vegetation             51.8            83                84              96.2              60.5            97
Building               14.9            85                90.4            59.3              15.3            88
Road                   1.6             12                84.8            68.2              12              87
Others                 1.2             18                12.5            1.2               23              31
Overall Accuracy       69                                81.7                              90

Table 5.12 Overall accuracy assessment for all methods of dataset 2

                       GLCM                              Nearest Neighbour                 NDVI
Ground truth class     User Acc. (%)   Prod. Acc. (%)    User Acc. (%)   Prod. Acc. (%)    User Acc. (%)   Prod. Acc. (%)
Vegetation             91              85                94              89                96              97
Building               82              76                89              84                94              94
Road                   0.16            76                18              75                93              88
Others                 3.1             33                6.25            0                 58              58
Overall Accuracy       79                                84                                93

The accuracy assessment of these three techniques for the two datasets has been evaluated and the overall accuracy found. The best of all the methods is NDVI, because it precisely identifies the vegetation areas and has an accuracy of 90% or more.

[Bar chart: overall accuracies on the two datasets – GLCM 69% (dataset 1) and 79% (dataset 2); Nearest Neighbour 81.7% and 84%; NDVI 90% and 93%.]

Fig 5.7 Comparison of overall accuracies of the different methods on the two datasets

CHAPTER 6
CONCLUSION

In this study, different approaches have been used to identify urban trees using VHR satellite images. This project dealt with a two level framework to precisely identify the urban trees. Pixel level classification using different indices (NDVI, EVI and VEVI) was applied first; however, these techniques falsely identify cyan colored roofs and roads as urban trees. In order to overcome this, object based image analysis is applied: the multi resolution segmentation technique is used, and the classification of the segmented image uses spectral (NDVI) and textural (GLCM homogeneity) features together with parameters such as mean, area and texture. For comparison purposes, the input image is also classified using the nearest neighbor (NN) classification technique. Finally, the accuracy is calculated for the object based spectral features, the object based textural features and the NN technique. In order to prove the effectiveness of the proposed methods, very high resolution data of two different datasets of Madurai city, South India, acquired by the WorldView-2 sensor (1.84 m), were used to identify urban trees among urban buildings and other features. The overall accuracies of the GLCM, NN and NDVI methods are 69%, 81.7% and 90% for dataset 1 and 79%, 84% and 93% for dataset 2 respectively. However, considerable additional work is needed to obtain better results by considering structural features such as shape and size.

REFERENCES
[1] J. P. Ardila, W. Bijker, V. A. Tolpekin, and A. Stein, “Quantification of crown changes and change uncertainty of trees in an urban environment,” ISPRS J. Photogramm. Remote Sens., vol. 74, pp. 41–55, 2012.

[2] K. T. Ward and G. R. Johnson, “Geospatial methods provide timely and comprehensive urban forest information,” Urban Forestry Urban Greening, vol. 6, no. 1, pp. 15–22, 2007.

[3] C. Freeman and O. Buck, “Development of an ecological mapping methodology for urban areas in New Zealand,” Landscape Urban Plan., vol. 63, no. 3, pp. 161–173, 2003.

[4] A. B. Cumming, M. F. Galvin, R. J. Rabaglia, J. R. Cumming, and D. B. Twardus, “Forest health monitoring protocol applied to roadside trees in Maryland,” J. Arboriculture, vol. 27, no. 3, pp. 126–138, 2001.

[5] C. Jim and H. Liu, “Species diversity of three major urban forest types in Guangzhou City, China,” Forest Ecol. Manage., vol. 146, no. 1, pp. 99–114, 2001.

[6] C. Jim, “Land use and amenity trees in urban Hong Kong,” Land Use Policy, vol. 4, no. 4, pp. 281–293, 1987.

[7] K. McGarigal and B. J. Marks, “Spatial pattern analysis program for quantifying landscape structure,” US Dept. Agriculture, Forest Service, Pac. Northwest Res. Station, Portland, OR, USA, Gen. Tech. Rep. PNW-GTR-351, 1995.

[8] Q. Luan, C. Ye, and W. Li, “Vegetation landscape change analysis based on remote sensing in northwest of Beijing,” in Proc. 21st Int. Conf. Geoinformat., 2013, pp. 1–6.

[9] R. Dinuls, G. Erins, A. Lorencs, I. Mednieks, and J. Sinica-Sinavskis, “Tree species identification in mixed Baltic forest using LiDAR and multispectral data,” IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 5, no. 2, pp. 594–603, Apr. 2012.

[10] B. Somers and G. P. Asner, “Invasive species mapping in Hawaiian rainforests using multi-temporal Hyperion spaceborne imaging spectroscopy,” IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 6, no. 2, pp. 351–359, Apr. 2013.

[11] H. Zhao, P. Xiao, and X. Feng, “Edge detection of street trees in high-resolution remote sensing images using spectrum features,” in Proc. 8th Int. Symp. Multispectral Image Process. Pattern Recog., 2013, pp. 89180M-1–89180M-6.

[12] A. B. Cumming, M. F. Galvin, R. J. Rabaglia, J. R. Cumming, and D. B. Twardus, “Forest health monitoring protocol applied to roadside trees in Maryland,” J. Arboriculture, vol. 27, no. 3, pp. 126–138, 2001.

[13] M. Seymour, J. Wolch, K. D. Reynolds, and H. Bradbury, “Resident perceptions of urban alleys and alley greening,” Appl. Geography, vol. 30, no. 3, pp. 380–393, 2010.

[14] C. Jim and H. Liu, “Patterns and dynamics of urban forests in relation to land use and development history in Guangzhou City, China,” Geographical J., vol. 167, no. 4, pp. 358–375, 2001.

[15] C. Jim and H. Liu, “Species diversity of three major urban forest types in Guangzhou City, China,” Forest Ecol. Manage., vol. 146, no. 1, pp. 99–114, 2001.

[16] C. Jim, “Land use and amenity trees in urban Hong Kong,” Land Use Policy, vol. 4, no. 4, pp. 281–293, 1987.

[17] C. Gong, J. Chen, and S. Yu, “Biotic homogenization and differentiation of the flora in artificial and near-natural habitats across urban green spaces,” Landscape Urban Plan., vol. 120, pp. 158–169, 2013.

[18] L. Wang, W. Gong, Y. Ma, and M. Zhang, “Modeling regional vegetation NPP variations and their relationships with climatic parameters in Wuhan, China,” Earth Interact., vol. 17, no. 4, pp. 1–20, 2013.

[19] Z. Jiang, A. R. Huete, K. Didan, and T. Miura, “Development of a two-band enhanced vegetation index without a blue band,” Remote Sens. Environ., vol. 112, pp. 3833–3845, Oct. 2008.

[20] A. Huete, H. Liu, K. V. Batchily, and W. Van Leeuwen, “A comparison of vegetation indices over a global set of TM images for EOS-MODIS,” Remote Sens. Environ., vol. 59, pp. 440–451, 1997.

[21] R. Mathieu and J. Aryal, “Object-based classification of Ikonos imagery for mapping large-scale vegetation communities in urban areas,” Sensors, vol. 7, no. 11, pp. 2860–2880, 2007.

[22] M. Baatz and A. Schäpe, “Multiresolution segmentation: An optimization approach for high quality multi-scale image segmentation,” in Proc. Angewandte Geographische Informationsverarbeitung XII, Beiträge zum AGIT-Symp. Salzburg 2000, Karlsruhe, Germany, 2000, pp. 12–23.

[23] K. H. Riitters et al., “A factor analysis of landscape pattern and structure metrics,” Landscape Ecol., vol. 10, no. 1, pp. 23–39, 1995.

[24] R. M. Haralick, “Statistical and structural approaches to texture,” Proc. IEEE, vol. 67, no. 5, pp. 786–804, May 1979.

[25] K. McGarigal and B. J. Marks, “FRAGSTATS: Spatial pattern analysis program for quantifying landscape structure,” Oregon State Univ., Corvallis, OR, USA, Tech. Rep. 2.0, 1995.

[26] M. Alonzo, B. Bookhagen, and D. A. Roberts, “Urban tree species mapping using hyperspectral and lidar data fusion,” Remote Sens. Environ., vol. 148, pp. 70–83, 2014.

[27] F. E. Kuo and W. C. Sullivan, “Environment and crime in the inner city: does vegetation reduce crime?” Environment Behav., vol. 33, no. 3, pp. 343–367, 2001.
