
DEVELOPMENT OF A MOBILE-BASED SUGARCANE (Saccharum

officinarum) VARIETY CLASSIFIER USING CONVOLUTIONAL NEURAL


NETWORK

A Thesis Manuscript
Presented to the Faculty of the
Department of Computer Science and Technology
College of Engineering and Technology
Visayas State University
Visca, Baybay City, Leyte

In Partial Fulfillment
of the Requirements for the Degree of
BACHELOR OF SCIENCE IN COMPUTER SCIENCE

DARWIN G. CABARRUBIAS
JULY 2022

i
APPROVAL SHEET

ii
TRANSMITTAL

iii
ACKNOWLEDGMENT

The author of this research would like to express his sincere gratitude to the

following individuals who contributed their time, wisdom, and energy to this

successful research. First and foremost, the author would like to praise the Almighty God, who is forever faithful to His child, for the strength, life, and love He bestowed upon His humble servant. This research would never have been completed unless He willed it to

be. To his adviser, mentor, teacher, and guide, Dr. Jonah Flor O. Maaghop, for the

patience and enthusiasm throughout every step in developing this research. Her great

wisdom and experience have given the author valuable insights in completing this

thesis. To the Student Research Committee members, Dr. Jude B. Rola and Mr.

Jomari Joseph A. Barrera, and the Student Research Committee Chairperson, Prof.

Michael Anthony Jay B. Regis, for their invaluable input in some areas of this

research for its improvement. To Mr. Teofilo L. Olasiman Jr., a local agriculturist of

Hideco Sugar Milling Co., for providing the necessary knowledge about sugarcanes

and assisting the author with gathering the datasets needed to complete this research.

To the department head of the Department of Computer Science and Technology,

Prof. Magdalene C. Unajan, for her kindness, understanding, encouragement, and

support, for allowing the author to use the department’s facilities to finalize this

research paper, and for her final approval of the manuscript. To Mr. and Mrs. Sarl

James and Elizabeth Mamasig-Sebios, for their indispensable knowledge during the

development of the mobile application of this research. Their great wisdom and

assistance made its development feasible for the allotted time frame. To his parents,

Mr. and Mrs. Dominador and Nonita Cabarrubias, as well as his brother and sister-in-

law, Mr. and Mrs. Ronnie and Precious Cabarrubias, for their faith, love, and

unending support to the author during the research process. And finally, the author

would like to offer his deepest gratitude to himself, for the confidence and faith needed to finally complete this research. All glory to God!

v
TABLE OF CONTENTS

Approval Sheet ii
Transmittal iii
Acknowledgment iv
Table of Contents vi
List of Tables viii
List of Figures ix
List of Listings x
List of Equations xi
List of Appendices xii
Abstract xiii

INTRODUCTION 1
Nature and Importance of the Study 1
Statement of the Problem 4
Objectives of the Study 4
Significance of the Study 5
Scope and Limitations of the Study 5
Time and Place of the Study 6

REVIEW OF LITERATURE 7
Plant Variety Classifiers 7
Sugarcane Research Studies 8
Convolutional Neural Network 9

MATERIALS AND METHODS 13


Dataset Collection 13
Image Pre-Processing 14
System Architecture Design 15
Assembling the Convolutional Neural Network Model 16
Model Training 20
Anaconda and Ionic Installation 20
Android Studio, Android SDK, Android Target, Java JDK, and Gradle 20
Deploying the Model to the Mobile Device 21
User Interface Design 21
System Testing and Evaluation 22

RESULTS AND DISCUSSION 25


Data Augmentation 25
Designing the Model 26
Results of Model Training 28
Testing Phase 30
Performance Evaluation 31
Hybrid Mobile Application Development 33

vi
SUMMARY, CONCLUSIONS, AND RECOMMENDATIONS 41
Summary 41
Conclusions 42
Recommendations 42

LITERATURE CITED 44

APPENDICES 47
Appendix A. Source Code for the Hybrid Mobile Development 47
Appendix B. Source Code for the Model Training and Testing 54
Appendix C. Sample Sugarcane Varieties 59
Appendix D. Test Results 61
Appendix E. Definition of Terms 65
Appendix F. Curriculum Vitae 67

vii
LIST OF TABLES

TABLE TITLE PAGE

Table 1. Dataset for Sugarcane Variety Classifier 13

Table 2. Summary of the model 17

Table 3. Confusion matrix 22

Table 4. Cohen's Kappa equivalent value 24

Table 5. Model accuracy and loss on training and validation 29

Table 6. Confusion matrix for classification 31

Table 7. Results of performance evaluation 32

viii
LIST OF FIGURES

FIGURE TITLE PAGE

Figure 1. Sugarcane production from April to June 2021 3

Figure 2. Sample image augmentation (A) original image and (B) augmented image 14

Figure 3. System architecture 15

Figure 4. CNN model architecture 17

Figure 5. First feature map 18

Figure 6. Second feature map 19

Figure 7. Third feature map 19

Figure 8. Fourth feature map 19

Figure 9. Use case diagram of the sugarcane variety classifier 22

Figure 10. Model architecture 27

Figure 11. Model accuracy 29

Figure 12. Model loss 30

Figure 13. Application logo and title 37

Figure 14. Home page 38

Figure 15. Choose image from gallery 39

Figure 16. Classifying image file 40

ix
LIST OF LISTINGS

LISTING TITLE PAGE

Listing 1. Data Augmentation 25

Listing 2. Augmented image reproduction 25

Listing 3. Model callbacks and history 27

Listing 4. Model classification on test images 30

Listing 5. Model conversion to JSON File 31

Listing 6. Loading the JSON model 33

Listing 7. Image rescale 34

Listing 8. Image conversion to a tensor 35

Listing 9. Classifications 35

Listing 10. Capture image through camera 36

Listing 11. Crop image 36

Listing 12. Cropped image processing 37

x
LIST OF EQUATIONS

EQUATION TITLE PAGE

Equation 1. Number of parameters 18


Equation 2. Accuracy 23
Equation 3. Precision 23
Equation 4. Recall 23
Equation 5. F1’s Score 23
Equation 6. Cohen’s Kappa 23

xi
LIST OF APPENDICES

APPENDIX TITLE PAGE

Appendix A. Source Code for the Hybrid Mobile Development 47


Appendix B. Source Code for the Model Training and Testing 54
Appendix C. Sample Sugarcane Varieties 59
Appendix D. Test Results 61
Appendix E. Definition of Terms 65
Appendix F. Curriculum Vitae 67

xii
ABSTRACT

DARWIN G. CABARRUBIAS. Visayas State University. June 2022.

DEVELOPMENT OF A MOBILE-BASED SUGARCANE (SACCHARUM

OFFICINARUM) VARIETY CLASSIFIER USING CONVOLUTIONAL NEURAL

NETWORK

Adviser: Jonah Flor O. Maaghop

This study aimed to develop a mobile application to help local farmers and

researchers classify sugarcane varieties. It seeks to address the issue of the low

availability of experts in classifying varieties since having a mobile device for

classification would be much more convenient. In this study, a convolutional neural

network was trained and converted to a JSON file, later deployed in a hybrid mobile

application. The sequential type model consists of four convolutional layers, each

activated by a Rectified Linear Unit function, followed by a pooling layer, a

normalization layer, and a drop-out, and finally, two dense layers were added. The

model was trained with a total of 4,500 images, augmented and reproduced from 200 cropped images per class, and validated using 64 images per class from five different classes. When

evaluated, a loss value of 0.1153 and an accuracy rate of 95.67% were attained. When

the model was tested using the test set, it acquired an average accuracy of 88.38%, an

average precision of 69.37%, and an average recall of 85.278%, indicating that the

model misidentified some varieties due to the similarities of their features.

The Ionic Framework was used to develop the graphical user interface of this

application and it was deployed to the Android platform using Cordova plugins.

Furthermore, it was recommended to cover other sugarcane varieties and apply image

xiii
processing techniques to improve the performance of the model. Additional features

such as automated cropping of the region of interest and saving classification results

in online storage for easy access are also being considered.

Keywords: Convolutional Neural Network, Hybrid Mobile Application, Ionic

Framework

xiv
CHAPTER I

INTRODUCTION

Nature and Importance of the Study

Sugarcane, also known as Saccharum officinarum, is a perennial herb of the

Poaceae family, mainly cultivated for its juice, from which sugar is processed. Most

of the species are found in some parts of Oceania and Asia, and it is one of the

primary crops in tropical countries, providing jobs for hundreds of people, whether directly or indirectly (Santos et al., 2015).

from sugar cane. There are four main byproducts of sugarcane: (1) sugarcane tops, or

SCT, which are primarily used as livestock feed. Cattle and other livestock in

Mauritius rely heavily on SCT, especially during the winter (Naseeven, 1988); (2)

bagasse, which is commonly used as fuel in combined heat and power to produce

steam and electricity (Qing et al., 2018); (3) filter muds, which are used to increase

organic matter on the soil and bring benefits to the soil and plants (Rahmad et al.,

2020); and (4) molasses, which is also used as livestock feed and as a fermentation

source for ethyl alcohol and other chemicals to be used in the industry (Caballero et

al., 2003).

There are 16 varieties of sugarcane that have already been registered and

certified by the National Seed Industry Council, or the NSIC, and are being subjected

to hybridization to develop better varieties further. This includes Phil 8013, Phil 8477,

Phil 8583, Phil 8727, Phil 8839, Phil 8943, Phil 91-1091, VMC 947, VMC 84-524,

VMC84-549, VMC 86-550, VMC 87-95, VMC 87-599, VMC 88-354, VMC 95-152,

VMC 95-06 (Sugarcane High Yielding Varieties, 2006). Sugarcane production was

6.91 million metric tons from April to June 2021, up from 5.12 million tons in the

same quarter last year, a 34.8 percent increase. Western Visayas remained the leading

producer, with 3.24 million metric tons of sugarcane harvested,

accounting for 46.9% of total sugarcane production. Northern Mindanao and Central

Visayas came in second and third, with 20.4 percent and 13.8 percent shares,

respectively (Major Non-Food and Industrial Crops Quarterly Bulletin, April-June

2021, 2021).


Accurate information on sugarcane varieties is essential in cropping,

particularly in predicting sugarcane production and in assessing its vulnerability to

pests and diseases (Apan et al. 2004).



Figure 1. Sugarcane production from April to June 2021

The current techniques for classifying specific sugarcane varieties are limited to genomic analysis and visual discrimination. Having trained staff visually distinguish sugarcane varieties is possible. Still, the outcome would

differ depending on the personnel and location due to the differences in plant ages

and farming methods such as fertilization of soil and irrigation, exposure to sunlight,

and removal of dry leaves (Neto et al. 2018).

Classification of plant or crop varieties would be tedious if done manually by an expert. Often, these experts are not around to advise farmers on classification. Classifying plant varieties, even without the help of an expert or

without possessing a high-end device, would help local researchers and farmers in

the future. Local sugarcane farmers of Barangay Montebello and Barangay

Masarayao, Kananga, Leyte have been classifying varieties manually or having the

agriculturists of Hideco Sugar Milling Company or HISUMCO classify for them.

The problem of insufficient experts would be solved by a mobile app that could

accurately classify different types of sugarcane.


4

The physical characteristics of a plant present in the leaf, in the flower, or in

the fruit itself are the deciding factors for its variety classification. Selecting this

unique characteristic for classifying varieties, reducing the redundant features

without losing essential information, and processing this acquired data accurately by

a mobile application would significantly reduce classification errors.

With the help of machine learning and neural networks integrated into a mobile application, a new generation of farmers can now improve on the old procedures of classifying crop varieties handed down by the previous generation of farmers.

Statement of the Problem

Agriculturists at Hideco Sugar Milling Company (HISUMCO) and local

farmers have been classifying sugarcane varieties manually since the factory started

its operation and farmers began planting. Classifying its varieties using only the naked

eye is not convenient enough for the local farmers and the researchers involved in this

type of crop. Without the presence of experts, the classification of sugarcane would be

tedious and time-consuming. The technology and devices that other countries have been using for classifying sugarcane varieties are not yet readily available on local farms and in research centers. It would be helpful to have a mobile app that can reliably distinguish the

different types of sugarcane.

Objectives of the Study

The study mainly aims to develop an image-based sugarcane variety

classifier.

Specifically, the study intends to:

1. build a sugarcane variety classification model using a Convolutional



Neural Network,

2. evaluate the performance of the model using its Accuracy, Precision,

Recall, F1’s Score and Cohen’s Kappa metrics; and

3. design and develop a user-friendly graphical user interface for mobile-

based sugarcane variety classifier.

Significance of the Study

The development of the Sugarcane Saccharum officinarum mobile variety

classifier will aid farmers, local agriculturists, and researchers in accurately

classifying sugarcane varieties. Experts in recognizing sugarcane varieties may not

always be available for local farmers to consult on classification. If experts are unavailable or farmers cannot afford to pay for their services, having an easy-to-use

mobile application for variety classification would help our local farmers save time

and money. The probability of errors in manual recognition would be reduced,

allowing researchers to develop more sugarcane products in the future.

Scope and Limitations of the Study

The dataset for this study is composed of images of sugarcane stalks captured

using an Oppo A31 smartphone with a 12-megapixel rear camera and a resolution of

1800x4000 pixels. The collection of the images was done between 9:00am and

3:00pm in Barangay Masarayao and Barangay Montebello, Kananga, Leyte. This

study has only covered five sugarcane varieties: PS 1, VMC 84-524, VMC 86-550,

PHIL 94-0913, and VMC 95-06. The mobile app was built on a framework called

Ionic. It was only deployed on Android OS versions 9, 10, 11, and 12. Furthermore, a

Convolutional Neural Network Model developed in Tensorflow was used in this

study.

Time and Place of the Study

The datasets were gathered from the sugarcane fields in Barangay Masarayao

and Barangay Montebello, Kananga, Leyte on December 26-27, 2021, several weeks

before the sugarcane harvest. The development of the system was conducted at the

Department of Computer Science and Technology, Visayas State University, Visca,

Baybay City, Leyte, from March 2022 to June 2022.


7

CHAPTER II

REVIEW OF LITERATURE

Plant Variety Classifiers

In agricultural fields, plant variety classification is a continuing study. Different

researchers have utilized distinct parts of a plant, such as fruits and flowers, to

identify its variety, but the most widely used part is the leaf, which is the easiest part

to acquire. Different algorithms are applied to extract the region of interest correctly.

In a recent study (Tan et al. 2018), Sobel Edge Detection was used to extract the vein

architecture in the leaf. Unajan et al. (2017) used the Otsu Thresholding Method to

separate a sweet potato leaf image from its background, and a Gray Level Co-

Occurrence Matrix (GLCM) was used in extracting the second-order statistical

features. After the features were extracted, a classification procedure was used to

recognize plant varieties.

In a study by Tabada & Beltran (2019), an Artificial Neural Network (ANN)

was used as a classifier to recognize four mango varieties. ANN can learn and model

complex and non-linear relationships, infer hidden relationships from unseen data, and impose no restrictions on the input variables. ANN is ideally suited for

recognizing plant varieties with extensive input data. The overall accuracy achieved

after the system was implemented was 96%, indicating a high accuracy rating. It was

recommended to use Zernike Moments in the future to increase the accuracy rating.

Furthermore, with machine learning methods and neural networks, the authors

(Zhang et al., 2021) used three machine learning algorithms in classifying corn seed

varieties. The Deep Convolutional Neural Network (DCNN), K Nearest Neighbor

(KNN), and Support Vector Machine (SVM) were the models that were compared

based on the accuracy rating in this study. The DCNN model showed promising

results, having a 100% rating for the accuracy, sensitivity, specificity, and precision of

the training set and a 94% accuracy rating for the testing set. The KNN model had the

worst performance compared to the DCNN and SVM models. However, the study

only used four corn seed varieties and therefore recommended adding more varieties

in the future and that a real-time system should be used to classify corn seed varieties.

Sugarcane Research Studies

Neto et al. (2018) used visible and near-infrared spectral reflectance of stalks

and multivariate methods in classifying four sugarcane varieties. In this study,

sugarcane billets were planted in a controlled environment. After 163 days, 12 stalks

of each variety were randomly selected for field spectroscopy measurements using a

portable spectrometer and a reflection probe to emit light onto the sugarcane stalks

and collect the reflected light. After calculating the reflectance values of the images,

four different multivariate methods were used to classify the sugarcane varieties:

Principal Component Analysis (PCA), Factorial Discriminant Analysis (FDA),

Stepwise Forward Discriminant Analysis (SFDA), and Partial Least-Squares

Discriminant Analysis (PLS-DA). While evaluating the PCA results, two varieties'

scores overlapped, so only classifiers of three varieties were tested. PLS-DA was

shown to have the highest overall classification accuracy of 82%, followed by FDA

with 81.4% and SFDA with 73.6%. It was recommended that future studies should

expand the wavelength range and cultivate more sugarcane varieties to observe if the

results remain.

In a study by the authors (Alencastre-Miranda et al. 2020), healthy sugarcane

billets were identified using a Convolutional Neural Network in order to increase the population of healthy sugarcane plants. They aimed to detect damage features in the shortest

possible time with the minimum number of images to retrain the network to plant

healthy billets. Different CNN models were used in this study, including AlexNet,

VGG-16, and ResNet101. AlexNet showed the best performance, while VGG-16 and

ResNet101 showed the least. Two-step transfer learning was used to solve the

problem of retraining models with limited data. It was recommended that manual

preprocessing during the data collection be considered in future research.

Convolutional Neural Network

Authors (Sabzi et al. 2017) conducted a study to classify three orange

varieties. Three features were extracted from each variety, namely texture, color, and

shape, and they acquired 263 features. The combination of Artificial Neural Network

and biologically inspired metaheuristic algorithms was used to aid in the

computational efficiency of the process and to avoid the problem of overfitting

classifiers, known as the “curse of dimensionality.” In this case, the extracted features

were more than the number of objects, 100 samples per class. The ANN was used as

the classifier, which consists of multilayer perceptrons (MLP), and the second

algorithm would iteratively control the successive executions of the ANN until the

optimal features were selected. Three metaheuristic algorithms were used and

produced three hybrid feature selection methods: Ant Colony Optimization Algorithm

(ANN-ACO), Particle Swarm Optimization Algorithm (ANN-PSO), and Simulated

Annealing Algorithm (ANN-SA). Two-hybrid algorithms were used for the

classification: Artificial Bee Colony (ABC) and Harmony Search (HS). A K-nearest

neighbor classifier (KNN) was compared with the hybrid techniques. The KNN

showed a very low accuracy rating of only 70%, ANN-ABC’s accuracy rating was 96.70%, and ANN-HS’s accuracy rating was 94.28%. The study recommended the application

of deep learning neural networks for future research.

The authors conducted the study (Pasion et al., 2019) to classify rice varieties

and used Convolutional Neural Network as the classifier. This study resolved the

overfitting problem by modulating the neural networks' entropic capacity and

designing a CNN framework using a multiscale and sliding window approach. A

trivial CNN, which only has a few layers and filters per layer, together with data augmentation and a 0.5 drop-out rate, was used to remove unnecessary features. By preventing

the layer from seeing the same feature twice, the drop-outs have helped reduce the

overfitting problem. A confusion matrix was used to describe the performance of the

classifier, and after creating the neural network, it was integrated into a mobile device

and resulted in an overall 93.8% accuracy rating. Confusion and miscalculations were

attributed to poor lighting conditions.

To help farmers classify olive fruit varieties during postharvest, the authors

(Real et al., 2019) developed an automated olive variety classifier using six models of

Convolutional Neural Network. This study gathered data using an ad hoc image

acquisition system designed to potentially integrate with a conveyor system. The

fruits were stochastically placed in the image acquisition system to mimic an image

capture on a conveyor system. Due to the variations in the color of ripe olive fruit in a

single variety, the color features of the fruits were discarded at the beginning, and

morphological features were considered. After segmenting the images, each fruit

image was extracted as an individual 501x501 pixel binary image and used a

weighing-function-based transformation to add sphericity and three-dimensionality to

each fruit image. Training of the six CNN model architectures was done after the
images of olive fruits had been extracted, and to quantify the performance of the classifiers, a metric was used based on the ratio between the number of fruits correctly categorized within a specific variety and the total number included in the corresponding validation subset. The overall average accuracy of the six CNN models

was more than 90% in almost all cases, and it was recommended to increase the

number of elements the neural network was trained with and to add new olive

cultivars in the future.

The manual classification of Abaca fiber is tedious and time-consuming, so

the authors (Barrera & Montañez, 2020) developed an automated abaca fiber grade

classification and used Convolutional Neural Network as the classifier. The images

were taken in a controlled environment using objective equipment. The CNN was

designed to have five convolutional layers with a rectified linear unit for the

activation function and added three layers for pooling; the last pooling layer was

flattened into a single column vector. A stochastic gradient optimization method

determined the weights and biases between the neurons. After the model's training,

the results were plotted on a confusion matrix, and with this, the accuracy rating and

the Cohen kappa value were calculated. The accuracy rating of the classifier was

83%, which indicated that the application of a customized CNN was sufficient for

classification. It was suggested that in the future, either more features should be taken

from the images or a different AI algorithm should be used.

To detect the incidence of pests and diseases in jackfruit, the authors (Oraño et

al., 2019) developed a mobile-based application to help jackfruit farmers identify

damage to a jackfruit. Using Tensorflow to develop the CNN model, it had three

Convolutional Layers followed by a max-pooling layer. Data augmentation was



applied to the training dataset to avoid over-fitting using Keras’s

ImageDataGenerator() class. The model achieved a significant validation accuracy of

97.93% and a training accuracy of 99.17% in its final epoch. The model’s

performance was then evaluated using basic metrics such as accuracy, precision,

recall, and F1 score, and it achieved a high accuracy rating of 99.87%. After the

model’s performance was evaluated, it was deployed to an Android application,

converting the H5 file model to a TFLite model. It was suggested to add more jackfruit damage classes, such as cracked fruit and browned fruit.

Since few previous related studies have produced mobile-based applications for sugarcane that use a Convolutional Neural Network as a classifier, this study was developed to introduce new ideas related to sugarcane research.
13

CHAPTER III

MATERIALS AND METHODS

Dataset Collection

The sugarcane sample stalks were gathered from plantations located in

Barangay Montebello and Barangay Masarayao, Kananga, Leyte and the manual

recognition was done by Agriculturist Teofilo L. Olasiman Jr. In this study, 296

images for each of the five varieties were divided into three sets of images: 200 for

training, 64 for validation, and 32 for testing. A 12-megapixel rear camera of Oppo

A31 was used to capture these images and only the middle part of the stalk was

considered as the region of interest. Since the dataset was too small for training, data

augmentation methods were carried out to reproduce the training images and

produced 900 images per class. Table 1 specifies the total number of images per class.

Table 1. Dataset for Sugarcane Variety Classifier

                                 Number of Images
Classes                   Training   Validation   Testing   Total (Class)
VMC 84-524                   200          64          32         296
VMC 86-550                   200          64          32         296
Phil 94-0913                 200          64          32         296
VMC 95-06                    200          64          32         296
PS 1                         200          64          32         296
Total number of images      1000         320         160        1480
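
For reference, the sketch below shows one common way such a directory-based split can be loaded in Keras with flow_from_directory(); the directory layout, paths, and batch size here are assumptions for illustration and are not taken from the manuscript's actual source code.

from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Rescale pixel values to [0, 1]; one sub-folder per variety is assumed
datagen = ImageDataGenerator(rescale=1.0 / 255)

train_dataset = datagen.flow_from_directory(
    'dataset/Training',              # hypothetical path
    target_size=(200, 200),          # matches the 200x200x3 model input
    batch_size=5,
    class_mode='categorical')

validation = datagen.flow_from_directory(
    'dataset/Validation',            # hypothetical path
    target_size=(200, 200),
    batch_size=5,
    class_mode='categorical')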

Image Pre-Processing

Image cropping was performed to extract the region of interest. The 200

sample images for each of the five sugarcane varieties were used to feed the model for

the training phase. In the testing phase, 32 images for each of the five varieties were used.

The cropped images were resized to a resolution of 280x280 pixels and were then reshaped to 200x200x3, since the images fed to the model have to be in color (three channels). Data

augmentation techniques such as zooming and rotation were applied to increase the

number of images used in training the model and eliminate the possible overfitting.

This was done by making transformed instances of the images belonging to the same

class as the original images. Figure 2 shows the sample augmentation results.

Figure 2. Sample image augmentation (A) original image and (B) augmented image
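
A minimal sketch of this resizing and reshaping step is shown below, assuming a single cropped stalk image stored as a JPEG file (the file name is a placeholder); for simplicity, the image is resized directly to the model's 200x200x3 input shape.

import numpy as np
from PIL import Image

# Hypothetical cropped region-of-interest image
img = Image.open('cropped_stalk.jpg').convert('RGB')

# Resize to the input resolution expected by the network
img = img.resize((200, 200))

# Convert to a float array, rescale to [0, 1], and add a batch dimension
arr = np.asarray(img, dtype='float32') / 255.0
arr = arr.reshape((1, 200, 200, 3))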

System Architecture Design

The sugarcane variety classifier is composed of two phases, as shown in

Figure 3, where both phases begin with the acquisition of images of the five

sugarcane varieties. In the training phase, the images were augmented before they

were used in training, and the convolutional layers of the model extracted the features. The trained model was then converted to a JSON file to be deployed in an Android application. Before the

JSON model was deployed, the H5 model was tested first using the testing dataset

acquired. Using Cordova plugins, the application was able to connect to the local

storage to read cropped files and connect to the back camera to take images and crop

them in real-time.

Figure 3. System architecture


16

Assembling the Convolutional Neural Network Model

Figure 4 visualizes the total structure of the convolutional neural network

model. It mainly consists of four convolutional layers that provide the extraction of

the features from an RGB input image, followed by a max-pooling layer, followed by

a batch normalization layer, and finally, a drop-out layer. Max-pooling layer is a

pooling procedure that chooses the most significant element from the feature map

region covered by the filter. The normalization layer is a layer that allows the

network's layers to learn more effectively. A drop-out layer is a mask that nullifies

some neurons' contributions to the following layer while leaving all others unchanged,

and a fully connected layer acts as the model’s classifier. The first two convolutional

layers have 32 filters with a 3x3 kernel matrix, and the size for the pooling layer is

2x2, while the third and fourth convolutional layers have 64 filters with the same kernel size and pooling size as the previous layers.

A Rectified Linear Unit (ReLU) activation was used after each convolutional

layer. A dense layer with 64 nodes was added, having ReLU as the activation,

followed by another layer of batch normalization and drop-out before adding the final

dense layer with five nodes or the output layer with Softmax as the activation. The

model then used the last layer to predict which of the five classes could obtain the

highest probability. The summary of the model shown in Table 2 displays the shape

and the total number of parameters learned during the training process.
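
A minimal sketch of how a model with this structure could be assembled in Keras is given below; the drop-out rates used here are assumptions chosen for illustration, since the manuscript does not state them.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Conv2D, MaxPooling2D, BatchNormalization,
                                     Dropout, Flatten, Dense)

model = Sequential()

# Four convolutional blocks: Conv (ReLU) -> MaxPool -> BatchNorm -> Dropout
for filters in (32, 32, 64, 64):
    if not model.layers:
        model.add(Conv2D(filters, (3, 3), activation='relu',
                         input_shape=(200, 200, 3)))
    else:
        model.add(Conv2D(filters, (3, 3), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(BatchNormalization())
    model.add(Dropout(0.25))   # drop-out rate assumed for illustration

# Classifier head: dense layer, normalization, drop-out, then softmax output
model.add(Flatten())
model.add(Dense(64, activation='relu'))
model.add(BatchNormalization())
model.add(Dropout(0.5))        # drop-out rate assumed for illustration
model.add(Dense(5, activation='softmax'))   # one node per sugarcane variety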

Figure 4. CNN model architecture

Table 2. Summary of the model



The computational formula used in computing the number of parameters in each layer is:

Number of parameters = weights + bias = [i × (f × f) × o] + o    (1)

where i is the number of input maps (or channels), f is the filter size (its side length), and o is the number of output maps (or channels; this is also defined by how many filters are used). Based on a 200x200x3 (200 wide, 200 high, 3 color channels)

input image, the first output shape of 198 comes from the first convolutional layer with a 3x3 filter. In this layer, 3 feature maps were used as input, while 32 feature maps were produced as output, resulting in 32 distinct 3x3x3 filters. The

pooling layer, on the other hand, simply replaces a 2x2 neighborhood with its greatest

value, leaving no learnable parameters. The number of parameters for the fully

connected layer is the product of the input and output maps, plus an additional bias for

every output. For this reason, the two dense layers have 409664 and 325 learning

parameters, respectively. Figures 5, 6, 7 and 8 show the feature maps that were made

from the various convolutional layers, pooling, normalization and dropout in the

network during training.
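
As an illustration of Equation 1 (the arithmetic below is not stated explicitly in the manuscript but follows directly from the architecture described above), the first convolutional layer takes 3 input channels, uses a 3x3 filter, and produces 32 output channels, giving [3 × (3 × 3) × 32] + 32 = 896 parameters. Likewise, the first dense layer receives the flattened output of the last pooling block (10 × 10 × 64 = 6,400 values), giving (6,400 × 64) + 64 = 409,664 parameters, and the output layer gives (64 × 5) + 5 = 325 parameters, matching the two dense-layer totals quoted above.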

Figure 5. First feature map


19

Figure 6. Second feature map

Figure 7. Third feature map

Figure 8. Fourth feature map

The figures above illustrate the features extracted from each convolutional

layer. The pooling layer selects the best features extracted from the convolutional

layer; then, it is normalized in the batch normalization layer, and its redundancies are

reduced in the drop-out layer. This procedure is then repeated up to the final

convolutional layer before it is flattened and passed to the final dense layers.
20

Model Training

The model was compiled with Adam as the optimizer since its optimization

time is faster and requires fewer parameters for tuning, which balances the learning

rate throughout the learning process. In addition, categorical cross-entropy was used as the loss function and accuracy as the metric to monitor the performance on the

validation set during training. The training and validation datasets were divided into

batches of five images before loading into the model, making 180 training steps per epoch and 12 validation steps per epoch. The model was fitted for 1000 epochs, with

the validation data set to monitor the performance of the model throughout the

training. Model Checkpoint callbacks were used in Keras to save the best model based

on the highest level of validation accuracy.
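
For reference, the compilation step described above corresponds to a call of the following form in Keras; this is a minimal sketch consistent with the optimizer, loss, and metric named in the text.

# `model` is the Sequential model assembled earlier
# Adam optimizer, categorical cross-entropy loss, accuracy as the monitored metric
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])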

Anaconda and Ionic Installation

Before model training, the necessary software installation and environment

were made. These include Anaconda IDE, Tensorflow, and other essential packages.

Furthermore, the Ionic Framework was used to develop the mobile application, in

which a command-line installation was applied, ensuring that the Node Package

Manager was installed first.

Android Studio, Android SDK, Android Target, Java JDK, and Gradle

The Ionic framework requires specific packages to deploy the application on

the Android Operating System. These include the Android SDK, wherein Android

Studio Chipmunk | 2021.2.1 Patch 1 for Windows 64-bit must also be installed to

fully utilize its functionalities. Android Target was installed within Android Studio to

specify which version of Android OS our application would be installed on. Android

development also requires the Java JDK, which can be downloaded from Oracle’s

official website. Gradle ensures that we can generate an APK from the .java and .xml

files, which were also downloaded from their website. It was later included in the

Environment Path as it is required to utilize its features thoroughly.

Deploying the Model to the Mobile Device

Ionic Framework and Android Studio were used to develop the hybrid mobile

application for this study. This is because of its ability to be deployed on other

platforms and the ability to use the device’s hardware components (Rivera, 2020).

Tensorflow JS was then installed using Node Package Manager to utilize the

Tensorflow libraries in developing the application. After the model was trained, it was

then converted into a JSON file so that it could be used and tested first in a browser

prior to its actual deployment.

User Interface Design

A user interface was developed using the Ionic Framework and Cordova

plugins for users to have easier access to the application. The application provides two

options to the users upon opening the app: selecting cropped images from the

mobile’s local storage and capturing new sugarcane images using the camera. Figure

9 illustrates the use case diagram for the application.


22

Figure 9. Use case diagram of the sugarcane variety classifier

System Testing and Evaluation

After the training phase, the model was tested using the testing dataset

acquired during data collection. The results were plotted in a confusion matrix,

providing its true-positive (TP), true-negative (TN), false-positive (FP), and false-

negative (FN), as shown in Table 3.

Also, performance metrics such as accuracy, precision, recall, and F1 score

were computed to determine the model's performance in classifying sugarcane

varieties.

Table 3. Confusion matrix

                                   Actual Values
                            Positive (1)    Negative (0)
Predicted    Positive (1)        TP              FP
Values       Negative (0)        FN              TN

The formulas used to compute the Accuracy, Precision, Recall and F1’s Score are shown below:

Accuracy = (tp + tn) / (tp + fp + tn + fn)    (2)

Precision = tp / (tp + fp)    (3)

Recall = tp / (tp + fn)    (4)

F1’s Score = (2 × p × r) / (p + r)    (5)

The formula used to compute the Cohen’s Kappa (K) for a multi-class classifier (Grandini et al., 2020) is shown as:

K = (c × s − Σk pk × tk) / (s² − Σk pk × tk)    (6)

Where:

• c = Σk Ckk, the total number of elements correctly predicted

• s = Σi Σj Cij, the total number of elements

• pk = Σi Cki, the number of times that class k was predicted (column total)

• tk = Σi Cik, the number of times that class k truly occurs (row total)

After computing the value for Cohen’s Kappa (K), it was then interpreted using

the values in Table 4.

Table 4. Cohen's Kappa equivalent value

Value of Kappa     Level of Agreement     Percentage of Data that are Reliable
0–0.20             None                   0–4%
0.21–0.39          Minimal                4–15%
0.40–0.59          Weak                   15–35%
0.60–0.79          Moderate               35–63%
0.80–0.90          Strong                 64–81%
Above 0.90         Almost Perfect         82–100%
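
As a cross-check on Equations 2 through 6, the same metrics can also be computed with scikit-learn; the sketch below is illustrative only, and the label lists y_true and y_pred are placeholders standing in for the actual and predicted classes of the test set.

from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, cohen_kappa_score, confusion_matrix)

# Placeholder labels; in practice these come from the test set and the model
y_true = ['VMC 84-524', 'PS 1', 'VMC 95-06', 'PHIL 94-0913']
y_pred = ['VMC 84-524', 'PS 1', 'VMC 84-524', 'PHIL 94-0913']

print(confusion_matrix(y_true, y_pred))
print('Accuracy :', accuracy_score(y_true, y_pred))
print('Precision:', precision_score(y_true, y_pred, average='macro', zero_division=0))
print('Recall   :', recall_score(y_true, y_pred, average='macro', zero_division=0))
print('F1 Score :', f1_score(y_true, y_pred, average='macro', zero_division=0))
print('Kappa    :', cohen_kappa_score(y_true, y_pred))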
CHAPTER IV

RESULTS AND DISCUSSION

Data Augmentation

Since the dataset was too small to get a significant result, overfitting was

encountered during the model's initial stages of the training process. To overcome this

problem, data augmentation using Keras’s ImageDataGenerator class was done to

reproduce the training images to a total of 900 images, which were then used for

training (see Listing 1 and 2).

Listing 1. Data Augmentation


from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen_test = ImageDataGenerator(rescale = 1.0/255)

datagen = ImageDataGenerator(rotation_range=30,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.3,
zoom_range=0.1,
horizontal_flip=True,
samplewise_std_normalization=True,
fill_mode='nearest',
brightness_range=[0.5, 0.9])

Listing 2. Augmented image reproduction


import os
from tensorflow.keras.preprocessing.image import load_img, img_to_array

training_dir = 'C:/Users/Darwin/Documents/CNN project Thesis/Version 2/Training'
augmented_dir = 'C:/Users/Darwin/Documents/CNN project Thesis/Augmented_Training_Images'

for imageFolder in os.listdir(training_dir):
    # Create the output folder for this class if it does not exist yet
    os.makedirs(augmented_dir + '/' + imageFolder, exist_ok=True)
    for file in os.listdir(training_dir + '/' + imageFolder + '/'):
        img = load_img(training_dir + '/' + imageFolder + '/' + file)
        x = img_to_array(img)
        x = x.reshape((1,) + x.shape)  # add a batch dimension; 3 channels because the image is RGB
        i = 0
        # Generate and save five augmented copies of each training image
        for batch in datagen.flow(x, save_prefix='New2-' + imageFolder, batch_size=1,
                                  save_to_dir=augmented_dir + '/' + imageFolder,
                                  save_format='jpg'):
            i += 1
            print("Looks good")
            if i >= 5:
                print("done!")
                break

Designing the Model

After the data was augmented, the model was structured accordingly. After

each convolutional layer, a pooling layer was applied, followed by batch

normalization and a drop-out layer as shown in Figure 10. The model is saved based

on the highest validation accuracy it could get based on the current epoch.

Furthermore, the model was fitted and compiled along with the steps per epoch, the

callbacks used, and the history was plotted to show the graph for the model’s training

loss and accuracy. After training, the model was saved to the local storage as an H5

file (see Listing 3).


27

Figure 10. Model architecture

Listing 3. Model callbacks and history


filepath = 'C:/Users/Darwin/Documents/CNN project Thesis/Version 1/Models/Saved_Model_SCVC-epoch-{epoch:02d}.h5'
checkpoint = ModelCheckpoint(filepath,
                             monitor = 'val_accuracy',
                             verbose = 1,
                             save_best_only = True,
                             mode = 'max',
                             period = 1)

reduce_lr = ReduceLROnPlateau(monitor = 'val_loss', factor = 0.2, patience = 3,
                              mode = 'max', min_lr = 0.0001)
tensorboard_callback = TensorBoard(log_dir = '.\logs')
history = model.fit(train_dataset,
steps_per_epoch = len(train_dataset)//batch_Size,
validation_data = validation,
validation_steps = len(validation)//batch_Size,
epochs = 1000,
verbose = 1,
callbacks = [checkpoint, reduce_lr, tensorboard_callback])
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='upper left')
plt.show()
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='upper left')
plt.show()
model.save('C:/Users/Darwin/Documents/CNN project Thesis/Version
1/Models/June1_SCVC_model_v3.h5')
print("model saved!")

Results of Model Training

The model was trained for 1000 epochs. Table 5 shows the epochs at which the model was saved based on the highest validation accuracy achieved. The results indicate that the

highest rating based on validation accuracy was at epoch 806, with an accuracy of

0.8833.
29

Table 5. Model accuracy and loss on training and validation

Training Validation

Epoch No. Loss Accuracy Loss Accuracy


1 2.1606 0.333 2.7833 0.200
3 1.5413 0.4711 2.5565 0.3167
9 1.2845 0.5656 2.4731 0.3667
22 0.9536 0.6511 1.3136 0.4667
25 0.9323 0.6578 1.2325 0.5667
27 0.8692 0.6789 1.2385 0.5833
114 0.4514 0.8389 0.9665 0.7333
115 0.4359 0.8467 0.9269 0.8167
529 0.1647 0.9400 0.5928 0.8500
806 0.1153 0.9567 0.6754 0.8833
996 0.0452 0.9844 0.9495 0.7667

The illustrations of the training and validation loss and accuracy are shown in

Figures 11 and 12. The model's performance remained slightly comparable for

training and validation datasets, and it was able to learn, as shown in the line plot.

Figure 11. Model accuracy


30

Figure 12. Model loss

Testing Phase

Before the model was deployed to a mobile device, it was tested first with 32

test images per variety. Keras’s model function predict() was applied to classify 160 test images, and the results were saved as a CSV file (see Listing 4).

Listing 4. Model classification on test images


#output in probabilities
pred = test_model.predict(test_generator, verbose=1,
steps=num_test_samples/batch_size)

#convert the output into class number


predicted_class_indices=np.argmax(pred,axis=1)

#name of the classes


labels = (test_generator.class_indices)
labels = dict((v,k) for k,v in labels.items())
predictions = [labels[k] for k in predicted_class_indices]
filenames=test_generator.filenames
results=pd.DataFrame({"Filename":filenames,
"Predictions":predictions})
export_csv = results.to_csv (r'C:/Users/Darwin/Documents/CNN project
Thesis/Version 1/Testing-7.csv', index = None, header=True)
31

Model Conversion

After the model was tested, it was then converted into a JSON file using the

Tensorflow.JS function tfjs.converters.save_keras_model() for it to be deployed in a

hybrid mobile application (see Listing 5).

Listing 5. Model conversion to JSON File


import tensorflowjs as tfjs
from tensorflow.keras.models import load_model
model = load_model('C:/Users/Darwin/Documents/CNN project Thesis/Version
1/Models/June1_SCVC_model_v3.h5')
tfjs.converters.save_keras_model(model,'June1_SCVC_model_3.JSON')

Performance Evaluation

Further verification was done on the model using a new set of un-augmented

testing datasets. Comparing the actual and predicted outcome values, Table 6 shows

the number of correctly and incorrectly predicted test images.

Table 6. Confusion matrix for classification

                                         Classified
Class              VMC 84-524  VMC 86-550  PHIL 94-0913  VMC 95-06  PS 1  Actual Total
VMC 84-524             26           0            0           21       2        49
VMC 86-550              0          28            0            0       0        28
PHIL 94-0913            4           3           32            0      15        54
VMC 95-06               2           1            0           11       1        15
PS 1                    0           0            0            0      14        14
Classified Total       32          32           32           32      32       160

The data shown in Table 6 exhibits that the model misidentified 15 samples of PS 1 as Phil 94-0913 and 21 samples of VMC 95-06 as VMC 84-524. This would mean that the model found little distinction between PS 1 and Phil 94-0913 and between VMC 95-06 and VMC 84-524. This is a likely scenario since Phil 94-0913 has almost similar color features to PS 1, and VMC 84-524 to VMC 95-06.

The model’s performance was further evaluated by its accuracy, precision,

recall, and F1 score. The average of each of these metrics was computed to figure out

how well the model worked. Table 7 shows the results of the calculations.

Table 7. Results of performance evaluation

Class Accuracy Precision Recall F1 Score


VMC 84- 524 84.65% 81.25% 53.06% 64.84%
VMC 86-550 97.56% 87.5% 100% 93%
PHIL 94-0913 83.33% 100% 100% 100%
VMC 95-06 86.48% 34.37% 73.33% 46.80%
PS1 89.88% 43.75% 100% 60.86%
Average 88.38% 69.374% 85.278% 76.508%

Table 7 shows that the model had a high recall and a low precision. It means

the classifier model has misclassified some varieties, as shown in Table 6.

Misclassification among some of the varieties would also mean that this was due to

the similarities in the color features of some varieties. The model's average F1 score

was 76.5 percent, indicating that it performed reasonably well when classifying varieties despite its low precision. Since there is only a minimal difference between the evaluated overall average accuracy of 88.38% and the validation accuracy obtained during training, which was 88.33%, the model has good generalization ability.

The Cohen’s Kappa value was computed after computing the basic metrics

and the value achieved was 0.60, indicating a Moderate level of agreement. Its

percentage of reliability only ranges from 35% - 63%. The Moderate level of

agreement could be caused by the misclassifications of the model between the

varieties VMC 84-524, VMC 95-06, PHIL 94-0913 and PS1.
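
As a rough hand check of this figure using Equation 6 and the counts in Table 6: the correctly predicted elements sum to c = 26 + 28 + 32 + 11 + 14 = 111, the total number of elements is s = 160, and since every classified-column total is 32, the term Σ pk × tk = 32 × (49 + 28 + 54 + 15 + 14) = 5,120. Substituting gives K = (111 × 160 − 5,120) / (160² − 5,120) ≈ 0.62, which lies in the same Moderate band (0.60–0.79) as the reported value of 0.60.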

Hybrid Mobile Application Development

Using the Ionic Framework, a hybrid application was developed that can be

deployed on an Android device written in the web programming language Javascript.

Upon the opening of the application, the converted model is loaded immediately using

the Tensorflow.JS loadLayersModel() function (see Listing 6). The interface

comprises only two buttons: the “Select from file” button and the “Capture from camera” button. The first button opens the gallery and allows users to choose from cropped images of sugarcane varieties. The code below shows the procedure for reading files from the gallery using FileReader() in JavaScript. The selected image is loaded into an image tag, then passed to the previewImage() function, where it is rescaled (see Listing 7). The classify() function then receives the image

where it checks first if the model is loaded and if there is a valid image loaded. The

image is then turned into a tensor using the Tensorflow.JS function

tf.browser.fromPixels(). The tensor is then resized based on the shape input to the

model when it was being trained (see Listing 8).

Listing 6. Loading the JSON model


ionViewDidEnter() {
  console.log(this.inputFileElement);

  tf.loadLayersModel('/assets/model/SCVC_J1_3/model.json').then(model => {
    this.model = model;
    this.toast("Model was loaded successfully");
  })
  .catch(e => {
    console.log(e);
    this.toast("Error loading model");
  });
}

Listing 7. Image rescale


previewImage(src: string) {
this.imagePreview.nativeElement.src = src;
this.hasValidImage = true;

const newImg = new Image();

newImg.onload = () => {
const height = newImg.height;
const width = newImg.width;

const scale = 1.0/255;

this.canvas.nativeElement.width = this.imagePreview.nativeElement.naturalWidth
* scale;
this.canvas.nativeElement.height =
this.imagePreview.nativeElement.naturalHeight * scale;

const context = this.canvas.nativeElement.getContext('2d');


context.drawImage(this.imagePreview.nativeElement, 0, 0,
this.canvas.nativeElement.width, this.canvas.nativeElement.height);

    // console.log(this.canvas.nativeElement)

    this.classify().then(() => {

    });
  };

  newImg.src = this.imagePreview.nativeElement.src;
}

Listing 8. Image conversion to a tensor


if (this.hasValidImage) {

let tensor = tf.browser.fromPixels(this.canvas.nativeElement, 3)

.resizeNearestNeighbor([200, 200]) // change the image size


.toFloat()
.expandDims()
.div(255);

Following the conversion of the image to a tensor, it was then fed to the model

for the classification process. Then, it was mapped from an array returning the class

name of the variety (see Listing 9).

Listing 9. Classifications
let predictions = this.model.predict(tensor) as any;

const d = await predictions.data() as [];


console.log("predictions: " +d);

let top5 = Array.from(d)


.map(function (p, i) { // this is Array.map
return {
probability: p,
className: TARGET_CLASSES[i] // we are selecting the value from the obj
};
})
.sort(function (a: any, b: any) {
return b.probability - a.probability;
}).slice(0, 5) as any[];

console.log(top5);
this.result = top5[0].className;

} else {
// No file selected
this.toast("Please select an image first.");
}
If the user wants to use the camera, the Cordova Plugin Camera is used for the

application to connect to the device’s camera (see Listing 10). After the user takes the

image using the camera, it will be cropped using the Cordova Plugin Crop (see

Listing 11). The cropped image will then be passed to the readImageFile() function to

be converted to base64 format and remove unnecessary characters in its file name (see

Listing 12). The converted cropped image was then passed again to the

previewImage() function for it to be rescaled, converted, and classified.

Listing 10. Capture image through camera


captureImage() {
this.camera.getPicture({

quality: 100,
destinationType: this.camera.DestinationType.FILE_URI,
encodingType: this.camera.EncodingType.JPEG,
mediaType: this.camera.MediaType.PICTURE

}).then((fileUri) => {

this.cropImageFile(fileUri)

}, (err) => {
console.log("Error in captureImage(): ", err);
});
}

Listing 11. Crop image


cropImageFile(fileUri: string) {
const cropOpt: CropOptions = {
quality: 100,
targetHeight: 250,
targetWidth: 250
}

this.crop.crop(fileUri, cropOpt)
.then(
fileUri => {

this.readImageFile(fileUri);

},
error => {
console.log("Error in cropImageFile(): ", error)
}

);
}
Listing 12. Cropped image processing
readImageFile(fileUri: string) {
let splitPath = fileUri.split('/');
let imgName = splitPath[splitPath.length - 1];
if (imgName.indexOf("?") > -1)
imgName = imgName.split("?")[0]
let fileUrl = fileUri.split(imgName)[0];
this.filePlugin.readAsDataURL(fileUrl, imgName).then((base64Cropped: string)
=> {
this.previewImage(base64Cropped)

}, (error: any) => {


console.log("Error in readImageFile(): ", error);

}).catch(e => {
console.log("Catched error in readImageFile(): ", e);
})
}

Figure 13 shows the application logo and its title from the mobile phone on

which it was installed.

Sugarcane Variety Classifier

Figure 13. Application logo and title


38

Figure 14 shows the home page of the variety classifier application. It only

has two buttons: "Select Image" and "Capture From Camera." The JSON model is

immediately loaded upon opening the application, as displayed in the figure.

Figure 14. Home page


39

The device’s local storage is opened upon tapping the “Select from Gallery”

button, and the user will have to choose a pre-cropped image, as shown in Figure 15.

It will then be loaded to the home page and fed to the JSON model to make

classifications.

Figure 15. Choose image from gallery


40

After the cropped image is loaded and classified, the application immediately

displays the result of the classification in the home page, as shown in Figure 16.


Figure 16. Classifying image file


41

CHAPTER V

SUMMARY, CONCLUSIONS, AND RECOMMENDATIONS

Summary

Sugarcane farmers’ traditional methods of ocular classification of sugarcane

varieties are inadequate for obtaining reliable results. Most of the time, sugarcane experts are not present to classify varieties. Thus, this study was developed

to implement a hybrid mobile application that would classify sugarcane varieties in

real time with significant results using Convolutional Neural Network that would help

local farmers and researchers and resolve the problem of the low availability of

experts.

The model was trained with augmented training images for 1000 epochs and

was then tested using test images before being converted into a JSON file that could

be deployed into a hybrid mobile application. The mobile application was then

designed and developed, which allows the users to either select pre-cropped images or

capture and crop images using the camera, and then the model will display on the

home page the result of the classification.

Basic metrics and Cohen’s Kappa value were used to evaluate the model's

performance before converting it into a JSON file. The results showed a high recall

but low precision, indicating that the model had many misclassifications. The model

misclassified PHIL 94-0913 as PS1 and VMC 84-524 as VMC 95-06; that could

probably be due to the similarities between the color features and the textures.
42

Conclusions

Based on the Cohen’s Kappa value obtained, the percentage of the data that is reliable ranges only from 35% to 63%. The moderate level of agreement

indicates that the misclassifications of VMC 84-524 to VMC 95-06 and PHIL 94-

0913 to PS 1 were due to their similarities in color features. But with the high

accuracy rating, the model could classify sugarcane varieties based on color features,

and the mobile application could correctly classify sugarcane varieties in real-time.

Recommendations

To make the classifier more effective and increase the accuracy and precision,

recommendations were given as follows:

• Add datasets of other sugarcane varieties to avoid data augmentation.

• Implement a new CNN model for a multi-class classifier rather than the multiple binary classifiers implemented in this study.

• Employ more advanced pre-processing techniques (i.e., Eigen vectors, Principal Component Analysis (PCA)) to reduce the feature size and improve training performance.

• Perform real-time segmentation to separate the region of interest (i.e., the sugarcane sample) from the environmental background when taking an image sample.

• Use a different color space for the classifier due to the general unreliability of RGB on color intensity differences.

• Evaluate field testing of the application (preferably at different times of the day) for improvements that can be made when environmental variabilities are taken into account.

• Add an online database to store the samples taken from the field as additional training and testing data.


44

LITERATURE CITED

Alencastre-Miranda, M., Johnson, R. M., & Krebs, H. I. (2020). Convolutional Neural


Networks and Transfer Learning for Quality Inspection of Different Sugarcane
Varieties. IEEE Transactions on Industrial Informatics, 787 - 794.
Apan, A., Held, A., Phinn, S.R., & Markely, J. (2004). Spectral discrimination and
classification of sugarcane varieties using EO-1 hyperion hyperspectral
imagery.
Rodríguez, F. J., Garcia, A., Pardo, P. J., & Chávez, F. (2017). Study and
classification of plum varieties using image analysis and deep learning
techniques. Progress in Artificial Intelligence.
Barrera, J. A., & Montañez, N. (2020). Automated Abaca Fiber Grade Classification
Using Convolution Neural Network (CNN). Advances in Science, Technology
and Engineering Systems Journal, 2017-213.
Caballero, B., Finglas, P., & Trugo, L. (2003). Encyclopedia of Food Sciences and
Nutrition. Academic Press.

Camden, R. K. (2016). Apache Cordova in Action. Shelter Island: Manning


Publications Co.

Chaudhary, P. (2018). IONIC FRAMEWORK. International Research Journal of


Engineering and Technology (IRJET), 3181-3185.

Grandini, M., Bagli, E., & Visani, G. (2020). Metrics for Multi-Class Classification:
an Overview. ArXiv, abs/2008.05756.

Hossin, M. a. (2015). A REVIEW ON EVALUATION METRICS FOR DATA


CLASSIFICATION EVALUATIONS. International Journal of Data
Mining & Knowledge Management Process (IJDKP).

Oraño, J. F. V., Maravillas, E. A., & Aliac, C. J. G. (2019). Jackfruit Fruit Damage
Classification using Convolutional Neural Network. 2019 IEEE 11th
International Conference on Humanoid, Nanotechnology, Information
Technology, Communication and Control, Environment, and Management
(HNICEM), 1-6. doi: 10.1109/HNICEM48295.2019.9073341.

Major Non-Food and Industrial Crops Quarterly Bulletin, April-June 2021. (2021). Philippine Statistics Authority. Retrieved from https://psa.gov.ph/non-food/sugarcane
Naseeven, R. (1988). Sugarcane tops as animal feed. In: ‘Sugarcane as Feed’. FAO
Animal Production and Health Paper No.72, 106-122.

Neto, S. A., Lopes, D. C., Toledo, J. V., & Zolnier, S. (2018). Classification of
sugarcane varieties using visible/near infrared spectral reflectance of stalks
and multivariate methods. The Journal of Agricultural Science, 1-10.
Pasion, E. A., & Lagarteja, J. G. (2019). Android-Based Rice Variety Classifier (Arvac) Using Convolutional Neural Network. International Journal of Scientific & Technology Research, 481-485.
Qing, X., Tao, J., Gao, S.-j., Zhengxian, Y., & Nengsen, W. (2018). Characteristics
and Applications of Sugar Cane Bagasse Ash Waste in Cementitious
Materials. Materials, 3-5.
Rahmad, M., Asrul, L., Kuswinanti, T., & Musa, Y. (2020). The Effect of Sugarcane
Bagasse and Filter Mud Compost Fertilizer and Manure Application on the
Growth and Production of Sugarcane. International Journal of Scientific
Research in Science and Technology, 388-345.
Real, J. P., Aquino, A., & Marquez, J. A. (2019). Olive-Fruit Variety Classification by
Means of Image Processing and Convolutional Neural Networks. IEEE
Access, 629-641.
Rivera, J. D. (2020). Practical TensorFlow.js Deep Learning in Web App
Development. New York: Springer Science+Business Media New York.

Sabzi, S., Abbaspour-Gilandeh, Y., & Garcia-Mateos, G. (2017). A new approach for visual identification of orange varieties using neural networks and metaheuristic algorithms. Information Processing in Agriculture, 162-172.
Santos, F., Borém, A., & Caldas, C. (2015). Sugarcane: Agricultural Production, Bioenergy, and Ethanol. Brazil: Academic Press.
Sugarcane High Yielding Varieties. (2006). Los Baños, Laguna: Philippine Council for Agriculture, Forestry and Natural Resources Research and Development (PCARRD).
Tabada, W. M., & Beltran, J. G. (2019). Mango Variety Recognizer Using Image Processing and Artificial Neural Network. Philippine Computing Science Congress.

Unajan, M. C., Tabada, W. M., Gerardo, B. D., & Fajardo, A. (2017). Sweet Potato
(Ipomoea batatas) Variety Recognizer Using. Manila International Conference
on “Trends in Engineering and Technology” (MTET-17). Manila: Universal
Researchers UAE.
Zaccone, G. (2016). Getting Started with TensorFlow. Birmingham: Packt Publishing Ltd.

Zhang, J., Cheng, F., & Dai, L. (2021). Corn seed variety classification based on
hyperspectral reflectance imaging and deep convolutional neural network.
Journal of Food Measurement and Characterization, 484–494.

APPENDICES

Appendix A. Source Code for the Hybrid Mobile Application

home.page.ts

import {Component, ElementRef, OnInit, ViewChild} from '@angular/core';


import {ToastController} from '@ionic/angular';
import * as tf from '@tensorflow/tfjs';
import {TARGET_CLASSES} from './target_classes';
import {Camera, CameraOptions} from '@awesome-cordova-plugins/camera/ngx';
import {PhotoService} from 'src/app/services/photo.service';
import {Crop, CropOptions} from '@ionic-native/crop/ngx';
import {File} from '@ionic-native/file/ngx';
import { tensor } from '@tensorflow/tfjs';

@Component({
selector: 'app-home',
templateUrl: './home.page.html',
styleUrls: ['./home.page.scss'],
})
export class HomePage implements OnInit {

hasValidImage = false;

imgURL;
croppedimg: string;
clickedImage: string;

model = null;

@ViewChild("canvas") canvas: ElementRef = null;

file: any = null;

@ViewChild('inputFileElement', {static: true}) inputFileElement: ElementRef;


@ViewChild('imagePreview', {static: true}) imagePreview: ElementRef;

result: string = null;

constructor(private toastService: ToastController,


private camera: Camera,
public photoService: PhotoService,
private crop: Crop,
private filePlugin: File) {
}
ngOnInit() {
}

captureImage() {
this.camera.getPicture({

quality: 100,
destinationType: this.camera.DestinationType.FILE_URI,
encodingType: this.camera.EncodingType.JPEG,
mediaType: this.camera.MediaType.PICTURE

}).then((fileUri) => {

this.cropImageFile(fileUri)

}, (err) => {
console.log("Error in captureImage(): ", err);
});
}

cropImageFile(fileUri: string) {
const cropOpt: CropOptions = {
quality: 100,
targetHeight: 250,
targetWidth: 250
}

this.crop.crop(fileUri, cropOpt)
.then(
fileUri => {

this.readImageFile(fileUri);

},
error => {
console.log("Error in cropImageFile(): ", error)
}
);
}

readImageFile(fileUri: string) {
let splitPath = fileUri.split('/');
let imgName = splitPath[splitPath.length - 1];

if (imgName.indexOf("?") > -1)
  imgName = imgName.split("?")[0];

let fileUrl = fileUri.split(imgName)[0];

this.filePlugin.readAsDataURL(fileUrl, imgName).then((base64Cropped: string)


=> {

this.previewImage(base64Cropped)

}, (error: any) => {


console.log("Error in readImageFile(): ", error);

}).catch(e => {
console.log("Catched error in readImageFile(): ", e);
})
}

ionViewDidEnter() {
console.log(this.inputFileElement);
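// Load the TensorFlow.js layers model shipped in the app assets (the trained
// Keras model from Appendix B, converted to the TF.js layers format).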

tf.loadLayersModel('/assets/model/SCVC_J1_3/model.json').then(model => {
this.model = model;

this.toast("Model was loaded successfully");


})
.catch(e => {
console.log(e);
this.toast("Error loading model");
});
}
previewImage(src: string) {
this.imagePreview.nativeElement.src = src;
this.hasValidImage = true;

const newImg = new Image();

newImg.onload = () => {
const height = newImg.height;
const width = newImg.width;

const scale = 1.0/255;



this.canvas.nativeElement.width = this.imagePreview.nativeElement.naturalWidth
* scale;
this.canvas.nativeElement.height =
this.imagePreview.nativeElement.naturalHeight * scale;

const context = this.canvas.nativeElement.getContext('2d');


context.drawImage(this.imagePreview.nativeElement, 0, 0,
this.canvas.nativeElement.width, this.canvas.nativeElement.height);

// console.log(this.canvas.nativeElement)

this.classify().then(() => {

});
};

newImg.src = this.imagePreview.nativeElement.src;
}

setFile() {
const input: HTMLInputElement = this.inputFileElement.nativeElement;
this.hasValidImage = false;

this.result = null;
this.file = null;

if (input.files.length > 0) {

const file = input.files[0];

this.file = file;

const reader = new FileReader();


reader.onload = () => {

this.previewImage(reader.result as string)

};
reader.readAsDataURL(file);

} else {

this.imagePreview.nativeElement.src = "/assets/images/placeholder.jpg";
}
}

triggerInputFile() {
this.inputFileElement.nativeElement.click();
}

triggerModel() {
alert("this is a model woohoo");
}

async classify() {

if (this.model === null) {


this.toast("No model loaded");
return;
}

if (this.hasValidImage) {
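// Convert the canvas pixels to a tensor, resize to the 200x200 input size used
// during training, and rescale to [0, 1] to match the 1/255 preprocessing in Appendix B.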

let tensor = tf.browser.fromPixels(this.canvas.nativeElement, 3)

.resizeNearestNeighbor([200, 200]) // change the image size


.toFloat()
.expandDims()
.div(255)

let predictions = this.model.predict(tensor) as any;

const d = await predictions.data() as [];


console.log("predictions: " +d);

let top5 = Array.from(d)


.map(function (p, i) { // this is Array.map
return {
probability: p,
className: TARGET_CLASSES[i] // we are selecting the value from the obj
};
})
.sort(function (a: any, b: any) {
return b.probability - a.probability;
}).slice(0, 5) as any[];

console.log(top5);
this.result = top5[0].className;

} else {
// No file selected
this.toast("Please select an image first.");
}
}
toast(message: string) {

this.toastService.create({
message: message,
duration: 3000
}).then(toast => {
toast.present();
});
}
}

home.page.html

<ion-header [translucent]="true">
<ion-toolbar>
<ion-buttons slot="start">
<ion-menu-button></ion-menu-button>
</ion-buttons>
<ion-title>Home</ion-title>

</ion-toolbar>
</ion-header>

<ion-content>

<img #imagePreview alt="" src="/assets/images/placeholder.jpg" class="selected-image">

<div id="filename" *ngIf="file">{{ file.name }}</div>

<input type="file" #inputFileElement (change)="setFile()" accept="image/*">



<img src="{{imgURL}}">

<ion-button round (click)="triggerInputFile()" color="secondary" full >Select


Image</ion-button>
<ion-button (click)="captureImage()">
Capture from Camera
</ion-button>

<img [src]="clickedImage" />

<p id="result">
<span *ngIf="result">Predicted Variety: {{ result }}</span>
<span *ngIf="result == null && file">Predicting...</span>
</p>

<canvas #canvas></canvas>

</ion-content>

Appendix B. Source code for Model training and testing

import sys
import warnings
import numpy as np
import cv2
import tensorflow as tf

import os, os.path


from tensorflow.keras.preprocessing.image import ImageDataGenerator, array_to_img, img_to_array, load_img
from tensorflow.keras.preprocessing import image
import os
from matplotlib import pyplot as plt
from tensorflow.keras.callbacks import ModelCheckpoint
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.metrics import categorical_crossentropy
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau, TensorBoard
from tensorflow.keras.models import Model
from tensorflow.keras import layers
from tensorflow.keras.layers import Dropout
from tensorflow.keras.layers import Flatten
from tensorflow.keras.layers import Activation
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Input
from tensorflow.keras.layers import BatchNormalization
from tensorflow.keras.layers import Conv2D
from tensorflow.keras.layers import MaxPooling2D
from tensorflow.keras.layers import AveragePooling2D
from tensorflow.keras.layers import GlobalAveragePooling2D
from tensorflow.keras.layers import GlobalMaxPooling2D

from matplotlib import pyplot as plt

from pathlib import Path

kernel_size = (3,3)
pool_size = (2,2)
first_filters = 32
second_filters = 64

dropout_conv = 0.5
dropout_dense = 0.5
img_size = 200
batch_Size = 5
CLASSES = 5

train = ImageDataGenerator(rescale = 1.0/255)


train_dataset = train.flow_from_directory('C:/Users/Darwin/Documents/CNN project Thesis/Version 2/Training/Augmented/',
target_size = (img_size,img_size),
batch_size = batch_Size,
class_mode = 'categorical')

test= ImageDataGenerator(rescale = 1.0/255)


test_dataset = test.flow_from_directory('C:/Users/Darwin/Documents/CNN project Thesis/Version 1/Testing')

validation_data = ImageDataGenerator(rescale = 1.0/255)


validation = validation_data.flow_from_directory('C:/Users/Darwin/Documents/CNN project Thesis/Version 1/Validation',
target_size = (img_size,img_size),
batch_size = batch_Size,
class_mode = 'categorical')

#1st TRY FOR MODEL SEQUENTIAL API


model = Sequential()
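
# Architecture: four Conv2D blocks (each followed by max pooling, batch normalization,
# and dropout), then a flatten layer, a 64-unit dense layer, and a 5-way softmax output
# (one unit per sugarcane variety).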

model.add(Conv2D(first_filters,kernel_size, activation = 'relu',input_shape =


(img_size,img_size,3)))
model.add(MaxPooling2D(pool_size = pool_size))
model.add(BatchNormalization())
model.add(Dropout(dropout_conv))

model.add(Conv2D(first_filters,kernel_size, activation = 'relu'))


model.add(MaxPooling2D(pool_size = pool_size))
model.add(BatchNormalization())
model.add(Dropout(dropout_conv))

model.add(Conv2D(second_filters,kernel_size, activation = 'relu'))



model.add(MaxPooling2D(pool_size = pool_size))
model.add(BatchNormalization())
model.add(Dropout(dropout_conv))

model.add(Conv2D(second_filters,kernel_size, activation = 'relu'))


model.add(MaxPooling2D(pool_size = pool_size))
model.add(BatchNormalization())
model.add(Dropout(dropout_conv))

model.add(Flatten())

model.add(Dense(64,activation = 'relu'))
model.add(BatchNormalization())
model.add(Dropout(dropout_conv))
model.add(Dense(5,activation = 'softmax'))

model.compile(Adam(lr = 0.0001), loss = 'categorical_crossentropy', metrics =


['accuracy'])
model.summary()

filepath = 'C:/Users/Darwin/Documents/CNN project Thesis/Version 1/Models/Saved_Model_SCVC-epoch-{epoch:02d}.h5'
checkpoint = ModelCheckpoint(filepath,
monitor = 'val_accuracy',
verbose = 1,
save_best_only = True ,
mode = 'max',
period = 1)
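# The checkpoint callback keeps only the weights with the best validation
# accuracy seen so far (save_best_only=True with mode='max' on val_accuracy).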

reduce_lr = ReduceLROnPlateau(monitor = 'val_loss', factor = 0.2, patience = 3,


mode = 'max', min_lr = 0.0001)
tensorboard_callback = TensorBoard(log_dir = '.\logs')

history = model.fit(train_dataset,
steps_per_epoch = len(train_dataset)//batch_Size,
validation_data = validation,
validation_steps = len(validation)//batch_Size,
epochs = 1000,

verbose = 1,
callbacks = [checkpoint, reduce_lr, tensorboard_callback])

plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='upper left')
plt.show()

plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='upper left')
plt.show()

model.save('C:/Users/Darwin/Documents/CNN project Thesis/Version 1/Models/June1_SCVC_model_v3.h5')
print("model saved!")

from tensorflow.keras.preprocessing.image import ImageDataGenerator


from tensorflow.keras.models import load_model
from tensorflow.keras.preprocessing import image
from tensorflow.keras.preprocessing.image import img_to_array, load_img
import numpy as np
import pandas as pd

# dimensions of our images.


num_test_samples = 160
img_width, img_height = 200, 200
batch_size = 5

test_data_dir = 'C:/Users/Darwin/Documents/CNN project Thesis/Version 1/Testing'
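
# Use the same 1/255 rescaling as in training so the test images are preprocessed identically.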

test_datagen = ImageDataGenerator(rescale=1./255)

test_generator = test_datagen.flow_from_directory(
test_data_dir,
target_size=(img_width, img_height),
color_mode="rgb",
batch_size=batch_size,
class_mode=None,

shuffle=False
)

test_generator.reset()

test_model = load_model('C:/Users/Darwin/Documents/CNN project Thesis/Version 1/Models/June1_SCVC_model_v3.h5')

#output in probabilities
pred = test_model.predict(test_generator, verbose=1,
steps=num_test_samples/batch_size)

#convert the output into class number


predicted_class_indices=np.argmax(pred,axis=1)

#name of the classes


labels = (test_generator.class_indices)
labels = dict((v,k) for k,v in labels.items())
predictions = [labels[k] for k in predicted_class_indices]
filenames=test_generator.filenames
results=pd.DataFrame({"Filename":filenames,
"Predictions":predictions})
export_csv = results.to_csv(r'C:/Users/Darwin/Documents/CNN project Thesis/Version 1/Testing-7.csv', index = None, header=True)

Appendix C. Sugarcane Sample Images

VMC 84-524

VMC 86-550

Phil 94-0913

VMC 95-06

PS 1

Appendix D. Test Results

Filename Classifications
84 524\Test-84 524 (1).jpg 84 524
84 524\Test-84 524 (10).jpg 84 524
84 524\Test-84 524 (11).jpg 84 524
84 524\Test-84 524 (12).jpg 84 524
84 524\Test-84 524 (13).jpg 84 524
84 524\Test-84 524 (14).jpg 84 524
84 524\Test-84 524 (18).jpg 84 524
84 524\Test-84 524 (19).jpg 84 524
84 524\Test-84 524 (2).jpg 84 524
84 524\Test-84 524 (21).jpg 84 524
84 524\Test-84 524 (23).jpg 84 524
84 524\Test-84 524 (24).jpg 84 524
84 524\Test-84 524 (26).jpg 84 524
84 524\Test-84 524 (27).jpg 84 524
84 524\Test-84 524 (28).jpg 84 524
84 524\Test-84 524 (29).jpg 84 524
84 524\Test-84 524 (3).jpg 84 524
84 524\Test-84 524 (30).jpg 84 524
84 524\Test-84 524 (31).jpg 84 524
84 524\Test-84 524 (32).jpg 84 524
84 524\Test-84 524 (4).jpg 84 524
84 524\Test-84 524 (5).jpg 84 524
84 524\Test-84 524 (6).jpg 84 524
84 524\Test-84 524 (7).jpg 84 524
84 524\Test-84 524 (8).jpg 84 524
84 524\Test-84 524 (9).jpg 84 524
84 524\Test-84 524 (16).jpg 94 0913
84 524\Test-84 524 (20).jpg 94 0913
84 524\Test-84 524 (22).jpg 94 0913
84 524\Test-84 524 (25).jpg 94 0913
84 524\Test-84 524 (15).jpg 95 06
84 524\Test-84 524 (17).jpg 95 06
86 550\Test - 86 550 (1).jpg 84 524
86 550\Test - 86 550 (18).jpg 84 524
86 550\Test - 86 550 (7).jpg 84 524
86 550\Test - 86 550 (10).jpg 86 550
86 550\Test - 86 550 (15).jpg 86 550
86 550\Test - 86 550 (16).jpg 86 550
86 550\Test - 86 550 (17).jpg 86 550
86 550\Test - 86 550 (19).jpg 86 550
86 550\Test - 86 550 (2).jpg 86 550

86 550\Test - 86 550 (20).jpg 86 550


86 550\Test - 86 550 (21).jpg 86 550
86 550\Test - 86 550 (22).jpg 86 550
86 550\Test - 86 550 (23).jpg 86 550
86 550\Test - 86 550 (24).jpg 86 550
86 550\Test - 86 550 (25).jpg 86 550
86 550\Test - 86 550 (26).jpg 86 550
86 550\Test - 86 550 (27).jpg 86 550
86 550\Test - 86 550 (28).jpg 86 550
86 550\Test - 86 550 (29).jpg 86 550
86 550\Test - 86 550 (3).jpg 86 550
86 550\Test - 86 550 (30).jpg 86 550
86 550\Test - 86 550 (31).jpg 86 550
86 550\Test - 86 550 (32).jpg 86 550
86 550\Test - 86 550 (4).jpg 86 550
86 550\Test - 86 550 (5).jpg 86 550
86 550\Test - 86 550 (6).jpg 86 550
86 550\Test - 86 550 (8).jpg 86 550
86 550\Test - 86 550 (9).jpg 86 550
86 550\Test - 86 550 (11).jpg 94 0913
86 550\Test - 86 550 (13).jpg 94 0913
86 550\Test - 86 550 (14).jpg 94 0913
86 550\Test - 86 550 (12).jpg 95 06
94 0913\Test - 94 0913 (1).jpg 94 0913
94 0913\Test - 94 0913 (10).jpg 94 0913
94 0913\Test - 94 0913 (11).jpg 94 0913
94 0913\Test - 94 0913 (12).jpg 94 0913
94 0913\Test - 94 0913 (13).jpg 94 0913
94 0913\Test - 94 0913 (14).jpg 94 0913
94 0913\Test - 94 0913 (15).jpg 94 0913
94 0913\Test - 94 0913 (16).jpg 94 0913
94 0913\Test - 94 0913 (17).jpg 94 0913
94 0913\Test - 94 0913 (18).jpg 94 0913
94 0913\Test - 94 0913 (19).jpg 94 0913
94 0913\Test - 94 0913 (2).jpg 94 0913
94 0913\Test - 94 0913 (20).jpg 94 0913
94 0913\Test - 94 0913 (21).jpg 94 0913
94 0913\Test - 94 0913 (22).jpg 94 0913
94 0913\Test - 94 0913 (23).jpg 94 0913
94 0913\Test - 94 0913 (24).jpg 94 0913
94 0913\Test - 94 0913 (25).jpg 94 0913
94 0913\Test - 94 0913 (26).jpg 94 0913
94 0913\Test - 94 0913 (27).jpg 94 0913
94 0913\Test - 94 0913 (28).jpg 94 0913

94 0913\Test - 94 0913 (29).jpg 94 0913


94 0913\Test - 94 0913 (3).jpg 94 0913
94 0913\Test - 94 0913 (30).jpg 94 0913
94 0913\Test - 94 0913 (31).jpg 94 0913
94 0913\Test - 94 0913 (32).jpg 94 0913
94 0913\Test - 94 0913 (4).jpg 94 0913
94 0913\Test - 94 0913 (5).jpg 94 0913
94 0913\Test - 94 0913 (6).jpg 94 0913
94 0913\Test - 94 0913 (7).jpg 94 0913
94 0913\Test - 94 0913 (8).jpg 94 0913
94 0913\Test - 94 0913 (9).jpg 94 0913
95 06\Test - 95 06 (11).jpg 84 524
95 06\Test - 95 06 (12).jpg 84 524
95 06\Test - 95 06 (13).jpg 84 524
95 06\Test - 95 06 (15).jpg 84 524
95 06\Test - 95 06 (16).jpg 84 524
95 06\Test - 95 06 (17).jpg 84 524
95 06\Test - 95 06 (18).jpg 84 524
95 06\Test - 95 06 (19).jpg 84 524
95 06\Test - 95 06 (2).jpg 84 524
95 06\Test - 95 06 (20).jpg 84 524
95 06\Test - 95 06 (22).jpg 84 524
95 06\Test - 95 06 (24).jpg 84 524
95 06\Test - 95 06 (27).jpg 84 524
95 06\Test - 95 06 (28).jpg 84 524
95 06\Test - 95 06 (29).jpg 84 524
95 06\Test - 95 06 (30).jpg 84 524
95 06\Test - 95 06 (31).jpg 84 524
95 06\Test - 95 06 (4).jpg 84 524
95 06\Test - 95 06 (5).jpg 84 524
95 06\Test - 95 06 (8).jpg 84 524
95 06\Test - 95 06 (9).jpg 84 524
95 06\Test - 95 06 (1).jpg 95 06
95 06\Test - 95 06 (10).jpg 95 06
95 06\Test - 95 06 (14).jpg 95 06
95 06\Test - 95 06 (21).jpg 95 06
95 06\Test - 95 06 (23).jpg 95 06
95 06\Test - 95 06 (25).jpg 95 06
95 06\Test - 95 06 (26).jpg 95 06
95 06\Test - 95 06 (3).jpg 95 06
95 06\Test - 95 06 (32).jpg 95 06
95 06\Test - 95 06 (6).jpg 95 06
95 06\Test - 95 06 (7).jpg 95 06
PS 1\Test PS 1 (20).jpg 84 524

PS 1\Test PS 1 (32).jpg 84 524


PS 1\Test PS 1 (13).jpg 94 0913
PS 1\Test PS 1 (14).jpg 94 0913
PS 1\Test PS 1 (15).jpg 94 0913
PS 1\Test PS 1 (17).jpg 94 0913
PS 1\Test PS 1 (18).jpg 94 0913
PS 1\Test PS 1 (19).jpg 94 0913
PS 1\Test PS 1 (21).jpg 94 0913
PS 1\Test PS 1 (23).jpg 94 0913
PS 1\Test PS 1 (24).jpg 94 0913
PS 1\Test PS 1 (26).jpg 94 0913
PS 1\Test PS 1 (28).jpg 94 0913
PS 1\Test PS 1 (29).jpg 94 0913
PS 1\Test PS 1 (30).jpg 94 0913
PS 1\Test PS 1 (31).jpg 94 0913
PS 1\Test PS 1 (7).jpg 94 0913
PS 1\Test PS 1 (4).jpg 95 06
PS 1\Test PS 1 (1).jpg PS 1
PS 1\Test PS 1 (10).jpg PS 1
PS 1\Test PS 1 (11).jpg PS 1
PS 1\Test PS 1 (12).jpg PS 1
PS 1\Test PS 1 (16).jpg PS 1
PS 1\Test PS 1 (2).jpg PS 1
PS 1\Test PS 1 (22).jpg PS 1
PS 1\Test PS 1 (25).jpg PS 1
PS 1\Test PS 1 (27).jpg PS 1
PS 1\Test PS 1 (3).jpg PS 1
PS 1\Test PS 1 (5).jpg PS 1
PS 1\Test PS 1 (6).jpg PS 1
PS 1\Test PS 1 (8).jpg PS 1
PS 1\Test PS 1 (9).jpg PS 1

Appendix E. Definition of Terms

Apache Cordova - is an open-source framework that allows applications developed using web languages such as JavaScript, HTML, and CSS to run on multiple platforms. The Cordova framework also supports access to hardware features such as battery status, camera, dialogs, and vibration.

Convolutional Neural Network - is a class of neural networks that specializes in processing data with a grid-like topology, such as an image. A digital image is a binary representation of visual data: a series of pixels arranged in a grid, whose values denote how bright each pixel should be and what color it should have.

Hybrid Mobile Application – is essentially a web app that has been put in a native app shell. Once it is downloaded from an app store and installed locally, the shell is able to connect to whatever capabilities the mobile platform provides through a browser that is embedded in the app.

Ionic - is an open source UI toolkit for building performant, high-quality mobile and

desktop apps using web technologies — HTML, CSS, and JavaScript — with

integrations for popular frameworks like Angular, React, and Vue.

Tensorflow - is an end-to-end open source platform for machine learning. It has a

comprehensive, flexible ecosystem of tools, libraries and community resources that

lets researchers push the state-of-the-art in ML and developers easily build and deploy

ML powered applications.

Tensorflow.JS - is a library for machine learning in JavaScript. It allows developers to build ML models in JavaScript and to run ML directly in the browser or in Node.js.



Appendix F. Curriculum Vitae

DARWIN G. CABARRUBIAS
Junior Software Developer

Profile
Open-minded and reliable graduate from Visayas State University with a BS in Computer Science, specializing in Web Development and Machine Learning with Image Processing. Skilled in:
 Tensorflow
 Laravel Framework
 Javascript
 Mikrotik Networking

EDUCATION
Visayas State University, August 2017 – July 2022
National Heroes Institute, June 2010 – March 2014
San Agustin Schoolyard Montessori, June 2004 – March 2010

WORK EXPERIENCE
Visayas State University - Data Encoder, August 2021 – April 2022
Prepare documents needed for the AACCUP Accreditation for the Department of Computer Science and Technology

Contact
PHONE: 09750743223
EMAIL: cabarrubiasdarwin98@gmail.com
ADDRESS: Brgy. Montebello, Kananga, Leyte, 6531

Certificates
 MikroTik Certified Network Associate
 MikroTik Certified Routing Engineer
 MikroTik Certified User Management Engineer
 PhilNits IT Passport Certification

References
 Prof. Magdalene C. Unajan, Department Head, DCST, VSU, Baybay City, Leyte; Email: magdalene.unajan@vsu.edu.ph
 Prof. Jude B. Rola, Associate Professor I, DCST, VSU, Baybay City, Leyte; Email: jude.rola@vsu.edu.ph
