
Republic of the Philippines

BOHOL ISLAND STATE UNIVERSITY


Calape Campus
San Isidro, Calape, Bohol

CORAL REEFS IDENTIFIER APP


USING
TENSORFLOW

MIKE C. ALBELDA
BSCS 4-A
INTRODUCTION

Problem Background

The ecosystem is a biological community that sustains life and supports our needs and income. One of the most important components of the ecosystem is the coral reef. A coral reef is an underwater ecosystem characterized by reef-building corals. Reefs are formed by colonies of coral polyps held together by calcium carbonate. Coral reefs provide an important ecosystem for life underwater, protect coastal areas by reducing the power of waves hitting the coast, and provide a crucial source of income for millions of people.

The study of coral reefs is important because it provides a clear, scientifically testable record of climatic events over the past million years or so. This includes records of recent major storms and human impacts that are preserved in changing coral growth patterns.

One of the most common causes of coral damage is careless tourism, and increasing tourism is among the major drivers of coral reef destruction. The following factors all contribute to coral reef damage: uncontrolled building and irresponsible business operations, increased discharge of wastewater, careless tourist behavior, and boats and other vessels used for recreational activities.

When corals are bombarded by stressors such as poor water quality, a marine heatwave, or overfishing, their energy is directed at survival instead of reproduction. To ensure that people and wildlife can continue relying on the lifesaving services coral reefs provide, we need to reduce threats to reefs and keep corals healthy so that they continue to reproduce in strategic areas around the world.

Image processing is a way to convert an image to a digital form and perform certain operations on it to obtain an enhanced image or extract useful information from it. It is a type of signal processing in which the input is an image, such as a video frame or photograph, and the output may be an image or features associated with that image. An image processing system usually treats images as two-dimensional signals and applies established signal processing methods to them. Image processing is one of the fastest-growing technologies today, with uses in various business sectors, and it forms a core research area within the engineering and computer science disciplines as well.

By means of this mobile application, it will be easier for the user to recognize or track down the objects that need to be studied. The output from each layer is computed by matrix multiplication of the output of the previous layer with the learnable weights of that layer. The application should therefore be responsive enough that the objects to be processed are recognized as they should be, so as to come up with accurate results.

Literature Review

There have been recent advances in machine learning approaches in efforts to study the ecosystem, and they have received considerable attention from academia, businesses, and governments. This section summarizes some of the available research on coral reef detection using various machine learning algorithms.

Since human activities threaten coral reefs around the world, a great deal of research has been done, and several authors have presented deep learning models toward efficient coral detection.

Based on the study conducted by Mehta et al. [32], they used a Support Vector Machine (SVM) classifier directly on the spatial information of coral images, without any preprocessing step, to get a binary (coral/non-coral) output. They justified using raw data as classifier input because features and descriptors for coral textures are very difficult to obtain. They achieved a classification accuracy of 95%, but this method degrades with any change in underwater illumination.

This finding is also supported by Pizarro et al. [33], who introduced object recognition for coarse habitat classification (first experiment, 8 classes: coralline rubble, hard coral, hard coral + soft coral + coralline rubble, Halimeda + hard coral + coralline rubble, macroalgae, rhodolite, sponges, and un-colonized; second experiment, 4 classes: reef + coarse sand, coarse sand, reef, and fine sand). They employed the same color features as Marcos but different texture features, based on bag-of-words using the scale-invariant feature transform (SIFT), with an extra saliency feature of Gabor-filter response. This method can only be used with single-object images (one class per entire image) [34].

In the study conducted by Marcos et al. [35], they developed an automated rapid classification (5 classes: coral, sand, rubble, dead coral, and dead coral with algae) for underwater reef video. They used color features based on a histogram of normalized chromaticity coordinates (NCC) and texture features from the local binary patterns (LBP) descriptor, and fed those features into a linear discriminant analysis (LDA) classifier. When more classes are used [34], the method generates inaccurate classifications.

This finding is supported by Johnson-Roberson et al., who showed an approach for the autonomous segmentation and classification of corals through the combination of visual and acoustic data. The approach generates 60 visual features: 12 are the mean and standard deviation of each of the RGB and HSV channels separately, and the remaining 48 are obtained by convolving the region with Gabor wavelets at six scales and four orientations and taking the mean and standard deviation of the results for each scale and orientation combination. An SVM is then selected for the classification task; preprocessing is mainly required to allocate foreground regions before feature extraction.
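As an illustration of the per-channel mean and standard deviation color features described above, the following pure-Python sketch computes them for a tiny hypothetical patch; the pixel values and patch size are made up, and a real system would run this over both RGB and HSV channels of each region:

```python
import math

def channel_stats(pixels):
    """Compute per-channel mean and standard deviation color features.

    `pixels` is a list of (R, G, B) tuples from one image region,
    mirroring the mean/std statistics computed per color channel.
    """
    features = []
    for c in range(3):  # one pass per color channel
        values = [p[c] for p in pixels]
        mean = sum(values) / len(values)
        var = sum((v - mean) ** 2 for v in values) / len(values)
        features.extend([mean, math.sqrt(var)])
    return features

# Hypothetical 2x2 patch of RGB pixels
patch = [(10, 20, 30), (30, 20, 10), (10, 20, 30), (30, 20, 10)]
print(channel_stats(patch))  # [20.0, 10.0, 20.0, 0.0, 20.0, 10.0]
```

The resulting six numbers (mean and standard deviation for each of the three channels) would form part of a longer feature vector fed to the classifier.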

According to Purser et al. [37], they investigated machine-learning algorithms for the automated detection of cold-water coral habitats, computing 15 differently oriented and spaced gratings to produce a set of 30 texture features, and compared a computer vision system with three manual methods: 15-point quadrat, 100-point quadrat, and frame mapping. Stokes and Deane [38] described an automated algorithm for the classification of coral reef benthic organisms and substrates that divides the image into blocks, finds the distance between those blocks, and identifies the blocks based on color features (a normalized histogram of RGB color space) and texture features (radial samples of the 2D discrete cosine transform), using an inconvenient distance metric (with manually assigned parameters) after unsuccessful results with the well-known Mahalanobis distance.
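Since the Mahalanobis distance comes up in the passage above, here is a minimal sketch of how such a distance is computed, assuming a diagonal covariance matrix and hypothetical feature values (these are not the actual features or parameters used in that study):

```python
import math

def mahalanobis_diag(x, mean, variances):
    """Mahalanobis distance between a feature vector and a class mean,
    assuming a diagonal covariance matrix (one variance per feature).

    With unit variances this reduces to plain Euclidean distance; a
    poorly estimated covariance is what can make the metric unreliable.
    """
    return math.sqrt(sum((xi - mi) ** 2 / vi
                         for xi, mi, vi in zip(x, mean, variances)))

# Hypothetical color-feature vectors for a block and a class prototype
block = [0.6, 0.2, 0.2]
coral_mean = [0.5, 0.3, 0.2]
variances = [0.01, 0.01, 0.04]
print(round(mahalanobis_diag(block, coral_mean, variances), 3))  # 1.414
```

Dividing by the per-feature variance means that features the class varies widely on contribute less to the distance than features the class is tight on.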


Beijbom et al. [14] also introduced the Moorea Labeled Corals (MLC) dataset and proposed a multiscale classification algorithm for automatic annotation. They applied color stretching to each channel individually in the L*a*b* color space as a preprocessing step, then used the Maximum Response (MR) filter bank approach (rotation invariant) for color and texture features, followed by a Support Vector Machine (SVM) classifier with a Radial Basis Function (RBF) kernel. This method seeks all possibilities (time-consuming) to find a suitable patch size around selected image points for species identification.
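Beijbom et al. stretch each color channel individually as a preprocessing step. As a rough illustration only, the sketch below applies a simple per-channel min-max contrast stretch to hypothetical values; the original work performs its stretching in the L*a*b* color space, not on raw channels like this:

```python
def stretch_channel(values, lo=0.0, hi=255.0):
    """Linearly stretch one color channel so its minimum maps to `lo`
    and its maximum maps to `hi` (a simple per-channel contrast stretch)."""
    vmin, vmax = min(values), max(values)
    if vmax == vmin:            # flat channel: nothing to stretch
        return [lo for _ in values]
    scale = (hi - lo) / (vmax - vmin)
    return [lo + (v - vmin) * scale for v in values]

# Hypothetical dim underwater channel values
channel = [50, 60, 70, 80]
print(stretch_channel(channel))  # [0.0, 85.0, 170.0, 255.0]
```

Stretching each channel independently spreads the compressed underwater color range over the full scale before features are extracted.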

In the study by Schoening et al. [39], they introduced a semi-automated detection system for deep-sea coral images, which first applies a preprocessing step for illumination correction and then extracts high-dimensional features at labeled pixels based on the MPEG-7 standard (four descriptors for color features, three for texture, and ten for structure and motion). After that, a set of successive support vector machines is applied alongside a thresholding post-processing step. Although the features used are generic enough for any application, this leads to high sensitivity to cluttered underwater backgrounds. Stough [40] presented an automatic binary segmentation technique for live Staghorn coral, in which regional intensity quantile functions (QF) are used as color features and the Scale-Invariant Feature Transform (SIFT) provides texture features, followed by a linear SVM as the classifier; this supervised technique is highly noise-sensitive (due to SIFT).

The study of Shihavuddin implemented a hybrid variable-scheme classification framework for benthic coral reef images or mosaics. This framework uses a combination of the following features: completed local binary pattern (CLBP), grey level co-occurrence matrix (GLCM), Gabor filter response, and opponent angle and hue channel color histograms. It also combines the following classifiers: k-nearest neighbor (KNN), neural network (NN), support vector machine (SVM), and probability density-weighted mean distance (PDWMD), alongside middleware procedures for better result enhancement. However, this framework only works sufficiently with small-patched images (background information has a high negative impact).

Rather than depending on human-crafted features (please see Table 3.1) to get a proper coral classification, the proposed work lets the feature mapping be done automatically by deep convolutional neural networks regardless of the underwater environment conditions. By feeding the network new images, it can learn and adapt the constructed feature maps with respect to the desired class outputs.

The researcher used the Convolutional Neural Network (ConvNet/CNN) algorithm to produce image-processing outputs, specifically for image recognition. A Convolutional Neural Network is a deep learning algorithm that can take in an input image, assign importance (learnable weights and biases) to various aspects/objects in the image, and differentiate one from the other. The preprocessing required in a ConvNet is much lower compared with other classification algorithms.

A Convolutional Neural Network (CNN) is a subset of deep learning and neural networks most commonly used to analyze visual imagery. Compared with other image classification algorithms, convolutional neural networks use minimal preprocessing, meaning the network learns the filters that are typically hand-engineered in other systems.

The output from each layer is computed by matrix multiplication of the output of the previous layer with the learnable weights of that layer, followed by the addition of learnable biases and an activation function, which makes the network nonlinear.

Artificial neural networks have long been popular in machine learning. More recently, they have received renewed interest, since networks with many layers (often referred to as deep networks) have been shown to solve many practical tasks with accuracy levels not yet reached by other machine learning approaches. In image analysis, convolutional neural networks have been particularly successful. The term refers to a class of neural networks with a specific architecture, where each so-called hidden layer typically has two distinct stages: the first stage is the result of a local convolution of the previous layer (the kernel has trainable weights), and the second stage is a max-pooling stage, where the number of units is significantly reduced by keeping only the maximum response of several units of the first stage. After several hidden layers, the final layer is typically a fully connected layer. It has a unit for each class that the network predicts, and each of those units receives input from all units of the previous layer.
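The layer computation described above (matrix multiplication with learnable weights, addition of biases, a nonlinear activation, and max pooling) can be sketched in pure Python as follows; the weights, biases, and inputs are illustrative values only, not taken from the trained model:

```python
def dense_layer(x, weights, biases):
    """One fully connected layer: matrix-multiply the previous layer's
    output with learnable weights, add learnable biases, then apply a
    nonlinear activation (ReLU here)."""
    out = []
    for row, b in zip(weights, biases):
        z = sum(w * xi for w, xi in zip(row, x)) + b
        out.append(max(0.0, z))  # ReLU keeps the network nonlinear
    return out

def max_pool_2x2(fmap):
    """2x2 max pooling: keep only the maximum response of each 2x2
    block of units, significantly reducing the number of units."""
    pooled = []
    for i in range(0, len(fmap), 2):
        row = []
        for j in range(0, len(fmap[0]), 2):
            row.append(max(fmap[i][j], fmap[i][j + 1],
                           fmap[i + 1][j], fmap[i + 1][j + 1]))
        pooled.append(row)
    return pooled

x = [1.0, -2.0]                        # previous layer's output
W = [[0.5, -1.0], [1.0, 1.0]]          # learnable weights
b = [0.5, 0.0]                         # learnable biases
print(dense_layer(x, W, b))            # [3.0, 0.0]
print(max_pool_2x2([[1, 3], [2, 0]]))  # [[3]]
```

In a real CNN these operations are repeated over many layers and channels; frameworks such as TensorFlow implement them efficiently, but the arithmetic is the same.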

SIGNIFICANCE OF THE STUDY

This study aims to provide a new way to identify types of coral reefs and will benefit the following:

Fisheries Students. This study will greatly help them understand how important our coral reefs are, since reefs are among the planet's most diverse and valuable ecosystems.
Department of Environment and Natural Resources (DENR). This study allows the researcher to contribute to government workers' efforts in protecting our seas. Our environment agency is in charge of developing and enforcing policies, guidelines, and laws relating to environmental management, as well as the management and conservation of the country's natural resources.

Fishermen. This study will help fishermen gain awareness of the importance of coral reefs, which are the primary shelter of fish. Moreover, such awareness should discourage dynamite fishing, overfishing, and other illegal forms of fishing that deplete underwater life.

Researcher. The researcher will have the opportunity to learn new things and

apply what they've learned to the development of this system.

Future Researchers. This study will serve as a model for future researchers

aiming to develop new approaches for detecting coral reefs.

Particularly, the researcher aims to answer the following questions:

1. Is the coral image recognition app feasible to use?

2. In what way does coral image recognition help the environment?

3. How well does the application detect an object?

Objectives

At the culmination of the study, the researcher aims to achieve the following:

1. To create an app that identifies corals and shows their information.

2. To implement the specified algorithm in the system as efficiently as possible.

3. To help prevent the destruction of endangered coral reefs.

CHAPTER II

METHODOLOGY

Design

The researcher's methods include descriptive and developmental technology research. The descriptive study approach entailed acquiring, analyzing, and interpreting data, and presenting the current system.

In the realm of instructional technology, developmental research is essential. The most prevalent sorts of developmental research are those in which the object's development process is described and analyzed and the end result is assessed.

Instrument

Mobile phone camera. The project needs a camera in most of its processes, and the most accessible camera is that of a mobile phone. The project will be installed on a mobile phone, and scanning will be done through its camera.

Laptop. The project will be programmed using a laptop.


Data Set

Coral classes

Alcyonacea, Colpophyllia, Dendronephthya, Dichocoenia, Diploria, Elkhorn coral, Eusmilia, Isophyllia, Manicina areolata, Meandrina meandrites, Montastraea cavernosa, Orbicella annularis, Plerogyra sinuosa, Porites astreoides, Pseudodiploria clivosa, Pseudodiploria strigosa, Siderastrea radians, Siderastrea siderea, Acropora


Data Collection

The images collected in the study are different classifications of coral species. There are 20 categories of corals, namely: Agaricia, Agaricia tenuifolia, Alcyonacea, Colpophyllia, Dendronephthya, Dichocoenia, Diploria, Elkhorn coral, Eusmilia, Manicina areolata, Meandrina meandrites, Montastraea cavernosa, Orbicella annularis, Plerogyra sinuosa, Porites astreoides, Pseudodiploria clivosa, Siderastrea radians, Siderastrea siderea, Sinularia crassa, and Acropora. The samples were gathered both by downloading images from the internet and by taking actual photo samples. The camera used is that of a Realme 7i, with the following settings: type: CMOS, aperture: f/1.8, ISO: 844, pixel size: 5 mm, and exposure time: 1/29 s.

The images were collected in two ways: the first is collecting pictures from the internet, and the other is capturing them with a mobile phone against a white background. The photos were captured in a well-lit area in which the coral is well presented. The coral samples came from the sea and were brought to the researcher's house. The images were put into folders according to their classification, and the researcher created a project in which he uploaded the different kinds of corals, labeling them independently. Using Google's Teachable Machine, the data were trained, stored, and exported as a TensorFlow Lite model.

Data Labelling
Teachable Machine by Google is used in the system. Teachable Machine is a web-based tool that makes creating machine learning models fast, easy, and accessible to everyone. The images were labelled individually as different classes. The researcher used this method because it is very convenient and user-friendly. There are 20 categories, each with 2-10 pictures per class: Agaricia, 5; Agaricia tenuifolia, 4; Alcyonacea, 5; Colpophyllia, 3; Dendronephthya, 5; Dichocoenia, 3; Diploria, 3; Elkhorn coral, 4; Eusmilia, 4; Manicina areolata, 4; Meandrina meandrites, 4; Montastraea cavernosa, 4; Orbicella annularis, 4; Plerogyra sinuosa, 4; Porites astreoides, 3; Pseudodiploria clivosa, 3; Siderastrea radians, 3; Siderastrea siderea, 4; Sinularia crassa, 10; Acropora, 10. Labeling the images in Teachable Machine is very simple: you just rename each class, and the images are automatically labelled once training is done.
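For reference, the per-class counts reported above can be tallied as follows (class spellings normalized; the counts themselves are taken directly from the text):

```python
# Per-class image counts as reported for the Teachable Machine project
class_counts = {
    "Agaricia": 5, "Agaricia tenuifolia": 4, "Alcyonacea": 5,
    "Colpophyllia": 3, "Dendronephthya": 5, "Dichocoenia": 3,
    "Diploria": 3, "Elkhorn coral": 4, "Eusmilia": 4,
    "Manicina areolata": 4, "Meandrina meandrites": 4,
    "Montastraea cavernosa": 4, "Orbicella annularis": 4,
    "Plerogyra sinuosa": 4, "Porites astreoides": 3,
    "Pseudodiploria clivosa": 3, "Siderastrea radians": 3,
    "Siderastrea siderea": 4, "Sinularia crassa": 10, "Acropora": 10,
}

print(len(class_counts))           # 20 classes
print(sum(class_counts.values()))  # 89 images in total
```

The tally shows the dataset is small and imbalanced (3 to 10 images per class), which is worth keeping in mind when judging the trained model's accuracy.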

Model Training

The model was trained with Teachable Machine by Google and then exported as a TensorFlow model. Teachable Machine uses TensorFlow.js, a library for machine learning in JavaScript, to train and run the models you make in your web browser. The number of epochs used in training was 500, the batch size was 16, and the image sizes were around 20 kilobytes. The repository used by the system is on GitHub: https://github.com/googlecreativelab/teachablemachine-community. The training took only 1 minute.

Model Implementation
The application was developed using Android Studio. It is applicable only to Android devices running Android 4.1 (Jelly Bean) and up. The Android SDK used is commandlinetools-win-8512546_latest.zip.
METHODOLOGY FRAMEWORK

Data Collection (Summary) → Data Labelling (Summary) → Model Training (Summary) → Model Conversion (Summary) → Model Implementation (Coral Recognition Android Application)

SUBSYSTEM

Model → Phone → Elkhorn Coral
Block Diagram

Training Phase: Recognition Target → Image Acquisition → Pre-Processing → Feature Extraction → Train Classifier

Testing Phase: Image Acquisition → Pre-Processing → Feature Extraction → Classification → Post-Processing

Fig. 1.0 Block Diagram


A brief overview of the application:

This block diagram explains the flow of an image recognition application. The system has a training phase and a testing phase. The training phase begins with the recognition target, where the object to be recognized is defined. Next is image acquisition, where the application acquires an image through scanning or uploading. The image then goes through pre-processing, in which all the information needed by the application is read. After that, the image's information is analyzed, and the necessary information is extracted through feature extraction before proceeding to the next level. At this point comes the train classifier stage, where the features acquired from the previous levels are used to train the application. After training, the images are classified and stored in the application in the classification stage. Post-processing is the final process, in which the application scans an image with the same features as the trained images. When an image is scanned, the application displays its information, specifically the name of the image and its specific details, which serve as the final result. The testing phase proceeds the same way as the training phase, excluding the first level (the recognition target) and the train classifier. The instructions are simple, so they can be followed easily. This diagram shows how the application's elements work together.
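To make the pipeline concrete, here is a heavily simplified, hypothetical sketch of the training and testing flow, where an "image" is just a list of pixel values, the only feature is mean brightness, and the classifier is nearest-mean. The real application uses a CNN trained in Teachable Machine, not this toy classifier:

```python
def acquire(source):
    """Image acquisition: here the 'image' is just a list of pixel values."""
    return source

def preprocess(img):
    """Pre-processing: normalize pixel values to the [0, 1] range."""
    return [p / 255.0 for p in img]

def extract_features(img):
    """Feature extraction: a single mean-brightness feature, for brevity."""
    return sum(img) / len(img)

def train_classifier(labeled):
    """Train classifier: store the mean feature per class (nearest-mean)."""
    model = {}
    for label, imgs in labeled.items():
        feats = [extract_features(preprocess(acquire(i))) for i in imgs]
        model[label] = sum(feats) / len(feats)
    return model

def classify(model, img):
    """Classification: pick the class whose stored feature is closest."""
    f = extract_features(preprocess(acquire(img)))
    return min(model, key=lambda label: abs(model[label] - f))

# Hypothetical training data: bright 'Elkhorn coral' patches vs. dark 'sand'
model = train_classifier({"Elkhorn coral": [[200, 220], [210, 230]],
                          "sand": [[40, 60], [50, 30]]})
print(classify(model, [205, 215]))  # Elkhorn coral
```

The testing phase reuses `acquire`, `preprocess`, and `extract_features` unchanged and only swaps `train_classifier` for `classify`, which mirrors the two columns of the block diagram.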
PROJECT MANAGEMENT

Materials and Resources

The following are the materials and resources needed to create the project. This

includes the materials that are needed for the circuit and documentation.

Material/Resource                              Quantity    Cost
Lenovo IdeaPad (3-14IIL05-81WD005TPH,          1           26,995.00 Php
i3-1005G1/4GB/1TB+)
Bond Paper                                     1 rim       182.00 Php
Internet                                                   400.00 Php
Folder                                         1           8.00 Php
Android Phone (Oppo A5s)                       1           6,999.00 Php
Total Cost                                                 34,575.00 Php

Table 1.0 Materials and Resources

Lenovo IdeaPad (3-14IIL05-81WD005TPH, i3-1005G1/4GB/1TB+) – The Lenovo IdeaPad is a Windows 10 laptop with a 15.60-inch display with a resolution of 1920x1080 pixels. It is powered by a Core i3 processor and comes with 8GB of RAM. The Lenovo IdeaPad packs 1TB of SSD storage. Graphics are powered by Intel HD Graphics 620. Connectivity options include Wi-Fi 802.11ac, and it comes with 3 USB ports (2 x USB 2.0, 1 x USB 3.0), an HDMI port, a multi-card slot, a headphone and mic combo jack, and an RJ45 (LAN) port.

Bond Paper - A durable paper that is suitable for electronic printing and use in office machines, including copiers and network and desktop printers.

Flash Drive (SanDisk 32GB) - The SanDisk Cruzer flash drive has become a popular portable media and data storage device. The storage capacity of current SanDisk Cruzer flash drives ranges from four to 32 gigabytes. They can be used for storing Word documents, PDF files, spreadsheets, and even huge video files, music downloads, and photos.

Internet - It is a system architecture that has revolutionized communications and

methods of commerce by allowing various computer networks around the world to

interconnect. The Internet is very useful to the system because it creates access to

information, communication and payment set by the client and administrator.

Android Phone (Oppo A5s) - It is fueled by a 4230 mAh non-removable Li-Polymer battery. The Oppo A5s (AX5s) runs on the Android 8.1 Oreo operating system with the ColorOS 5.2 user interface.

Operating System (Windows 10 64-bit) - Windows 10 is a Microsoft operating

system for personal computers, tablets, embedded devices and internet of things

devices. Windows 10 features built-in capabilities that allow corporate IT departments to

use mobile device management (MDM) software to secure and control devices running

the operating system.


Java – is a high-level, class-based, object-oriented programming language that

is designed to have as few implementation dependencies as possible. It is a general-

purpose programming language intended to let programmers write once, run anywhere

(WORA), meaning that compiled Java code can run on all platforms that support Java

without the need to recompile. Java applications are typically compiled to bytecode that

can run on any Java virtual machine (JVM) regardless of the underlying computer

architecture. The syntax of Java is similar to C and C++, but has fewer low-level

facilities than either of them.

Android Studio - is the official integrated development environment (IDE) for

Google's Android operating system, built on JetBrains' IntelliJ IDEA software and

designed specifically for Android development.[8] It is available for download on

Windows, macOS and Linux based operating systems or as a subscription-based

service in 2020. It is a replacement for the Eclipse Android Development Tools (E-ADT)

as the primary IDE for native Android application development.


GANTT CHART
Table 2.0 Gantt Chart

Tasks scheduled by week (weeks 1-4 of each month, October through May): Submission of Thesis 2 Title, Title Approval, Title Hearing, Planning, Designing, Development, Testing, Documentation, and Finalizing.
PROTOTYPE DESIGN

Preview 1.0

Preview 2.0
Preview 3.0

Preview 4.0

Preview 5.0
Preview 6.0

Preview 7.0
Preview 8.0

Preview 9.0
Preview 10.0

Preview 11.0

Preview 12.0

Preview 13.0
Preview 14.0
Preview 15.0

Preview 16.0

Preview 17.0
Preview 18.0

Preview 19.0
Program Flowchart

Coral Recognition: Start → Camera/Gallery → Capture Coral Image → Coral Detection → Recognized Image? If No, return to capturing an image; if Yes, show the coral's information and accuracy → End.

TESTING AND DATA GATHERING


Testing

The questionnaire used in the study tests the usability of the system developed by the researcher. For system usability, the researcher adopted the questionnaire by Lewis, J. R. (1995), the IBM Computer Usability Satisfaction Questionnaires: Psychometric Evaluation and Instructions for Use. See Appendix A for a copy of the questionnaire. This questionnaire contains 19 items or indicators. Respondents were asked to rate each item from 1 to 7 to express their degree of agreement or disagreement on the usability of the system.

Five questionnaires were distributed to employees of LGU-Loon, specifically those working in the Agriculture Office and other offices with knowledge of corals, and five to BISU-Calape students taking the BSF course.

Data Gathering

The researcher aimed to determine the usability of the system and application he developed. He asked permission from the admin staff of the mayor's office of LGU-Loon to introduce and present the system to the staff and distribute the questionnaire. Given the approval, the researcher gathered the staff, which included different employees. He presented his system and demonstrated its features in detail, such as recognizing corals using the camera or the gallery and a button that shows their information. He gave the staff ample time to explore the system on their own. After that, he asked for their perception of the usability of the system through the questionnaires. There were 10 respondents, chosen through purposive non-probability sampling.

CHAPTER 3

DATA PRESENTATION AND ANALYSIS

System Usability

The researcher adopted the system usability questionnaire of Lewis, J. R. (1995) in gathering data on the usability of the Coral Image Identification App. Respondents were asked to rate each item from 1 to 7, representing their degree of agreement or disagreement on the different indicators of system usability. The interpretative guide for the interpretation of the statistical results of the system usability is presented in Table 3.

Table 3. Interpretative Guide for System Usability

Scale   Range       Description                  Interpretation
7       6.4 - 7.0   Strongly Agree               The respondent strongly believed in the usability of the system.
6       5.5 - 6.3   Agree                        The respondent believed in the usability of the system.
5       4.6 - 5.4   Tend to Agree                The respondent tends to believe in the usability of the system.
4       3.7 - 4.5   Neither Agree nor Disagree   The respondent is neutral in trusting that the system is usable.
3       2.8 - 3.6   Tend to Disagree             The respondent tends not to trust that the system is usable.
2       1.9 - 2.7   Disagree                     The respondent believes that the system is not usable.
1       1.0 - 1.8   Strongly Disagree            The respondent strongly believed that the system is not usable.

Table 4. System Usability Result

N = 10

Indicators                                                                    Weighted Mean   Interpretation
1. Overall, I am satisfied with how easy it is to use this system.            6.5             Strongly Agree
2. It was simple to use this system.                                          6.5             Strongly Agree
3. I can effectively complete my work using this system.                      6.2             Agree
4. I am able to complete my work quickly using this system.                   6.2             Agree
5. I am able to efficiently complete my work using this system.               6.1             Agree
6. I feel comfortable using this system.                                      6.7             Strongly Agree
7. It was easy to learn to use this system.                                   6.5             Strongly Agree
8. I believe I became productive quickly using this system.                   6.1             Agree
9. The system gives error messages that clearly tell me how to fix problems.  5.7             Agree
10. Whenever I make a mistake using the system, I recover easily and quickly. 6.2             Agree
11. The information (such as online help, on-screen messages, and other
    documentation) provided with this system is clear.                        6.1             Agree
12. It is easy to find the information I need.                                6.7             Strongly Agree
13. The information provided for the system is easy to understand.            6.5             Strongly Agree
14. The information is effective in helping me complete the tasks and
    scenarios.                                                                6.0             Agree
15. The organization of information on the system screens is clear.           6.6             Strongly Agree
16. The interface of this system is pleasant.                                 6.2             Agree
17. I like using the interface of this system.                                6.3             Agree
18. This system has all the functions and capabilities I expect it to have.   6.6             Strongly Agree
19. Overall, I am satisfied with this system.                                 6.5             Strongly Agree
Composite Mean                                                                6.3             Agree
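The composite mean and its interpretation can be checked against the interpretative guide of Table 3 as follows, using the 19 weighted means from Table 4:

```python
def interpret(mean):
    """Map a weighted mean onto the interpretative guide of Table 3."""
    ranges = [(6.4, "Strongly Agree"), (5.5, "Agree"), (4.6, "Tend to Agree"),
              (3.7, "Neither Agree nor Disagree"), (2.8, "Tend to Disagree"),
              (1.9, "Disagree")]
    for lo, label in ranges:
        if mean >= lo:
            return label
    return "Strongly Disagree"

# Weighted means of the 19 indicators from Table 4
means = [6.5, 6.5, 6.2, 6.2, 6.1, 6.7, 6.5, 6.1, 5.7, 6.2,
         6.1, 6.7, 6.5, 6.0, 6.6, 6.2, 6.3, 6.6, 6.5]
composite = round(sum(means) / len(means), 1)
print(composite, interpret(composite))  # 6.3 Agree
```

The computed composite mean of 6.3 falls in the 5.5 - 6.3 range and is therefore interpreted as "Agree", matching the last row of Table 4.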

Table 4 presents the respondents' perception of the usability of the system. It shows that indicators 6 (I feel comfortable using this system) and 12 (It is easy to find the information I need) both got the highest weighted mean of 6.7, which indicates "Strongly Agree". Indicators 15 (The organization of information on the system screens is clear) and 18 (This system has all the functions and capabilities I expect it to have) got the second highest weighted mean of 6.6, which also indicates "Strongly Agree". Next, indicators 1 (Overall, I am satisfied with how easy it is to use this system), 2 (It was simple to use this system), 7 (It was easy to learn to use this system), 13 (The information provided for the system is easy to understand), and 19 (Overall, I am satisfied with this system) got the third highest weighted mean of 6.5, which indicates "Strongly Agree".

On the other hand, indicator 17 (I like using the interface of this system) got a weighted mean of 6.3, which indicates "Agree". Indicators 3 (I can effectively complete my work using this system), 4 (I am able to complete my work quickly using this system), 10 (Whenever I make a mistake using the system, I recover easily and quickly), and 16 (The interface of this system is pleasant) have a weighted mean of 6.2, which means "Agree". These are followed by indicators 5 (I am able to efficiently complete my work using this system), 8 (I believe I became productive quickly using this system), and 11 (The information (such as online help, on-screen messages, and other documentation) provided with this system is clear), which got a weighted mean of 6.1, also meaning "Agree". Indicator 14 (The information is effective in helping me complete the tasks and scenarios) got a weighted mean of 6.0, and indicator 9 (The system gives error messages that clearly tell me how to fix problems) got the lowest weighted mean of 5.7, both of which still denote "Agree".

The results imply that the system's usability is very high. The respondents are comfortable using the system, and it is easy for them to find the information they need. The organization of information on the system screens is clear, and the system has all the functions and capabilities they expect it to have.
On the other hand, the respondents also like using the interface of this system and can effectively complete their work using it. The interface of this system is pleasant since the developer made it user-friendly, and they believe they became productive quickly using the system. One of the lowest-rated indicators tallied by the researcher is "the information is effective in helping me complete the tasks and scenarios", since there are no tasks and scenarios in the system, which only identifies corals. The respondents are satisfied with how easy the system is to use; it was simple to use, and the information provided for the system is easy to understand. Overall, they are satisfied with the system.

Conclusion

Generally, the results imply that the system is effective in its usability. The results also suggest that the system is very comfortable to use and that it is easy to find the information needed. Furthermore, this implies that the image recognition of corals is very informative and useful, since the respondents feel comfortable and find the information they need. The organization of information on the system screens is clear, and the system has all the functions and capabilities the respondents expect it to have, which is clear evidence that the coral image recognition app is effective and feasible to use.


REFERENCES

1. https://www.unep.org/explore-topics/oceans-seas/what-we-do/protecting-coral-reefs/why-protecting-coral-reefs-matters

2. https://www.qm.qld.gov.au/microsites/biodiscovery/05human-impact/importance-of-coral-reefs.html

3. https://www.unep.org/explore-topics/oceans-seas/why-do-oceans-and-seas-matter

4. https://coral.org/wordpress/wp-content/uploads/2014/02/coastaldev.pdf

5. https://www.mdpi.com/2079-9292/10/1/81

6. What do corals need: https://coral.org/en/coral-reefs-101/what-do-corals-reefs-need-to-survive/

7. Types of coral reef formations: https://coral.org/en/coral-reefs-101/types-of-coral-reef-formations/

8. Coral Reef Alliance: https://coral.org/en/

CURRICULUM VITAE

PERSONAL DATA

Name: Mike Cornito Albelda

Date of Birth: July 29, 1996

Place of Birth: Cuasi, Loon, Bohol

Age: 25

Sex: Male

Religion: Roman Catholic

Civil Status: Single

Citizenship: Filipino
Height: 5’5

Weight: 55

Language Spoken: Visayan, Tagalog, English

FAMILY BACKGROUND

Father’s Name: Laurentino Sarausa Albelda Jr.

Occupation: Barangay Captain

Mother’s Name: Camila Cornito Albelda

Occupation: House Keeper

EDUCATIONAL BACKGROUND

Elementary: Sto. Nino de la paz Elementary School

Tontonan, Loon, Bohol

2008-2009

Secondary: Loon South National High School

Cuasi, Loon, Bohol

2012-2013

Tertiary: Bohol Island State University – Calape Campus

San-Isidro, Calape, Bohol

2021-2022

WORK EXPERIENCE

SALES AGENT/SALES MARKETING: MetroWealth International Company

Galleria Luisa

Tagbilaran City

2014-2015

CAFÉ ATTENDANT: Café Ligbeans

J. Borja Street. Tagbilaran City

2015-2017

ENERGY CONSULTANT: BSGE Consumer’s Enterprises

Bohol Branch
2018-2019
