OptiScan: Early Warning Sign of Cataract Detection Using

Convolutional Neural Network

A Design Project
Presented to the Faculty of the

Department of Computer Engineering

College of Engineering and Architecture


Don Honorio Ventura State University

In Partial Fulfillment of

the Requirements of the Degree of

Bachelor of Science in Computer Engineering


By

Esguerra, John Lloyd G.

Garcia, Bryan Mel V.

Manganti, Christopher Clark C.

Manuel, Shane S.

Pineda, Ashley P.

Torres, Ian Gerald J.

May 2024

INTRODUCTION

Over 55% of blind people over the age of 65 lost their sight to cataracts, which account for 78% of the world's blindness in that age group. The human eye's lens makes vision possible; it is composed of water and some proteins and is normally translucent. In a cataract, however, the lens loses its transparency and becomes cloudy as proteins clump together. As a result, vision is impaired because light can no longer pass clearly through the eye. Cataracts are the most common cause of blindness around the world, especially in older people. The condition's earliest signs include watery eyes, blurry vision, and white spots visible in the eye. (M. R. Yadav et al., 2017)

In the Philippines, 62% of the more than two million individuals who suffer from vision impairment have cataracts. The majority of visually impaired Filipinos live below the national poverty line in rural locations where access to eye care treatments is limited. People who urgently require eye care cannot afford or obtain treatment because most of the country's 1,573 ophthalmologists work in national and regional government hospitals or in private clinics in urban areas. (The Fred Hollows Foundation, 2020)

Traditional methods of cataract detection face significant challenges, primarily

stemming from the reliance on specialized equipment and the need for trained

healthcare professionals for comprehensive and detailed examination. The

conventional method of diagnosing cataracts involves an ophthalmologist examining the patient's eyes using an ophthalmoscope, a slit-lamp microscope, or a similar tool to look for clouding of the normally clear lens of the eye. (P. Tripathi et al., 2022)

The costly nature of these diagnostic tools, coupled with the complexity of the procedures, makes cataract detection both time-consuming and financially burdensome. This problem is compounded in remote or underserved areas, where access to such specialized resources is limited or nonexistent.


Furthermore, the diagnostic accuracy of traditional methods is subject to variation

based on the expertise or specialization of healthcare providers. The interpretive

nature of results can introduce inconsistencies, potentially leading to delayed or

inaccurate diagnoses. The need for skilled professionals also contributes to disparities

in healthcare access, particularly in regions where there is a shortage of

ophthalmologists or eye care specialists.

The integration of technology, such as the OptiScan device, presents a new approach

to healthcare delivery. Devices equipped with high-resolution imaging capabilities

enable non-invasive and convenient data collection, making them especially valuable

for early detection of medical conditions. In the context of cataract diagnosis, the
OptiScan device offers new approaches by utilizing Convolutional Neural Networks

(CNNs), a powerful subset of artificial intelligence, for automated detection.

The OptiScan device is designed to overcome the limitations of traditional methods by

providing a user-friendly solution for initial cataract detection. Its portability and

simplicity make it accessible even in remote areas, addressing the disparity in

healthcare access. The incorporation of a CNN-based algorithm, specifically YOLOv8, enhances initial diagnostic accuracy, reducing the reliance on human interpretation.

The emergence of technology and advancements in artificial intelligence, particularly

Convolutional Neural Networks (CNNs), have opened new avenues for transforming
healthcare. The integration of high-resolution image-capture capabilities into devices

offers a promising opportunity to improve the early or initial diagnosis of cataracts. In response, this design study presents "OptiScan," a device that the researchers designed to capture high-resolution images of the eye and employ a CNN-based algorithm for initial, automated cataract detection and diagnosis.

This research focused on the development of the OptiScan device. The primary

emphasis lies in two key aspects: accuracy and accessibility. Ensuring the OptiScan

device's accuracy is important, as it directly influences its effectiveness in detecting

cataracts. Strict testing procedures and comparative analyses are utilized to evaluate

the device's precision in distinguishing between cataracts, normal eyes, and eyes with

diseases other than cataract. This thorough examination seeks to confirm the

OptiScan device's credibility and dependability for diagnosing cataracts.

Simultaneously, the researchers made sure the OptiScan device is accessible for

communities to use. The researchers created a simple and user-friendly interface that

is easy to set up and control.

The training data set comprised 3,213 images, limited due to data privacy restrictions

enforced by ophthalmologists and the limited availability of suitable images from online

sources. The ophthalmologists' concerns about patient privacy necessitated strict

measures to protect sensitive eye images, thereby constraining the amount of data

that could be utilized. Additionally, the lack of publicly available eye images further restricted the size of the training set. To compensate, the gathered images were validated one by one by the official advising ophthalmologist. Despite these limitations, the

researchers were able to effectively use the available data to train the system,

ensuring it met the required performance standards.

The study examined the technical aspects of the OptiScan device and assessed its effectiveness in the diagnosis and detection of cataracts. In the long run, the project

seeks to assist ophthalmology and the medical community by offering an innovative

remedy that might transform the cataract diagnostic process and improve its

effectiveness, accuracy, and accessibility.

The existing methods for cataract detection rely on conventional diagnostic tools and

techniques that often require specialized equipment and trained healthcare

professionals. These methods can be expensive, time-consuming, and not easily accessible, particularly in remote or underserved areas served only by primary care doctors. In addition, based on the expertise of a healthcare professional, the accuracy

and effectiveness of cataract diagnosis may vary. The following problems are what the

OptiScan system aims to address:

Inadequate Accessibility to Cataract Detection. Current methods for detecting

cataracts are not easily accessible in remote or underserved areas and primary health

clinics. This limits the ability to conduct screenings and diagnoses, particularly in

regions where specialized equipment and trained professionals are limited.


Variability in Cataract Diagnosis. Depending on the expertise of a healthcare provider,

the accuracy and effectiveness of cataract diagnosis may differ, especially in

underserved areas where basic clinics are the only facilities available and the practicing healthcare professionals specialize only in general health rather than in cataract or eye disease diagnosis.

Insufficient Presentation of Results. In underserved areas, due to the lack of eye care equipment, healthcare providers and professionals resort to conventional methods such as manually drawing the structure or shape of the observed cataract for the patient.

The OptiScan project aims to develop, implement, and evaluate a device for initially identifying or diagnosing cataracts, integrating image capture with Convolutional Neural Networks (CNNs). Specifically, the core objectives of the

OptiScan project encompass, but are not limited to, the following:

To create a hardware device capable of capturing images of the eye, focused on

functional accessibility and reliability, that empowers primary healthcare providers in

remote areas to conduct initial cataract disease screenings.

To develop a device that offers accurate standardized diagnostics with a CNN-based

machine learning model to automatically identify cataracts for effective initial

assessments and recommendations.


To provide examination results by integrating a printing function into the device, enabling visualization of the eye and facilitating detailed patient education about the patient's condition.

These objectives collectively strived to make the initial screening of cataracts in underserved areas more accessible, potentially facilitating early interventions to reduce the burden of this vision-impairing condition. The OptiScan project aimed to transform cataract diagnosis and increase access to quality eye care, especially in regions where such resources are limited; it does not cover areas with well-established eye care facilities and resources.

As part of the research objectives, the researchers narrowed the classification process

to three main classification categories: "cataract detected," "normal eye," and "not

normal". Hence, it is not capable of identifying or diagnosing other specific eye

conditions other than explicitly detecting cataracts. By focusing on these broader

categories, the researchers effectively assessed and classified a patient's eye in terms

of the early signs of cataracts and initial diagnosis. Additionally, this approach

facilitated more precise data analysis and interpretation, ultimately enhancing the

accuracy and reliability of research findings.
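To make this three-way scheme concrete, the following minimal sketch maps a list of (class name, confidence) detections to the three categories; the label strings and the 0.5 confidence threshold are illustrative assumptions, not the researchers' exact implementation.

    # Minimal sketch: map CNN detections to the study's three categories.
    # The class names and the threshold are illustrative assumptions.
    def triage(detections, threshold=0.5):
        """detections: list of (class_name, confidence) pairs."""
        strong = [(name, conf) for name, conf in detections if conf >= threshold]
        if not strong:
            return "normal eye"                    # nothing abnormal found
        name, _ = max(strong, key=lambda d: d[1])  # highest-confidence finding
        return "cataract detected" if name == "cataract" else "not normal"

    print(triage([("cataract", 0.91)]))  # -> cataract detected
    print(triage([("opacity", 0.62)]))   # -> not normal
    print(triage([]))                    # -> normal eye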

This study is focused on the development, implementation, and evaluation of a device

for cataract detection that captures images and uses convolutional neural networks

(CNNs). The project uses Raspberry Pi 4 as the single-board computer to drive all of
the system's components, including a high-resolution camera for visual and image

inputs to the system, an LCD display for displaying metrics and statuses while using

the device, and a custom circuit that is used to calibrate and control the device,

specifically the camera.

The developed system can also adapt its Raspberry Pi board to a monitor, allowing the user to access the user interface for a more thorough examination of the captured image and a live view of the eye. It also serves the functionality of printing the results and detection metrics such as the confidence level and other necessary details.

To ground this study, the researchers reviewed related literature throughout the initial investigation of their field of study.

This cataract detection system, based on a Single-Board Computer and utilizing a Deep Convolutional Neural Network with GoogLeNet Transfer Learning and digital image processing, aims to identify cataract types without the need for dilating drops. It also assesses cataract severity, grade, color, area, and hardness, and can display, save, search, and print partial diagnoses. Employing descriptive quantitative research and methodologies such as the Waterfall and Agile SDLC and Evolutionary Prototyping, the system was tested by cataract patients, ophthalmologists, engineers, and IT professionals. Assessment using ISO/IEC 25010:2012 models revealed high acceptance and effectiveness levels. Accurate and reliable cataract detection was evident, with notable advancement over current methods of examination. This system represents a modern approach to cataract detection for all patients. (Ivan Dave Agustino et al., 2019)

In the study titled "SBC-Based Cataract Detection System Using Deep Convolutional Neural Network for Asia-Pacific Eye Center Biñan Branch," the researchers employed ISO/IEC 25010:2012 models to assess the accuracy,

functionality, and efficiency of the developed cataract detection system. These models

offered a comprehensive framework for evaluating different aspects of the system,

ensuring it met quality standards and user needs. By using standardized evaluation

criteria, the researchers could enhance the reliability and usability of the technology,

ensuring it effectively served its intended purpose at the Asia-Pacific Eye Center Biñan

Branch.

The YOLO (You Only Look Once) CNN Algorithm represents an important

development in computer vision technology. It is renowned for its efficiency and

accuracy in real-time object detection tasks in images and videos.


Significant effectiveness was revealed in the study through the performance of an AI algorithm using YOLOv5 for diagnosing cataract conditions. Across various eye

conditions, including cataracts, the algorithm showcased high diagnostic accuracy.


For instance, it achieved an area under the receiver operating characteristic (ROC)

curve of 0.992 for cataracts, indicating excellent discrimination ability. Moreover, when

triaging referral suggestions based on smartphone images, the algorithm

demonstrated notable performance, exhibiting high sensitivity and specificity for

different urgency levels. These levels included 'urgent', 'semi-urgent', 'routine', and

'observation', showing its ability to prioritize cases based on severity. Overall, the

study concluded that the AI system, leveraging YOLOv5, displayed promising


performance in diagnosing cataracts and corneal diseases, underscoring its potential

as a valuable diagnostic tool in clinical settings.

YOLO, or You Only Look Once, stands out as a real-time object detection algorithm
that surpasses various CNN-based counterparts in terms of both speed and accuracy.
The ability to detect multiple objects in the same image is its distinguishing feature.

This capability sets it apart from other algorithms that typically focus on identifying one

object at a time. One of the notable advantages of YOLO is its speed, attributed to its

single-shot approach for object detection. Unlike many CNN-based algorithms that

necessitate multiple passes through an image, YOLO can swiftly detect objects in a

single iteration, enhancing efficiency. (Manish Chablani, 2017)


Moreover, YOLO demonstrates impressive accuracy, particularly on challenging

datasets, surpassing numerous CNN-based object detection algorithms on standard


benchmarks. This attribute makes it a reliable choice for various applications, such as

self-driving cars and video surveillance. In summary, YOLO's speed, accuracy, and

ability to detect multiple objects simultaneously position it as a highly effective and

efficient solution for real-time object detection tasks, outperforming several other CNN-

based approaches. (Manish Chablani, 2017)

The YOLO (You Only Look Once) CNN algorithm is a significant advancement in
computer vision, renowned for its speed and accuracy in detecting objects in images

and videos. Unlike other methods that can only identify one object at a time, YOLO can

simultaneously detect multiple objects within the same image, making it highly efficient.

It excels in recognizing objects even in challenging situations, making it valuable for


applications such as self-driving cars and security cameras. In the context of the

OptiScan study, which focuses on detecting cataracts in eye images, using the YOLO

model allows researchers to quickly and accurately identify cataracts, enhancing the

system's effectiveness. Thus, the study employs the YOLO framework to utilize its

real-time object detection capabilities for accurate analysis of early signs of cataracts.

In this study, the researchers utilized YOLOv3 to automatically identify and classify cataracts from eye lens videos. The dataset included videos from 76 eyes of 38 individuals, collected via a slit lamp. Data were gathered using five random methods to enhance accuracy, and the videos together yielded 1,520 images. These images were divided into training, validation, and test sets in a 7:2:1 ratio. (Shenming Hu et al., 2021)

To maintain algorithm accuracy, data collection employed five random methods, with

each video lasting up to 10 seconds. From these videos, 1,520 images were extracted
and split into training, validation, and test sets. Verification on a clinical data test set

yielded a 94% accuracy rate. Additionally, frame detection was completed within 29

microseconds, enhancing detection efficiency. (Shenming Hu et al., 2021)
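As a concrete illustration of the 7:2:1 split described by Hu et al., the following is a minimal sketch; the directory name, file extension, and random seed are assumptions for illustration.

    # Minimal sketch of a 7:2:1 train/validation/test split of extracted frames.
    import random
    from pathlib import Path

    def split_dataset(image_dir, seed=42):
        images = sorted(Path(image_dir).glob("*.jpg"))
        random.Random(seed).shuffle(images)  # fixed seed for reproducibility
        n = len(images)
        n_train, n_val = int(0.7 * n), int(0.2 * n)
        return {
            "train": images[:n_train],
            "val": images[n_train:n_train + n_val],
            "test": images[n_train + n_val:],  # remaining ~10%
        }

    splits = split_dataset("eye_frames")  # e.g. the 1,520 extracted frames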

This algorithm's efficiency allows the use of lens scan videos as the primary research

object, improving screening accuracy. This aligns with the actual cataract diagnosis

process and enhances detection capabilities for non-ophthalmologists. It also

increases access to cataract screening in underserved areas. (Shenming Hu et al.,

2021)

In this study, YOLOv3 was utilized to detect and classify cataracts from eye lens

videos, employing a limited dataset consisting of videos from 76 eyes of 38 individuals.

Despite the small sample size, the algorithm's performance was evaluated, highlighting

its potential applicability even with a constrained dataset. The study underscores the

importance of exploring algorithm performance under limited data conditions, shedding

light on its feasibility and effectiveness in practical settings.


Cataract detection and diagnosis represent critical areas in ophthalmology, with

advancements in technology leading to innovative approaches aimed at early and

accurate identification, ultimately enhancing patient care and outcomes.

The most frequent cause of vision impairment worldwide is cataracts, and prompt

treatment depends on early identification. Traditional techniques of diagnosing

cataracts require ophthalmologists' subjective evaluations, such as slit-lamp exams

and visual acuity tests. Computer-aided diagnostic (CAD) systems for detecting

cataracts have been the subject of several studies, with promising findings regarding

accuracy and effectiveness. (Ishitaa Jindal et al., 2019)

One of the most common causes of blindness, particularly in the elderly, is cataract.

Nearly half of India's elderly population has cataracts by the age of 80, or has had

surgery to treat them. According to surveys conducted by the WHO and NPCB, there

are over 12 million blind persons in India, and 80.1% of them are blind due to

cataracts. To prevent total blindness, it is essential that cataract cases are identified

early on. This study uses digital image processing on ocular scans to identify the

presence and severity of cataracts. Images of eyes with various degrees of cataracts

are applied to a dataset using two different image processing techniques. An

automatic detection algorithm based on feature extraction is used in the first method.

The second method uses an area-based methodology to assess the degree of

cataract. (Ishitaa Jindal et al., 2019)


Cataracts are a major problem for many people, especially as they age. If they are found early, they can be treated more effectively, preventing blindness. At present, doctors mostly rely on their judgment and a few tests to diagnose cataracts, but new computer systems are also helping. In this study, specialized computer programs are used to examine pictures of eyes and find cataracts, and different approaches are compared to see which one works best. The goal is to make finding cataracts easier and faster so that people can be treated quickly and avoid losing their sight.

Convolutional Neural Networks (CNNs) have modernized medical imaging, offering powerful tools for evaluating complicated medical data and providing insights essential for diagnosis, treatment planning, and patient care in various healthcare disciplines.

In the field of medical image analysis, CNNs have achieved remarkable results. They

are great at extracting features and recognizing patterns, which makes them suitable

for finding cataracts in images. It is generally known that a number of CNN

architectures, including VGG, Inception, and ResNet, are good at detecting cataracts

when used for ophthalmic image analysis tasks. (Mitchell Finzel, 2017)

Convolutional neural networks have been used more often in numerous applications

involving medical imaging over the last five years. This is due to their

versatility in a variety of medical imaging applications, as well as their rise in popularity

following their victory in the 2012 ImageNet competition. These uses are very diverse,
ranging from the identification of Alzheimer's illness in MRIs to the segmentation of

knee cartilage, among many other things. (Mitchell Finzel, 2017)

The adoption of convolutional neural networks (CNNs) in medical image analysis,

particularly for cataract detection, represents a significant advancement in the field.

CNNs have demonstrated remarkable efficacy in extracting features and recognizing

patterns from complex images, making them well-suited for identifying cataracts in

ophthalmic images. Their success can be attributed to their ability to utilize large

datasets and their adaptability across various medical imaging applications. In this

context, the YOLO (You Only Look Once) model, which employs CNNs for its AI

detection framework, enhances the efficiency and accuracy of real-time object

detection. This integration of CNNs in YOLO further strengthens the AI model's

capability to quickly and accurately identify early signs of cataracts, thereby improving

diagnostic outcomes.

Advances in ophthalmic imaging technologies have transformed cataract assessment,


enabling more precise diagnosis and personalized treatment strategies, ultimately

improving patient outcomes and quality of life.

Cataract surgery continues to improve thanks to new technology. One notable advance is optical coherence tomography (OCT), which assists with measurements and diagnosis before, during, and after surgery. Femtosecond lasers and robotic systems are making surgery more precise, especially for delicate steps such as making incisions in the eye. Feedback irrigation control during surgery makes the whole process safer. Techniques such as guided implantation of special lenses and choosing lens powers based on refraction help ensure that patients get the best vision possible. New lens designs, such as trifocal and extended-depth-of-focus lenses, can improve vision in different ways, while options like pinhole lenses and supplementary lenses are changing how the right lens is chosen for each person. All these advances mean that cataract surgery is becoming even more precise and effective, which is good news for patients. (Roberto Bellucci, 2019)

These emphasize the importance of precision and effectiveness in cataract surgery,

which directly correlates with the need for accurate detection. Understanding these

advancements is crucial for cataract detection devices, as it provides the right insights

or information, identifies key technologies, and highlights potential areas for

collaboration and integration. Additionally, awareness of emerging lens technologies

informs future trends and opportunities for device expansion or integration.

The impact of age-related changes on cataract development highlights the intricate interaction between aging processes and ocular health, revealing significant factors that affect cataract onset and progression.


Visual impairment is a major issue worldwide and is linked to higher mortality across different ethnic groups. Cataracts, especially age-related types such as nuclear, cortical, and posterior subcapsular (PSC) cataracts, are a major reason why people have trouble seeing. Even though having age-related cataracts appears to increase the risk of death, it is not yet clear how each type of cataract affects mortality. Doctors and eye experts need to understand this because each type of cataract has different causes, treatments, and impacts on eyesight. Determining how cataracts relate to mortality could reveal more about why they happen. If one type of cataract is linked to a higher risk of death, using imaging to look for that type could help assess overall health and how the body ages. This question is important for public health because cataracts are becoming more common worldwide. It is still uncertain whether age-related cataracts directly raise the risk of death or whether the link simply reflects that older people tend to get them. (Hongpeng Sun et al., 2014)

Understanding the impact of age-related changes on cataract development is crucial

for effective cataract detection. As we age, our eyes undergo various changes, such

as damage from sunlight and decreased antioxidant defense, leading to the

accumulation of substances that cloud the lens and cause cataracts. Recognizing

these changes allows for better prevention and treatment strategies, ultimately aiding

in the preservation of vision and older individuals' quality of life. This knowledge is

essential for cataract detection devices as it gives insight into the fundamental

mechanism and risk factors associated with cataract formation, guiding the
development of more accurate and timely detection methods. By focusing on age-

related changes, cataract detection devices can target individuals at higher risk,

enabling earlier intervention and improving overall eye health outcomes.

The influence of lifestyle factors on cataract prevention underscores the significant

impact of individual choices and behaviors in reducing the risk of cataract

development, highlighting the importance of adopting healthy habits to promote eye

health.

One's lifestyle choices make a big difference in preventing cataracts. Eating a balanced diet full of antioxidants, vitamins, and minerals helps the eyes fight off the oxidative stress that can lead to cataracts. Avoiding smoking and cutting back on alcohol can protect the eyes from damage that can speed up cataract formation. Wearing sunglasses and hats in the sun shields the eyes from harmful UV rays. Keeping a healthy weight and managing conditions like diabetes also matters, as these are linked to cataracts. Staying active is another way to improve general health and lower the risk of cataract development with age. By making these small lifestyle changes, a person is not just looking after the body but also safeguarding vision for the long haul. (Shuai Yuan et al., 2022)

Senile cataract, a common eye condition impacting around 17% of people, is a major

contributor to vision loss worldwide. While past studies have suggested links between

metabolic syndrome and certain lifestyle habits like coffee, alcohol, and smoking with
the risk of senile cataracts, there's been uncertainty about whether these factors

actually cause cataracts. To get clearer answers, scientists used a method called

Mendelian randomization. This approach uses genetic variations as a kind of natural


experiment to understand better if there's a cause-and-effect relationship between

things like body weight, diabetes, blood pressure, lifestyle choices, and the risk of

developing cataracts. By doing this, they hope to shed light on better ways to prevent senile cataracts in the future. (Shuai Yuan et al., 2022)

In connection with developing a cataract detection device, understanding the influence

of lifestyle factors on cataract development is important. By incorporating knowledge

about these factors into the design and functionality of the device, such as integrating

features for assessing dietary habits, smoking history, or sun exposure, it becomes

possible to provide a comprehensive assessment of an individual's risk for cataracts.

Moreover, leveraging genetic variations and other biomarkers associated with lifestyle

choices can enhance the device's predictive capabilities, enabling earlier detection and

intervention. Thus, by considering lifestyle factors in tandem with clinical and genetic

data, the cataract detection device can offer personalized insights and

recommendations for mitigating cataract risk and preserving visual health.

The accessibility and utilization of eye care services in underprivileged areas
emphasize the importance of ensuring equitable access to vision care, addressing
disparities in healthcare access, and promoting eye health awareness among

vulnerable populations.
Ensuring universal access to eye care, regardless of geographical location or financial

status, is crucial for maintaining overall health and preventing vision-related

complications. However, disparities in access to eye care services persist, particularly

in rural or economically disadvantaged areas. Limited availability of eye clinics and

transportation challenges often hinder individuals from seeking necessary care,

leading to untreated vision issues and potential blindness. To address this,

collaborative efforts involving governments, charities, healthcare professionals, and

local communities are essential. Improving transportation infrastructure, establishing

affordable options, and increasing awareness about eye health can mitigate barriers to

accessing eye care, enabling individuals to receive timely treatment and support.

Additionally, expanding clinic facilities, training more eye care professionals, and

promoting preventive measures can help alleviate the burden of vision problems in

underserved rural communities, ultimately enhancing overall quality of life and societal

participation. (Nana Yaa Koomson et al., 2019)

By understanding the barriers individuals face in accessing traditional eye care

services, such as limited clinic availability and financial constraints, people can tailor

the design and deployment of the detection device to reach these populations

effectively. This involves implementing strategies to ensure affordability, simplicity of

use, and compatibility with existing healthcare infrastructure in rural areas.

Additionally, leveraging mobile and telemedicine technologies can extend the reach of
the device, enabling remote screening and consultation for individuals who lack access

to traditional eye care facilities.

Advanced technologies for remote eye examinations are reshaping ophthalmic care,

providing innovative solutions that enable convenient and efficient assessment of eye

health from a distance, transforming the field of eye healthcare delivery.

Emerging technologies for remote eye examinations, like teleophthalmology,

smartphone applications, AI systems, and remote monitoring devices, are transforming

how eye care is administered. These advancements are simplifying access to eye

care services for individuals in distant or rural areas, which is incredibly significant.

With these innovations, healthcare providers can now conduct virtual appointments,

analyze eye images remotely, and monitor changes in eye health over time. This

facilitates early detection of eye issues and prompt intervention, benefiting patients and

reducing healthcare expenses. Moreover, remote examinations reduce the need for

extensive travel and waiting times, enhancing convenience for all involved. Ultimately,

these cutting-edge technologies have the potential to revolutionize the delivery of eye

care, making it more accessible and efficient for people worldwide.

This study holds significant importance and potential impact for various stakeholders,

benefiting a range of individuals and groups:

Patients with suspected vision impairments - The primary beneficiaries are individuals

who may have early signs of cataracts. OptiScan aims to enable early and accurate
diagnosis, leading to timely interventions that can prevent severe vision loss and

enhance the patients' quality of life.

Healthcare Professionals in Underserved Areas - Ophthalmologists, optometrists,

other eye care specialists, and even basic healthcare professionals in communities

with limited accessibility stand to benefit from OptiScan's diagnostic capabilities. The

device can assist them in providing initial cataract diagnoses or assessments,

ultimately enhancing patient care and doctor recommendations.

Underserved and Remote Communities - OptiScan has the potential to reach areas
with limited access to specialized eye care facilities. These communities can benefit

from improved access to cataract diagnosis and early intervention.

Primary Healthcare Systems and Providers - More efficient cataract diagnosis can

reduce the burden on healthcare systems and providers. Early cataract detection and

treatment can reduce costs and optimize the use of available resources.

Future Researchers - The development of OptiScan represents a big step forward in eye care, especially for detecting cataracts early. Future researchers can use OptiScan as a starting point to improve how cataracts are found and diagnosed. By building on this work, they can help make eye care better and more accessible. OptiScan demonstrates the importance of detecting eye problems early and offers ideas on how to do it even better in the future.
METHODS

The methods section laid out a detailed plan for the research, covering steps like

procedures, tool selection, organizational checks, system building, material details,

testing, assessment, and application strategies. The researchers engaged with

stakeholders and used appropriate tools to gather valuable insights. Through thorough

analysis and specification, the researchers defined what was needed for the project,

including both software and hardware aspects. The researchers provided clear

descriptions of the materials used, ensuring transparency. A structured testing and

evaluation plan was put in place to ensure the system worked effectively. The methods

section concluded by outlining strategies for translating research findings into practical

applications, ensuring practical outcomes in real-world settings.

In this study, quantitative research instruments were employed to gather valuable

insights from different stakeholder groups involved in the field of ophthalmology,

specifically cataract detection and diagnosis, and the implementation of the OptiScan
system. These research instruments were carefully chosen to ensure a comprehensive understanding of the subject matter. To get a comprehensive look at the data in each phase of the study, specifically for the data-gathering procedures and for evaluation and scoring, interviews and 4-point Likert scale-based questionnaires were employed.

The research design chosen for this study was action research, which is well-suited for

investigating cataract disease detection and finding ways to improve it. Action research
focused on collaborating with others to solve real-world problems and make positive

changes. By working closely with different people, including doctors, administrators,

and patients, the researchers gained a better understanding of cataract detection from

various perspectives. This approach allowed the researchers to identify practical

solutions to improve detection methods. The researchers also adjusted their methods

based on the resources they had and the constraints they faced. Action research

involved an iterative process of learning and refining their approach, ensuring that their findings kept improving. Ultimately, their goal was to contribute to better cataract

detection methods and make a positive impact in this area (Lewin K., 1946).

The researchers utilized various research instruments and tools to gather and analyze

data effectively. These included survey questionnaires, which allowed the researchers

to collect structured responses from participants regarding their experiences,

preferences, and opinions related to eye health and scanning technologies.

Additionally, the researchers employed techniques such as weighted mean and

composite mean to statistically analyze the gathered data, providing insights into
trends, patterns, and overall outcomes of the research. These tools enabled the

researchers to interpret the collected information accurately and derive meaningful

conclusions to further enhance the OptiScan system's development and

implementation.
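As an illustration of the weighted-mean and composite-mean computations mentioned above, the following minimal sketch scores 4-point Likert responses; the response counts are invented for demonstration.

    # Minimal sketch of weighted-mean and composite-mean scoring for a
    # 4-point Likert questionnaire; the counts below are made-up examples.
    def weighted_mean(counts):
        """counts[i] = number of respondents choosing rating i + 1 (1..4)."""
        total = sum(counts)
        return sum(rating * n for rating, n in enumerate(counts, start=1)) / total

    items = [[0, 2, 10, 8], [1, 3, 9, 7], [0, 1, 12, 7]]   # one list per item
    item_means = [weighted_mean(c) for c in items]
    composite = sum(item_means) / len(item_means)          # composite mean
    print([f"{m:.2f}" for m in item_means], f"composite={composite:.2f}")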

The researchers focused on cataract disease and its implications for early
detection. To explore the impact of cataract disease and detection in the specific
context of the study, the researchers engaged a representative group of stakeholders.

The study incorporates evaluators who are Ophthalmologists, Health Care Clinic Staff,

Other Medical Field Officers, Computer Engineers, Electronics Engineers, and IT

professionals.

For the evaluation process, the demographic profile of the evaluators was

documented. For those in the Technology/Technical Professions group, details such

as their highest educational attainment and position in their institutions were recorded.

Meanwhile, for the Ophthalmologists, Health Care Clinic Staff, and Other Medical Field

Officers, information regarding their respective positions was gathered. This data

collection aimed to provide insights into the qualifications and expertise of the

evaluators involved in assessing the device's detection accuracy.

In light of the practical limitations and the restricted quantity of individuals, the

researchers chose to pick participants by purposive sampling. The deliberate selection

of people with particular traits or experiences related to the study goals is known as purposive sampling. The approach was selected because it allows researchers to

concentrate on patients with different eye diseases and professionals who specialize in

detecting cataracts, including ophthalmologists. Purposive sampling allows the

researchers to concentrate on important stakeholders and obtain insightful viewpoints

on cataract disease detection, even though it might not yield a representative sample

of the general community. Patients who already suspected they had an eye disease were the specific group that the researchers took into consideration when choosing respondents. These patients went to Kapampangan Development Foundation, Inc., where the device was utilized, because they needed a professional assessment of their eye conditions. As a result, the device was used to evaluate any patient who asked for a check-up during the utilization trials. In this study, to ensure

the protection of patient privacy, actual images of the eyes tested in the actual testing

of the system were not included. This decision was made following the strict

guidelines of the researchers' ophthalmologist consultant and in compliance with the

Data Privacy Act of 2012. By adhering to these regulations, the study maintained the

confidentiality and privacy of all patient data. It is crucial to select particular disciplines

and persons for project evaluation and feedback since they possess the knowledge

and perspective necessary to evaluate and provide insightful information. They are

chosen by researchers due to their knowledge and experience, guaranteeing an

exhaustive and well-informed evaluation procedure that improves the accuracy and

quality of the project.

Agile project management is used in the System Development Life Cycle (SDLC) that

was selected for the OptiScan study. Agile is an iterative methodology that places a

high priority on providing value to the client through regular communication and
cooperation. It is governed by the Agile Manifesto's tenets, which place an emphasis

on people and their interactions, functional software, customer cooperation, and

adapting to change rather than sticking to a strict schedule. Throughout the

development process, the research team can quickly respond to changing

requirements and client needs because of this methodology's flexibility and


adaptability. Agile is frequently linked to software development, but its ideas can be

used for any project that calls for teamwork and iteration, which makes it a suitable fit

for the OptiScan investigation. (Kosta Suranjit, 2023)

The Agile System Development Life Cycle Methodology, a flexible way of

organizing and overseeing system development projects, is shown in Figure 1. The

project is divided into iterative phases by agile, which include planning, designing,

developing, testing, deploying, reviewing, and maintaining. In this specific instance and

its applications, the researchers apply the SDLC model to Agile methodology principles

rather than a particular Agile framework. These guidelines in particular encourage

adaptation and flexibility, which helps the researchers swiftly adjust to the study's

feedback-driven design and changing requirements. Throughout the development


process, the agile methodology places a strong emphasis on community and

participant interaction, cooperation, and continuous improvement. The researchers

make sure that the OptiScan system is created iteratively, incorporating feedback at

each stage and providing stakeholders with additional value by adopting Agile

principles.

The researchers create the system development life cycle (SDLC) model, choose

suitable research instruments for data collection, list the materials required, and carry

out an organizational evaluation during the planning or requirements-gathering stage.

As they proceed to the design phase, the researchers define and analyze the

requirements, design the system interface, produce block and circuit diagrams, and
use process flowcharts to build logical specifications. The researchers follow a

methodical process during development, either by experiment or in chronological

sequence, and they outline the hardware and software needs. Additionally, the

researchers determine crucial elements and equations to test and assess the system's

functionality during the testing phases. Ultimately, the OptiScan system's efficacy and

efficiency are evaluated by the researchers using a variety of criteria and assessment

formulae throughout the maintenance and evaluation phase. Because of its flexibility

and application of Agile system development concepts, the development process

continues in cycles and iterations long after the system has been launched, with

additional developments being made.

Getting into the planning phase, the researchers' focus lay in gathering comprehensive

insights to address the identified challenges within cataract detection and diagnosis

effectively. Engaging with medical professionals and healthcare stakeholders, the

researchers assessed the concerns to understand the issue thoroughly. This process

involved conducting interviews and discussions to discern the intricacies of their

experiences. Subsequently, the researchers compiled a detailed list of requirements

for the initial diagnosis or cataract detection. These requirements encompassed the

development of the OptiScan device capable of capturing detailed images of the eye,

integration with machine learning algorithms using YOLOv8, and implementation of

user-friendly interfaces and printing functionality to ensure accessibility and

comprehension of diagnostic results.


The planning phase included listing all the requirements needed for the hardware,

software, and documentation of the OptiScan project. The requirements and following

plans for the system workflow were identified through the tables and figures in the

subsequent phases of the researchers’ system development lifecycle.

Figure 2 represents the system flowchart. It started with component initialization, covering both software and hardware. The system then proceeded to different processes, such as loading the CNN model for eye detection and initializing landmarks. The flowchart also visualized the process of diagnosing and detecting cataracts through the software application. A fallback system and a calibration mode for the camera position were included in case camera initialization failed. The system then proceeded to capture images for processing and detection. After the procedure, the system generated different filtered and raw images to compile the findings. The findings were also printed to provide a hard copy of the results.
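The camera-initialization fallback in the flowchart could look like the following minimal sketch, assuming OpenCV's VideoCapture is used for the live view; the calibration routine is a hypothetical placeholder, not the researchers' actual code.

    # Minimal sketch of camera initialization with a calibration-mode fallback.
    import cv2

    def run_calibration_mode():
        pass  # hypothetical placeholder: reposition camera, prompt the user

    def open_camera(index=0, retries=3):
        for attempt in range(retries):
            cap = cv2.VideoCapture(index)
            if cap.isOpened():
                return cap  # camera ready for live view and capture
            cap.release()
            print(f"Camera init failed (attempt {attempt + 1}); calibrating")
            run_calibration_mode()
        raise RuntimeError("Camera could not be initialized")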

Figure 3 represents the OptiScan system's block diagram. It comprised the Raspberry Pi 4 Model B with 8GB RAM. The RPi4 was the main board that controlled all the other modules and components of the system, including the High-Quality Camera that supplied the system with visual inputs and images for processing. Through the display, the system provided feedback, current status, and metrics; a system light allowed better eye observation, and system buttons served as controls for calibration and inputs. A 5V, three-ampere power supply powered the system.


Figure 4 illustrates a Dual ATmega Schematic Diagram designed for controlling

stepper motors and facilitating control over camera position. This circuit integrated two

ATmega microcontrollers, each dedicated to effectively managing one stepper motor

operation. Additionally, the ATmega microcontrollers served as intermediaries,


enabling quick and effective control over the camera, which was connected to the

Raspberry Pi single-board computer.
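A minimal sketch of how the Raspberry Pi side might send position commands to the ATmega intermediaries over a serial link is shown below; the port name and the single-letter command protocol are assumptions for illustration, not the actual OptiScan protocol.

    # Minimal sketch of Raspberry Pi -> ATmega stepper commands via pyserial.
    import serial  # pyserial

    link = serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=1)

    def nudge_camera(axis, direction):
        """axis: 'X' or 'Y'; direction: '+' or '-' (one motor step)."""
        link.write(f"{axis}{direction}\n".encode())  # e.g. b"X+\n"
        return link.readline().decode().strip()      # ATmega acknowledgment

    nudge_camera("X", "+")  # for example, move the camera one step right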

Figure 5 illustrates the structural architecture of the system. Most modules in the

OptiScan device were connected to a Raspberry Pi single-board computer (SBC). A

touchscreen display showed the OptiScan Python App, while a 64GB micro SD card

provided storage. The Raspberry Pi Camera was paired with a lamp for imaging, and a

Canon Printer was used for printing results. Additionally, there was a Dual Atmega

board for joystick control of the dual-axis system, and a TB6600 Motor Driver regulated

power to stepper motors. Lastly, a power supply unit provided electricity to the entire

system, along with a head-chin rest for patient positioning. A separate microcontroller

was included in the system to control the dual-axis platform. This was due to the

motors utilizing high voltages and currents in their operation; separating the controllers also prevented current and voltage fluctuations from affecting major components such as the Raspberry Pi microcomputer and its operation.

Figure 6 showed the Single Window Graphical User Interface and Preview Modes:

This feature offered users a convenient interface where all tools and options were

available within one window. Additionally, it included preview modes, enabling users to
see the eye with various visualization techniques such as grayscale, a binarized

image, which specifically showed the shape of an observation in the eye, and a color

map. A grid and eye placement guidelines were also present, as well as statistics of

the detection runtime labeled as “Optistats” in the right-hand corner.

The OptiScan system had several color modes to improve eye exams: Normal Color,

Grayscale, Binarized, and Color Map. Normal Color Mode showed the eye in its

natural colors for a standard view. Grayscale Mode changed the image to shades of

gray, which helped to see fine details and structures in the eye. Binarized Mode

converted the image to black and white, making it easier to see the shape and edges

of cataracts. Color Map Mode used a gradient of colors to show light intensity and

depth, with green being the darkest and red the lightest. This mode helped to visualize

the thickness and reflectivity of a cloudy or white part of the lens, as well as the different parts of the eye. Each mode highlighted different features, making the

examination more complete and detailed.
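The four preview modes can be reproduced with a short OpenCV sketch such as the following; the threshold value and the JET colormap are illustrative choices and may differ from those used in the actual OptiScan application.

    # Minimal sketch of the four preview modes using OpenCV.
    import cv2

    frame = cv2.imread("eye.jpg")                    # Normal Color mode
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # Grayscale mode
    _, binarized = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)  # Binarized
    color_map = cv2.applyColorMap(gray, cv2.COLORMAP_JET)            # Color Map

    for name, img in [("normal", frame), ("gray", gray),
                      ("binarized", binarized), ("colormap", color_map)]:
        cv2.imwrite(f"preview_{name}.png", img)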

The Optiscan project integrated a combination of software and hardware components

to ensure the efficiency and effectiveness of the solution. Below, the researchers

outlined the specifications of both the software and hardware aspects, detailing the

essential elements of the project development.

Table 1 outlined the Python System Specifications for Personal Computers. For the

Operating System, it was recommended to use Windows 10, macOS, or Linux. The
minimum requirements included Windows 7 or later, macOS, or Linux. Regarding the

Processor, a recommended speed of 2.60 GHz was suggested, with a minimum


requirement of 1.30 GHz. For disk space, it was recommended that at least 3 GB be

available, with a minimum requirement of 1 GB.

Python was an interpreted, object-oriented, high-level programming language with dynamic semantics. Its adaptability made it ideal for Rapid Application Development and for use as a scripting or glue language to connect multiple components, thanks to its extensive built-in data structures, dynamic typing, and dynamic binding capabilities. One of Python's standout features was its straightforward and easily readable syntax, which facilitated learning and reduced program maintenance costs. Additionally, Python's support for modules and packages encouraged code modularity and reusability in software projects. That is why the researchers deemed it the proper programming language for OptiScan.

Table 2 outlined the system specifications required for running OpenCV, a

popular open-source computer vision library, on different operating systems. For

optimal performance, it was recommended to use Windows 10, macOS, or Linux as

the operating system. However, OpenCV may also run on older versions such as

Windows 7 or later, macOS, and Linux. In terms of processor specifications, a

minimum of 1.30 GHz was sufficient, but a recommended processor speed of 2.60

GHz or higher was ideal for smoother performance. Regarding disk space, a minimum
of 1 GB was required, but it was recommended to have at least 3 GB of available disk

space for storing image data and library files.

Additionally, having a RAM of at least 2 GB was the minimum requirement. Still, for

better performance, it was recommended to have 4 GB or higher RAM capacity to

handle complex image processing tasks efficiently. These specifications ensured that

OpenCV functioned optimally across different hardware configurations and operating

systems, enabling users to perform various computer vision tasks effectively.

One of the critical advantages of OpenCV was its open-source nature, which offered

several benefits. This open-source characteristic allowed developers to modify the

algorithms within the library to enhance its performance and capabilities. Furthermore,

OpenCV was distributed under a BSD license, simplifying its use and code

modification by companies (Asal, 2018). OpenCV was employed for image processing

tasks in the context of the mentioned application. This underscored its versatility and

effectiveness in various computer vision applications. This computer vision framework served as the main image-processing engine of the OptiScan software system.

In Table 3, TensorFlow system specifications are provided. The recommended operating systems include Ubuntu 16.04 or later, macOS 10.12.6 (Sierra) or later, and Windows 7 or later. Processors should support the x86 or x64 architecture, and 3 GB of disk space is recommended, with 1 GB being the minimum. These specifications ensure compatibility and optimal performance for TensorFlow on various platforms.

TensorFlow is a comprehensive open-source machine learning platform that offers a wide and flexible ecosystem of tools, libraries, and community resources. This ecosystem empowers researchers to build and deploy machine learning-powered applications efficiently and enables academics to push the boundaries of machine learning advancements (Google Open Source, 2022). In the specific case mentioned, TensorFlow is used to create a deep convolutional neural network as a fundamental component of the system to be developed. This exemplifies TensorFlow's capability to support the development of complex machine-learning models and applications. It is used to create the CNN model of the OptiScan device.
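As an illustration of building a CNN with TensorFlow, the following is a minimal Keras sketch for three-way eye-image classification; the architecture and input size are illustrative and do not represent the actual OptiScan model, which is YOLO-based.

    # Minimal sketch of a small Keras CNN for three-way classification.
    from tensorflow.keras import layers, models

    model = models.Sequential([
        layers.Input(shape=(224, 224, 3)),        # RGB eye image (assumed size)
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(3, activation="softmax"),    # cataract / normal / not normal
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()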

Table 4 outlined the essential system specifications necessary for running

Ultralytics YOLOv8 effectively. The recommended operating systems, including


Windows 10, macOS 10.15, Ubuntu 18.04, and Linux, were chosen to provide users

with a range of options based on their preferences and existing setups. On the other
hand, the minimum requirements, covering Windows 7 or later, macOS, and Linux,

ensured that the YOLOv8 model could still function adequately on older or less

advanced systems. These specifications were crucial for ensuring compatibility and

optimal performance, allowing users to leverage the capabilities of YOLOv8 efficiently.


The YOLO (You Only Look Once) CNN Algorithm is a significant advancement in computer vision technology, renowned for its speed and accuracy in detecting objects in images and videos. Unlike other methods that can identify only one object at a time, YOLO can detect multiple objects simultaneously, making it highly efficient. Its ability to work well in challenging scenarios makes it valuable for applications like self-driving cars and surveillance systems. Incorporating YOLO into the OptiScan project, which focuses on detecting cataracts in eye images, enhances the speed and accuracy of the system, making it more effective overall.
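A minimal sketch of running a YOLOv8 detector on a captured eye image with the Ultralytics API follows; the weights file name is a hypothetical stand-in for the trained OptiScan model.

    # Minimal sketch of YOLOv8 inference on a captured eye image.
    from ultralytics import YOLO

    model = YOLO("optiscan_yolov8.pt")          # hypothetical trained weights
    results = model("captured_eye.jpg")         # single forward pass

    for box in results[0].boxes:
        label = results[0].names[int(box.cls)]  # e.g. "cataract"
        print(f"{label}: confidence {float(box.conf):.2f}")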

Table 5 presented the system specifications required to run Tkinter smoothly.


Windows 10, macOS, and Linux were recommended for the operating system, while
the minimum requirement included Windows 7 or later, macOS, and Linux versions.

The processor should ideally have a clock speed of 2.60 GHz for optimal performance,

with a minimum requirement of 1.30 GHz. Regarding disk space, it was recommended

to have at least 3 GB available, although a minimum of 1 GB was sufficient. For

memory (RAM), a recommended minimum of 4 GB was advisable, while a minimum of

at least 2 GB was necessary for adequate functioning. These specifications ensured

that Tkinter operated efficiently across different platforms and hardware configurations.

Tkinter served as the platform for designing OptiScan's user interface, ensuring user-

friendly interactions. Compatible with Windows, macOS, and Linux, Tkinter ensured

accessibility across different devices. Its simplicity and versatility contributed to an

intuitive interface for eye specialists and patients alike.
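A single-window Tkinter layout in the spirit of the OptiScan interface could be sketched as follows; the widget names and layout are illustrative only.

    # Minimal sketch of a single-window Tkinter interface.
    import tkinter as tk

    root = tk.Tk()
    root.title("OptiScan")

    preview = tk.Label(root, text="[live eye preview]", width=50, height=15,
                       relief="sunken")
    preview.pack(side="left", padx=8, pady=8)

    stats = tk.Label(root, text="Optistats\nconfidence: --", justify="left")
    stats.pack(side="right", anchor="n", padx=8, pady=8)

    tk.Button(root, text="Capture", command=lambda: print("capture")).pack(pady=4)
    tk.Button(root, text="Print Results", command=lambda: print("print")).pack(pady=4)

    root.mainloop()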


Table 6 presented the system specifications required for Google Colaboratory, which

was compatible with different operating systems like Windows 10, macOS, and Linux,

allowing users to access it from various devices. It functioned efficiently with a

processor speed of 2.30 GHz, ensuring smooth performance. Additionally, it required a

minimum of 25 GB of disk space and at least 12 GB of RAM for optimal usage.

Google Colaboratory served as a tool for training the YOLO model, the core

component of OptiScan. This platform provided a powerful cloud-based computing

environment, eliminating the need for high-end hardware. With Colaboratory,

researchers could collaborate and improve cataract detection algorithms efficiently.
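Training the YOLO model in Colaboratory with the Ultralytics API could be sketched as follows; the dataset YAML file and hyperparameter values are assumptions for illustration.

    # Minimal sketch of training YOLOv8 in Google Colaboratory.
    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")         # start from pretrained nano weights
    model.train(
        data="cataract_data.yaml",     # hypothetical dataset configuration
        epochs=100,
        imgsz=640,
    )
    metrics = model.val()              # evaluate on the validation split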

Table 7 outlined the key specifications of the Raspberry Pi Model 4B Single-Board

Computer. It featured a powerful Broadcom BCM2711 processor, 8GB of memory, and


extensive connectivity options, including dual-band wireless LAN, Bluetooth 5.0, and

Gigabit Ethernet. The device also offered GPIO headers, micro HDMI ports for video

output, and a micro SD card slot for storage. Operating within a temperature range of

0–50ºC, it could be powered via USB-C or GPIO header with a minimum requirement

of 3A.

The Raspberry Pi 4 Model B, representing the latest iteration of the Raspberry Pi

series, boasted significant advancements in processing capabilities, multimedia

functionality, memory capacity, and connectivity features. Despite these notable


enhancements, it retained compatibility with prior models and upheld a consistent

power consumption profile. As the primary hardware component, the Raspberry Pi 4

served as the foundation for integrating other necessary elements in developing the

OptiScan system.

Table 8 presented the technical specifications of the Raspberry Pi Camera Module.

Compatible with all versions of Raspberry Pi, it featured a fisheye lens with a 5-

megapixel OV5647 sensor and a 1/4-inch CCD size. With an aperture of 2.35 and an

adjustable focal length, it provided a broad field of view with a diagonal angle of 160

degrees and a horizontal angle of 120 degrees. The compact module measured 25mm

x 24mm, making it suitable for various Raspberry Pi projects requiring camera

functionality. The Nikon D3200 camera was originally chosen, but it could not provide a real-time video feed, which OptiScan required for live alignment and inspection of the eye. The researchers therefore switched to the Raspberry Pi Camera Module, which supported real-time video and integrated better with OptiScan.

The Raspberry Pi HQ Camera is a high-quality imaging module in a compact form factor. With its 12.3-megapixel Sony IMX477 sensor, featuring a 7.9mm diagonal image size and a back-illuminated sensor architecture, it delivers detailed, low-noise images. It also offers adjustable back focus, providing precise control over focus, and supports flexible lens-mounting options. In OptiScan, it is used to capture high-quality images of the eye for processing.


Table 9 represented the technical specifications of the Touch-Enabled LCD Monitor.

With an assembly module size of 192.96mm x 110.76mm and a display size of 7

inches, it featured an 800 RGB x 480-pixel display format and an active area of

154.08mm x 85.92mm. Utilizing TFT LCD technology, the monitor included a multi-

touch capacitive touch panel capable of simultaneously detecting up to 10 touch

points. Equipped with an anti-glare surface treatment and RGB-stripe color

configuration, it ensured clear and vibrant visuals. The backlight type was LED, and

the monitor operated on a 5V/1.8A power supply, making it suitable for various

interactive display applications.

The 7-inch capacitive touchscreen offered effortless and intuitive interaction. Its ample size, vibrant colors, and accurate touch response made it well suited to the OptiScan interface, allowing eye specialists and patients to navigate the system directly on the device.

Table 10 showed the technical specifications of the Wantai Stepper Motor,

specifically the Nema 17 model 42BYGHW215-X with a 35mm motor length. This
stepper motor operated with a step angle of 1.8 degrees and featured two phases. It

had a rated voltage of 3.83 V and a rated current of 2A, with a phase resistance of

2.55 ohms and a phase inductance of 4.9 mH. The motor was equipped with No. 4

lead wires and weighed 0.35kg, making it suitable for various precision control

applications. The researchers switched to Wantai motors because the original motors could not handle the weight of the dual-axis platform. The Wantai motors provide greater torque and support the load reliably, ensuring that the dual-axis platform functions properly without issues.

A stepper motor is a precise type of electric motor that moves in fixed increments,

making it ideal for controlling a camera platform's x and y-axis movement. Mounted on

the platform, stepper motors received commands from a control system to move the

camera to specific positions. This precise control allowed for accurate positioning,

which was crucial in tasks like photography or surveillance.
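As a hedged illustration of this kind of command-driven positioning, the sketch below pulses the step and direction inputs of a stepper driver from Raspberry Pi GPIO in Python; the pin numbers, wiring, and timing are assumptions for illustration rather than OptiScan's actual control circuit, which routes joystick commands through the ATmega328p.

```python
# Hedged sketch of driving one axis through a step/direction stepper
# driver from Raspberry Pi GPIO. Pin numbers and timing are assumptions.
import time
import RPi.GPIO as GPIO

STEP_PIN, DIR_PIN = 20, 21  # assumed BCM pins wired to the driver

GPIO.setmode(GPIO.BCM)
GPIO.setup(STEP_PIN, GPIO.OUT)
GPIO.setup(DIR_PIN, GPIO.OUT)

def move_axis(steps, clockwise=True, step_delay=0.001):
    """Pulse the STEP pin once per motor step; set direction via DIR pin."""
    GPIO.output(DIR_PIN, GPIO.HIGH if clockwise else GPIO.LOW)
    for _ in range(abs(steps)):
        GPIO.output(STEP_PIN, GPIO.HIGH)
        time.sleep(step_delay)
        GPIO.output(STEP_PIN, GPIO.LOW)
        time.sleep(step_delay)

move_axis(200)   # one full revolution at 1.8 degrees/step, full stepping
GPIO.cleanup()
```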

Table 11 detailed the technical specifications of the Atmega328p microcontroller. It

featured an 8-bit AVR CPU with 28 pins and operated within a voltage range of 1.8V to

5.5V. With 23 programmable I/O lines and 6 PWM channels, it offered flexible

interfacing capabilities for diverse applications. The CPU throughput was 1 MIPS per MHz,

and it included 2KB of internal SRAM and 1KB of EEPROM, providing ample memory

for program data and configuration settings.

The Power Supply Technical Specifications, outlined in Table 12, detailed the

essential properties required for efficient operation. With an Input Voltage Range

spanning from 110V to 220V, it accommodated various power sources commonly

found worldwide. The Output Voltage was fixed at 12V, ensuring consistent power

delivery to connected devices. With a Wattage of 120 watts (equivalent to 10

amperes), this power supply unit could efficiently handle the demands of OptiScan's
components. Its Power Rating of 12V 120W solidified its capability to provide stable

and reliable power, essential for the optimal performance of the system.

The power supply served as the primary source of electrical power for the OptiScan

system. It took in electricity from a standard power outlet, converting it into a suitable

form for the various components within the system to operate. The Input Voltage

Range of 110V to 220V ensured compatibility with different electrical systems

worldwide. Once converted, the power supply delivered a consistent output voltage of

12V, which was essential for powering components such as the Raspberry Pi, motors,

and sensors. With a Wattage rating of 120 watts (equivalent to 10 amperes), the power

supply could meet the energy demands of the system without interruption or voltage

fluctuations, ensuring stable and reliable performance.

In Table 13, the Stepper Motor Driver was a crucial component of the OptiScan

system, responsible for controlling the movement and position of stepper motors. It

accepted an Input Current ranging from 0 to 5 amps, converting it into a precisely

regulated Output Current between 0.5 and 4.0 amps, suitable for driving stepper

motors effectively. The Control Signal operated within a range of 3.3 to 24 volts,

facilitating seamless communication with other system components. With a maximum

power output of 160 watts, it could efficiently handle the energy requirements of

stepper motor operations. The driver supported various Micro Step configurations,

including 1, 2/A, 2/B, 4, 8, 16, and 32, allowing for fine-grained control over motor

movements. Its compact dimensions of 96 x 56 x 33 mm (3.78 x 2.2 x 1.3 inches) and


lightweight design of 0.2 kg made it suitable for integration into the OptiScan system

without occupying excessive space or adding unnecessary weight. Additionally, its

drive IC, TB67S109AFTG, ensured reliable and precise motor control under diverse

operating conditions, with a temperature range of -10 to 45℃ and resistance to

humidity to prevent condensation.

Table 13 showed the Stepper Motor Driver, which played a vital role in the OptiScan

system, particularly in controlling the movement of stepper motors for the dual-axis

platform. Stepper motors are precise and reliable electromechanical devices used to

achieve precise positioning and control in various applications. In the context of

OptiScan, stepper motors were employed to manipulate the position of optical

components, such as lenses or mirrors, along two axes (X and Y) to scan and capture

images of the eye.

The Stepper Motor Driver served as the intermediary between the control system and

the stepper motors, converting control signals from the system into precise movements

of the motors. It regulated the amount of current supplied to the motors, ensuring they

moved with the required speed, accuracy, and torque. Additionally, the driver enabled

the system to implement microstepping, which divided each full step of the motor into

smaller increments, allowing for smoother motion and finer control over positioning.
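As a concrete illustration, assuming the Nema 17's 1.8-degree step angle from Table 10 and the driver's finest 1/32 microstep setting from Table 13, each microstep corresponds to 1.8° / 32 = 0.05625°, or 200 x 32 = 6,400 microsteps per full revolution.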

By accurately controlling the stepper motors through the Stepper Motor Driver, the

OptiScan system could precisely position optical components to capture high-


resolution images of the eye. This capability was essential for conducting detailed

examinations and diagnostics, as it enabled the system to scan different areas of the

eye with precision and consistency, facilitating accurate analysis and detection of eye

conditions.

The Joystick, outlined in Table 14, operated at a supply voltage of 5 volts (V) and featured an internal potentiometer value of 10k. It utilized 2.54mm pin interface leads for connectivity and was designed to withstand operating temperatures ranging from 0 to 70 degrees Celsius. With dimensions of approximately 1.57 inches by 1.02 inches by 1.26 inches (4.0 cm x 2.6 cm x 3.2 cm), this joystick provided precise control and reliable performance for various applications.

The joystick served as a control mechanism for managing the movement of the dual-

axis platform within the OptiScan device. By manipulating the joystick, users could

effectively navigate and adjust the positioning of the platform along both the horizontal

and vertical axes. This functionality enabled precise control over the scanning and

positioning of the optical components, facilitating accurate and efficient operation of the

OptiScan system.

The lamp outlined in Table 15 used LED lights and could be powered through a USB

port. It worked with voltages ranging from 5 to 12 volts and provided cool-colored light.

In addition, its flexible body allowed users to adjust it easily.


The lamp functioned similarly to the light source used by an eye examiner during eye

examinations. It provided illumination to help capture clear images of the eye for

examination and analysis, enhancing visibility and ensuring accurate results in the

OptiScan device.

Table 16 depicted the Canon Pixma TS207 printer, a printing solution compatible with Windows 10, 8.1, and 7 SP1, as well as macOS 10.12 or later systems. With power requirements of AC 100 - 240 V, 50/60 Hz, it offers flexibility for various environments. Equipped with a total of 1,280 nozzles, it achieves a maximum print resolution of 4,800 x 1,200 dpi, ensuring sharp and detailed prints. The recommended printing area includes top and bottom margins of 31.6 mm and 29.2 mm, respectively, allowing for precise placement and alignment of printed content.

This printer is essential for producing hard copies of the results obtained from

the OptiScan device. It ensures that the findings are available in a tangible format for

further analysis, sharing with patients, or record-keeping purposes. With its

compatibility with various operating systems and high-resolution printing capabilities, it

enables clear and detailed output of the detected eye conditions and other relevant

information.

The SanDisk Storage outlined in Table 17, with a capacity of 64 GB, offers

reliable data storage for various applications. It boasts a high read speed of 200 MB/s,

ensuring swift access to stored data for seamless operations. The write speed ranges
from a maximum of 90 MB/s to a minimum of 30 MB/s, accommodating different data

writing needs with efficiency. Suitable for recording Full HD, 3D, and 4K content, this

storage solution provides ample space and performance for capturing high-quality

multimedia content.
Using a 64GB micro SD card for the operating system and processing on a Raspberry Pi 4 is a practical choice. This ample storage capacity allows for storing the operating system, various software packages, and data generated during processing tasks. With sufficient storage space, the system can accommodate updates, installations, and data manipulation without running into storage constraints. Ensuring the SD card's compatibility and performance is important to optimize the Raspberry Pi's functionality and efficiency.

Table 18 provides a detailed list of materials utilized in the development of OptiScan. These materials are essential components that contribute to the functionality and construction of the device. This list provides stakeholders with detailed information about the materials needed to construct OptiScan.

The testing and evaluation of each aspect of the OptiScan system involved unit testing,

integration testing, and system testing. Testing and evaluation were critical

components of system development, ensuring that systems met specified

requirements, functioned correctly, and delivered value to users. Different phases of

testing were employed throughout the system development lifecycle to validate

different aspects of the system, including the following:


Unit Testing. The focus was on testing individual units of code, such as functions,

methods, or classes in terms of software, as well as individual hardware components


in isolation from the rest of the system. Each unit was tested to ensure that it behaved
as expected, accurately performing its intended functionality. This included testing

different input conditions, edge cases, and error-handling scenarios to verify the

correctness and robustness of the unit. Unit testing involved taking the developed software and manually testing each part of it, covering different aspects of its behavior and functionality. As for the hardware, individual components were tested to ensure each functioned correctly and within its required specifications.

The unit tests involved each component of the system, specifically the hardware and

software components specified in Table 30.

Integration Testing. This phase focused on verifying the interactions and interfaces

between integrated components or modules of the software and hardware system. It

ensured that integrated modules communicated correctly, exchanged data accurately,

and functioned cohesively as part of the larger system. Integration tests covered

integration points, function calls, data exchanges, and communication protocols to

detect defects in the interactions between components. The system was tested using

the top-down approach where testing began from the top-level modules or subsystems

and gradually progressed towards lower-level modules or units. In this approach,

higher-level modules were tested first, and then progressively deeper levels of the
system were integrated and tested until the entire system was tested as a whole.
Specifically, testing started at the power supply module, followed by the booting of the system application on the Raspberry Pi, which was integrated with hardware functions such as capturing and printing through the camera and printer. The next module tested was the dual-axis platform, which was driven by digital signals from the GPIO pins, and then each individual component followed.

System Testing. This phase involved validating the entire software system as a whole,
covering all functional and nonfunctional requirements. The focus was on testing the

system from an end-user perspective, ensuring that it met specified requirements and

functioned correctly in its intended environment. System tests covered all aspects of

system behavior, including user interactions, workflows, and system performance. The

researchers’ approach was by end-to-end testing and examination, which involved

executing test cases that simulated real-world user scenarios and interactions with the
system. Tests covered all functional requirements, verifying that the system behaved

as expected and performed the intended operations. Non-functional requirements,

such as accuracy, functionality, and user-friendliness, were also evaluated during

system testing. The system test was based on several ISO/IEC standards presented in

Tables 19, 20, 21, and 22 in the form of a Likert scale. The evaluation was divided into

two tests: those intended for use by professionals related to the development part of

the project, such as IT and engineers, and the quality-in-use test, which was intended

for use by professionals directly involved in testing the system in real-world usage.
The researchers utilized the following statistical methods and tools to evaluate the data

collected during the testing and survey in relation to the development of the system.

4-Point Likert Scale. It was utilized by the researchers to gather data for determining the level of agreement in the pre- and post-surveys, as well as for gathering insights on variables such as functionality, accuracy, and user-friendliness. A 4-point Likert scale was specifically used to omit the neutral response present in the 5-point Likert scale, keeping responses free of confusion and, in the process, reducing response bias.

Confidence Computation. The researchers used the YOLOv8 (You Only Look Once) algorithm for detection. In this algorithm, the confidence is represented as the probability that a detected bounding box contains an object of interest. The confidence score is calculated using a combination of the objectness score (the probability that the bounding box contains an object) and the class probability scores (the probabilities of different classes for the detected object).

The formula used in calculating the confidence is as follows:
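Confidence = P(object) x P(class | object)

where P(object) is the objectness score for the bounding box and P(class | object) is the probability of the predicted class given that an object is present; this form is consistent with the description above.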

Weighted mean. It was used to verify the weighted mean values of the level of agreement for the pre-survey and post-survey, as well as the accuracy, functionality, and user-friendliness of the developed system, all of which used 4-point Likert scales. The weighted mean was also used to describe the results of the initial and final tests in identifying the correctness and confidence of each detection classification. Below is the weighted mean formula used:

N = Total number of tests

i = instance of the testing set

j = index of the testing set

Calculating the weighted mean of the confidence scores for instance i as:
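Weighted Mean_i = ( Σ from j = 1 to N of c_(i,j) ) / N

where c_(i,j) is the confidence score recorded on test j for instance i; this form is consistent with the definitions above.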

Composite mean. It was used to calculate and verify the overall mean values of the rates of the levels of agreement, the Likert-scale rates of the nonfunctional aspects of the system, and the overall detection correctness, confidence, and detection classification as a whole.

M = Total number of averaged scores

i = index of the testing set
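Composite Mean = ( Σ from i = 1 to M of Weighted Mean_i ) / M

which averages the M weighted mean scores into a single overall value, consistent with the definitions above.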


The Likert scale, introduced by psychologist Rensis Likert in 1932, is widely used to

gauge opinions for evaluation in research. Traditionally, it presents respondents with a

range of options, such as "Strongly Disagree" to "Strongly Agree". Variations of the

Likert scale have emerged to suit different research contexts. One such variation is

the 4-point Likert scale, which omits the neutral option. This scale offered simplicity

and efficiency, making it advantageous for surveys with limited space or to reduce

respondent confusion. Additionally, the 4-point scale may elicit more definitive

responses from participants, as it lacks a neutral option (Fowler Jr., 2013).

These qualities made the 4-point Likert scale a practical choice for research studies; the researchers specifically applied it to the OptiScan study, which required firm analysis and evaluation of the different aspects of the data and of the paper as a whole.

Table 19 presented the 4-point Likert scale for the users' level of agreement on various questions about the development of the OptiScan Cataract Detection System. Each assigned point covered a scale range of 0.75 between 1.00 and 4.00, equivalent to a 25% band per point in percentage terms. Assigned point 4 of the scale was verbally interpreted as highly acceptable, followed by assigned point 3 as acceptable, 2 as slightly acceptable, and 1 as unacceptable. The Likert scale of level of agreement paired each category with a verbal interpretation, as it served as the basis for other evaluation metrics and was based on the definitions of ISO/IEC 25062:2006.
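As a hedged sketch, the 0.75-wide bands described above can be expressed as a simple lookup; the exact boundary handling below is an assumption for illustration.

```python
# Hedged sketch: map a 4-point weighted mean to its verbal interpretation
# using the 0.75 scale-range gap described above. Boundary handling
# (which band the exact cut points fall into) is an assumption.
def interpret_agreement(weighted_mean: float) -> str:
    if weighted_mean >= 3.25:
        return "Highly Acceptable"
    if weighted_mean >= 2.50:
        return "Acceptable"
    if weighted_mean >= 1.75:
        return "Slightly Acceptable"
    return "Unacceptable"

print(interpret_agreement(3.40))  # -> Highly Acceptable
```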


Table 20 depicted the 4-point Likert scale for the accuracy assessment of the users of

the developed OptiScan Cataract Detection System. Assigned point 4 of the scale was

verbally interpreted as very accurate, followed by assigned point 3 as accurate, 2 as

less accurate, and 1 as very inaccurate.

Table 21 displayed the 4-point Likert scale for the level of functionality of the

developed OptiScan Detection System in detecting, assessing, and identifying the

normal eyes and cataract as well as evaluating its overall functionality. Assigned point

4 of the scale was verbally interpreted as highly functional, followed by assigned point

3 as functional, 2 as less functional, and 1 as not functional.

Table 22 presented the 4-point Likert scale for the level of efficiency of the developed OptiScan Detection System, specifically in providing access to the results efficiently. Assigned point 4 of the scale was verbally interpreted as highly efficient, followed by assigned point 3 as efficient, 2 as less efficient, and 1 as inefficient.

Table 23 presented a 4-point Likert scale designed to assess the reliability of

the developed OptiScan Detection System. In this scale, point 4 was labeled as highly

reliable, point 3 as reliable, point 2 as less reliable, and point 1 as unreliable. This

scale was specifically utilized during the beta testing phase by Technical/Technology
Professionals to evaluate the overall performance of the system. It aimed to gather

feedback on the system's reliability, providing valuable insights for further refinement

and improvement.
Tables 24, 25, and 26 presented the System testing form used during the Beta testing

phase by Technical/Technology Professionals. This form encompassed various

aspects of system evaluation, including functionality, efficiency, and reliability. It


allowed professionals to assess the system's performance, identify any potential

issues or areas for improvement, and provide valuable feedback for refinement.

Table 24 showed the Integration Tests done during the Beta Testing phase for

Technical/Technology Professionals. It focused on System Functions. This table gave

a complete picture of the testing process, explaining how professionals in the field

evaluated different functions of the system. Each entry in the table described particular

system functions and the corresponding test metrics, which helped professionals

check how well the system worked. By doing this thorough evaluation, professionals

could find any problems or ways to make the system better, making sure it met the

needed standards and user expectations.

During the Beta testing phase, Technical/Technology Professionals utilized the system

test form depicted in Table 25 to assess the time efficiency of the system. This form

offered a structured format for professionals to evaluate and provide feedback on

various aspects related to time efficiency. By assessing different test metrics,

professionals ensured that the system met the necessary standards and user

expectations before its final deployment.


During the Beta testing phase, Technical/Technology Professionals utilized Table 26 to

conduct Integration Tests focusing on Reliability. This table served as a structured

format for evaluating the reliability of various Systems and Subsystems after 1 hour of

continuous usage. It outlined specific Systems and Subsystems along with

corresponding Test Metrics, providing a standardized approach to assess the system's

reliability under prolonged operational conditions.

The successful implementation of the OptiScan system relied on the selection of the

right tools for the overall assessment. In this section, the survey questions for system

evaluation were outlined. The questions were based on the 4-point Likert scales

defined on Tables 19, 20, 21, and 22.

The system evaluation, developed by referencing "Understanding ISO/IEC 25010 Components," included assessments of both Product Quality and Quality in Use Characteristics. This evaluation framework allowed for a comprehensive examination of the system's performance and user satisfaction.

Quality in use shifted the attention from the inherent attributes of a system to its

performance and satisfaction in real-world usage contexts. This evaluation in Table 27,

vital for IT professionals and engineers assessing system quality, emphasized factors

like user satisfaction, productivity, efficiency, and safety during interactions with the

system in its intended environment. Unlike product quality, which relied on predefined
criteria, quality in use was subjective and context-dependent. It necessitated gathering
feedback from users and stakeholders to comprehend their experiences, preferences,

and requirements.

In Table 28, the System Quality Evaluation for Ophthalmologists, Health Care Clinic
Staff, and Other Medical Field Officers assessed the quality in use of the OptiScan

system. Quality in use referred to how effectively and satisfactorily users interacted

with the system during their tasks. It assessed factors such as user satisfaction,

productivity, efficiency, and safety, reflecting the real-world usability and effectiveness

of the system in meeting users' needs.

To implement the OptiScan system, the project was evaluated by IT

professionals, engineers, and an ophthalmologist. Following the evaluation was the

deployment to the community for patient use with a major focus on maintaining the

system functionalities and detection accuracy. To ensure that the OptiScan device was

effectively utilized in the community, the researchers provided comprehensive

guidance, which included outlining a detailed written manual about the device's

operations, workflow, and maintenance procedures. Additionally, simplified instructions

were prominently displayed near the control system for quick reference during use.

Recognizing that hands-on demonstrations were often effective too, the researchers

conducted interactive sessions with the staff at the Health Care Clinic on how to use

the OptiScan device. During the sessions, the researchers demonstrated how to

operate the device, addressed any questions or concerns, and provided practical tips

for maintenance, care, and use cases of the device. This proactive approach aimed to
empower users with the knowledge and skills needed to utilize the OptiScan device

optimally and ensure its continued functionality within the community healthcare

setting.

The OptiScan system, as outlined in Table 29, offered detection for Normal Eye,

Cataract Eye, and Abnormal/Non-Cataract Eye, providing versatile screening

capabilities. Powered by a DC power supply with a wide voltage input range, it

included key indicators for status feedback and ran on Raspbian - Raspberry Pi OS

with a screen resolution of 1024x768. The exterior featured a PVC Sintra Board as the

case and wood as the system's base, ensuring protection from outside interferences

and hazards that the system components might present to the user.

The OptiScan device provided a simple and user-friendly experience for performing

eye examinations. Figure 7 showed the basic usage guide for the device. Users began

by following the specified power sequences to ensure proper startup. Once powered

on, users could explore various preview modes to enhance the inspection process and

adjust settings accordingly. Preview guides were also placed in the instructions for

guided alignment of the camera position, ensuring comprehensive coverage of the

eye. Then, the initiation of the eye inspection process allowed for the analysis of

captured images in detail. Lastly, the printing of examination results was facilitated for

reference and aiding in determining any necessary follow-up actions.


RESULTS AND DISCUSSIONS

This study presented the most important numerical data and evaluation results,

followed by a comprehensive discussion of the significance and implications. The

section included all the assessments and data gathered, examined, and processed for
interpretation. Charts, tables, and figures were used to present the results
comprehensively. The primary objective of this section was to identify the data
obtained from the actual testing for the overall analysis of the study. This was to verify

several factors specified in the study’s methodology.

This subsection presents the results of the digital evaluation and analysis of the trained model using the test images split from the dataset used for training (YOLOv8 Training Results and Statistics).

Figure 8 showed how well the YOLOV8 classification model learned during training

and validation and how it performed on new data in a graph called an F1 curve. It
indicated that the model's best performance, with an F1 score of 0.90, occurred when
the classification decision threshold was set to 0.344. This meant that the model was

most accurate when it was confident about its predictions but not overly cautious.
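For reference, the F1 score reported on the curve is the harmonic mean of precision and recall:

F1 = 2 x (Precision x Recall) / (Precision + Recall)

so the peak value of 0.90 at the 0.344 threshold marks the confidence setting at which the model best balanced false positives against missed detections.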

Figure 9 displayed several training metrics of the YOLOV8 classification model,


including train/box loss representing the average box loss or error in predicting
bounding boxes on the training dataset, as well as how well the model learned to

predict the locations of objects during training. The train/class loss represented the

average class loss or error in predicting classes on the training dataset, indicating how
well the model learned to classify objects during training. The val/box loss represented
the average box loss on the validation dataset, indicating how well the model

generalized to unseen data in terms of predicting bounding boxes. Finally, val/cls loss
represented the average class loss on the validation dataset, showing how well the

model generalized to unseen data in terms of classifying objects. All metrics showed a
steady decrease in loss over time during the training and validation of the YOLOV8
model, indicating that the model steadily improved its ability to predict bounding boxes

and classify eye images.

Figure 10 depicted the confusion matrix resulting from the training and

evaluation of the YOLOV8 Cataract Detection Model. The matrix consisted of the Y-

axis as the "Predicted" values from the model evaluation, while the X-axis represented

the True or Expected value. In detecting cataracts, out of 73 tested, 3 were incorrectly

detected as non-cataract or not normal, while 70 or 96% of the total values were

detected correctly. On the "not normal or non-cataract" detection, 3 out of 39 were

detected incorrectly, while 36 or 92% were correct. For the "normal" class, 12 out of 54

were incorrectly detected, while 42 or 78% were correct. Lastly, the background class

did not have any detections, as the system or model was not trained to determine a

background image due to the nature of the system setup. Overall, the detection results

from the YOLOv8 Cataract Detection Model, as depicted in Figure 10, demonstrated

accuracy and functionality. While not reaching 100% accuracy due to factors such as

lighting, eye size, or shape variations, the model performed effectively, detecting

cataracts with high accuracy. The model's probabilistic detection method, facilitated by

YOLOv8 and represented with bounding boxes, ensured comprehensive analysis of

eye images.
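The per-class rates quoted above can be recomputed directly from the reported counts; the short sketch below does so (the class keys are shorthand labels, not the model's actual label names).

```python
# Recompute the per-class correct-detection rates reported for Figure 10.
# Counts come straight from the text; class keys are shorthand labels.
counts = {
    "cataract":   {"correct": 70, "total": 73},
    "not_normal": {"correct": 36, "total": 39},
    "normal":     {"correct": 42, "total": 54},
}

for label, c in counts.items():
    rate = 100 * c["correct"] / c["total"]
    print(f"{label}: {rate:.0f}% detected correctly")  # ~96%, ~92%, ~78%
```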

Table 30 detailed the testing and discussion of initial results derived from the hardware

and software system unit testing. The test comprised expected output based on the
OptiScan system's software and hardware requirements, alongside actual unit test

results. Interpretation of the results relied on the deviation of actual results from each

hardware's minimum expected operational output value. This demonstrated that all

hardware and software components functioned as expected based on the test

description. As a result, both the hardware and software components operated

effectively, meeting their respective performance criteria. The physical components,

such as the single-board computer, camera module, and Nema 17 stepper motors,
fulfilled their tasks without issues. The Nema 17 stepper motor was chosen for its

reliability and performance after the initial stepper motor (Nema 24) used only for trial

proved insufficient. Similarly, the software components, including the deep

convolutional neural network algorithm and Python application, accurately detected

cataracts and performed as intended.

For Tables 31, 32, and 33, the confidence level did not reach 100% due to several

factors. Firstly, the detection process relied on probabilities and bounding boxes,

where the YOLOv8 model assigned a certainty level to each detection based on

specific parameters. Additionally, variations in eye characteristics such as shape and

color contributed to the variability in confidence levels. Furthermore, environmental

factors like lighting conditions could also affect the accuracy of the detection process

by influencing how light is reflected on the eye.

Table 31 illustrated a high level of accuracy in the detection process, with a weighted

mean of 90.24%. This figure indicated that all of the trials examined were identified correctly, totaling 10 successful detections. This level of accuracy

suggested that the system's algorithms and methodologies were adept at

distinguishing the intended outcomes, minimizing the likelihood of false identifications.


Therefore, this enhanced confidence in the system's performance and its suitability for

real-world applications where precision was important. Moreover, the fact that there

were 10 instances labeled as "very accurate" in interpretation further underscored the

system's stability. This designation confirmed not only the high level of accuracy in

individual detections but also the overall reliability and effectiveness of the system in

consistently delivering precise results.

Table 32 showcased a notable level of accuracy in the detection process, yielding a

weighted mean of 78.10%. This indicated that the majority of trials resulted in correct

identifications, totaling 10 instances of "correct detection" in the correctness column.

Furthermore, in the interpretation column, 7 instances were designated as "very

accurate" and 3 instances as "accurate". This breakdown underscored the system's

capacity to not only identify targets correctly but also to do so with a high degree of

precision and reliability.

Table 33 presented a detection performance with a weighted mean of 52.41%,

accompanied by a verbal interpretation of "accurate". However, despite the presence

of a confidence level for incorrect detections, such cases were considered invalid

because the detection itself was incorrect. This indicated a moderate level of accuracy
in identifying targets. Specifically, there were 7 instances of "correct detection" and 3

instances of "incorrect detection" in the correctness column.

In the interpretation column, the breakdown revealed 5 instances labeled as "very

accurate," 1 instance as "accurate," 1 instance as "inaccurate," and 3 instances as

"very inaccurate". This distribution reflected varying degrees of precision and reliability

in the detection process. Despite the moderate overall accuracy, the presence of 5

instances labeled as "very accurate" suggested that certain detections were

particularly precise and reliable. However, the presence of instances labeled as

"inaccurate" and "very inaccurate" underscored areas where improvement may be

needed to enhance the system's performance and reliability.

The composite mean in Table 34 showed that the accuracy rate for identifying

"Cataract," "Normal," and "Not Normal" eye conditions was calculated at 73.58%. This

consolidated accuracy reflected the system's performance in correctly categorizing

various eye conditions, encompassing both normal and abnormal states. Despite

variations in accuracy rates for individual categories, the composite mean

comprehensively assessed the system's overall accuracy in eye condition

identification.

Tables 35, 36, and 37 showed the alpha test integration test results. The tables detailed

the comprehensive evaluation of the OptiScan system and its individual subsystems

conducted by the researchers, examining their operational functionality and accuracy.


Each test assessed specific features, covering routine eye examination procedures,

power distribution, software functionality, camera alignment, and result printing.

In Table 35, the integration testing outcomes for system functionalities were presented,

indicating that all tested Systems and Subsystems had successfully achieved their

intended functionalities according to the provided Test Description and Minimum

Expected Output Values of Test Steps. As per the results of the integration tests,

100% or all components were very functional and successful in the integration of the

OptiScan system. This suggested that the various components of the system were

functioning as intended, signifying their strong performance in fulfilling expected

designated functions. In essence, the results signified that the system functionalities

were not only operational but also delivering the expected outputs as specified during

the testing process.

In Table 36, the results of system testing for time efficiency were presented, detailing

the researchers' findings during the testing phase. It provided a comprehensive

breakdown of testing metrics and corresponding results for various Systems and

Subsystems within the OptiScan system. The test results indicated that each

subsystem completed its tasks in less than a minute, showcasing efficiency in

conducting diagnostic tasks. For example, the OptiScan System efficiently completed a

basic eye examination procedure for both eyes of a patient in approximately 1 minute
and 45 seconds. These findings offered valuable insights into the performance and
efficiency of each subsystem, aiding in the optimization and improvement of the

OptiScan System.

Table 37 showcased the results of Integration Tests regarding their Reliability,

indicating that 100% or all of the Systems and Subsystems were functional during

testing based on the test metric for continuous usage of 8 hours. This highlighted that

the OptiScan system consistently demonstrated dependability in its usage, ensuring

reliable performance across its various components and subsystems.

Tables 38, 39, 40, and 41 provided the results of Beta test integration tests, offering a

detailed evaluation of the OptiScan system and its individual subsystems by

Technical/Technology Professionals. These tables thoroughly explored the operational

functionality, accuracy, and reliability of the system components. Each test examined

distinct features, covering routine eye examination procedures, power distribution,

software functionality, camera alignment, and result printing. Through these

comprehensive evaluations, professionals assessed the system's performance across

various functionalities, ensuring its effectiveness and reliability in real-world

applications.

Table 38 illustrated that all tested Systems and Subsystems had effectively met

their intended functionalities as per the provided Test Description and Minimum

Expected Output Values of Test Steps. The integration tests produced a weighted
mean of 93.57%, denoted as "Highly Functional," indicating successful performance

and integration of all components into the OptiScan system.

Moving on to Table 39, the results demonstrated a high level of efficiency across

various Systems and Subsystems within the OptiScan, with a weighted mean of

94.38%, interpreted as "Highly Efficient." This indicated streamlined and effective

completion of the entire detection process, ensuring optimal performance.

Table 40 further elaborated on the efficiency of each subsystem within the OptiScan

system, with all components successfully completing their tasks in under one minute.

From Power Distribution to Detection Subsystems, each element achieved efficient

results, contributing to an average time of 64.75 seconds for a basic eye examination

procedure. These findings highlighted the system's effectiveness in swiftly conducting

essential functions, thus enhancing the overall efficiency of the eye examination

process.

Table 41 presented the results of Integration Tests focusing on Reliability, indicating

that all Systems and Subsystems remained fully functional during continuous usage

testing for 1 hour, as evaluated by Technical/Technology Professionals during Beta

Testing. The tests resulted in a weighted mean of 96.43%, which was verbally

interpreted as "Highly Reliable." This high reliability score indicated that the OptiScan

system consistently delivered dependable performance across its various components

and subsystems, ensuring that it met strict standards for sustained operation. This
strong reliability demonstrated the system's capability to function effectively under

extended use, providing confidence in its practical application and overall durability.

Table 42 presented the results of the system testing phase, specifically focusing on the

classification accuracy for cataract cases. The weighted mean accuracy stood at

69.50%, indicating a significant level of precision in identifying cataracts. Additionally,

the verbal interpretation labeled this accuracy level as "Accurate," further confirming

the system's accuracy. All 10 out of 10 patients with cataracts were detected correctly

during testing, highlighting the reliability and strength of the developed system in

diagnosing this condition accurately.

Table 43 presented the classification results for Normal Eye during system

testing. The weighted mean accuracy was documented at 71.19%, indicating a high

level of accuracy, with a verbal interpretation of "Accurate". Furthermore, 7 out of 7

patients were correctly detected, emphasizing the system's accuracy in identifying

normal eye conditions reliably.

Table 44 illustrated the classification outcomes for Abnormal Eye during system

testing. With a weighted mean accuracy of 68.88%, the results indicated a significant

level of accuracy, corresponding to the verbal interpretation "Accurate". Additionally, 5

out of 5 patients were accurately detected, underscoring the system's capability to

identify abnormal eye conditions effectively.


Table 45 presented the composite mean for System Testing, which highlighted
the overall accuracy rate for identifying different eye conditions. With a weighted mean

of 69.86% and a verbal interpretation of "Accurate," this consolidated metric reflected

the system's performance in detecting and categorizing eye conditions, including

"Cataract," "Normal," and "Not Normal". Despite variations in accuracy rates for

specific categories, the composite mean provided a comprehensive assessment of the

system's overall accuracy in identifying various eye conditions.

Table 46 showed the results of the System Quality Evaluation conducted among

Technology/Technical Field Professionals and the System Quality in Use Evaluation

for Medical Field Professionals. A weighted mean of 92.52% was observed across the general and sub-characteristics and the equivalent features of the developed system.

These features encompassed various aspects such as Camera Control and

Positioning, Optiscan User Interface, Eye Detection and Classification Model, Printing

of Results, Eye Images, Examination Results, Device’s Interiors and Exterior, System

Generated Virtual Results, and System Generated Printed Results. The evaluation

indicated that the system was considered "Highly Acceptable" across different
characteristics, including Maintainability, Security, Reliability, Functional Suitability,

Performance Efficiency, Compatibility, and Usability. Similarly, it received high ratings


for System Quality in Use, specifically in terms of Effectiveness, Efficiency,

Satisfaction, System Coverage, and Usability, as evidenced by the computed weighted

mean.
Based on Table 47, which gathered the evaluation results from various Technology/Technical Field Professionals, including IT professionals, Computer Engineers, and Electronics Engineers, the composite mean of 90.61% signified a high level of acceptance for the

developed system. This indicated that the system's functionality and performance

across general and sub-characteristics, as well as equivalent features, were highly

regarded by the evaluators. With a consistent "Highly Acceptable" interpretation across

different groups of professionals, it was concluded that the system had garnered

positive feedback and approval from experts in the field.

Table 48 presented the assessment outcomes from medical field professionals, including ophthalmologists, health care clinic staff, and other medical field officers, demonstrating a composite mean of 92.02%. This indicated a strong level of acceptance for the developed system across various general and sub-characteristics, as well as equivalent features. The consistent rating of "Highly Acceptable" across different groups of medical experts underscored the positive feedback and approval of the system within healthcare settings.

Table 49 showed the combined evaluation results from both Technology/Technical and Medical Field Professionals, which demonstrated a composite mean of 91.54%. This highlighted a strong level of acceptance for the developed system across various aspects, including general characteristics, sub-characteristics, and equivalent features. The consistent rating of "Highly Acceptable" emphasized the positive reception and support of the system by professionals from different backgrounds.


CONCLUSIONS

The researchers developed Optiscan, a Single-Board Computer-Based Cataract

Detection Device, integrating the YOLOv8 Deep Convolutional Neural Network

(DCNN) framework. The resulting detection model was embedded within a Raspbian-based desktop application, employing Python as the primary programming language.

application utilized frameworks such as Tkinter for the user interface of the OptiScan

system and mainly employed essential libraries like Pandas, PIL, and OpenCV for

image processing, alongside a printing functionality.

Moreover, a dual-axis camera positioning platform, utilizing a custom-developed dual-

ATmega circuit, enhanced the device's imaging as well as in-examination procedure

capabilities. The development process and the final features of the device aligned
closely with the study's defined objectives. These included the creation of a device

capable of precise eye image capture, offering standardized initial cataract diagnosis

through a CNN-based machine learning model, and providing examination results

through printing. These features aimed to comprehensively present eye diagnosis in

terms of detecting cataracts and facilitate patient education regarding the diagnosis.

Following a comprehensive analysis, interpretation, and discussion of the results,

several conclusions were drawn from this study.

The developed cataract detection system exhibited a weighted mean accuracy of

69.86% on actual use, classified as "Accurate" based on the 4-point Likert scale.

Moreover, the composite mean accuracy of the overall classification model,

encompassing both usage trials and actual patient testing, stood at 71.72%, also
classified as "Accurate". These findings underscored the effectiveness and reliability of

the developed system in facilitating cataract detection, demonstrating a potential for

basic health center use and initial cataract diagnostic applications as well as gaining

attention and high evaluation from several professionals in the field.

Based on the comprehensive testing of the developed system encompassing unit

tests, integration tests, and system tests, the developed system demonstrated levels of

reliability. Evaluation scores provided by professionals across various disciplines,

including IT professionals, computer and electronics engineers, ophthalmologists,

general healthcare officers, and medical technologists, yielded a composite mean of


91.54%. This score was classified as "highly acceptable" based on the Likert scale of

acceptability, affirming the system's high acceptability, functionality, and effectiveness

in meeting the expectations of diverse stakeholders. Also, to further enhance the

device's utility and address broader healthcare needs, a few of these professionals in

the field recommended that future developments expand its scope beyond cataract

detection alone. Incorporating capabilities to detect additional eye diseases would

significantly increase its clinical impact. For evaluators, notably from the community

where the device was deployed, a high level of agreement and acceptance toward the

developed system was shown as well as their regular use of the device during a week

of usage trials within the community. This exhibits the device's impact with the target

demographic and highlights its potential for seamless integration into local healthcare

practices, specifically for assisting in initial cataract diagnosis.

After the comprehensive development, evaluation, and implementation of the Optiscan

device, the project made significant marks in addressing the stated problems and

objectives of this study.

The OptiScan device represented an advancement in cataract detection and in the

field of embedded systems particularly in medical applications in underserved and

remote areas lacking access to specialized eye care facilities. Through the developed

hardware device capable of capturing eye images and integrating Convolutional Neural

Networks (CNNs) for automated cataract identification, the project fulfilled its primary

objective of enabling primary healthcare providers to conduct initial cataract


screenings. Moreover, by incorporating a printing function, the device ensured ease of

access to examination results, facilitating comprehensive patient education and

enhancing the overall diagnostic process. Healthcare professionals, including those in

the Health Care Clinic at Barangay Concepcion, Lubao, Pampanga, gained access to

an initial diagnostic tool that enhanced patient care and treatment recommendations.

The underserved community gained access to initial cataract diagnosis and early

intervention.
REFERENCES

B & H Foto & Electronics Corp. (2024). SanDisk 64GB Extreme PRO UHS-I SDXC Memory Card. Retrieved from https://www.bhphotovideo.com/c/product/1692701-REG/sandisk_sdsdxxu_064g_ancin_64gb_extreme_pro_uhs_i.html

Canon. Inkjet PIXMA TS207. Retrieved from https://ph.canon/en/business/pixma-ts207/specification?category=printing&subCategory=pixma

Codacy Quality (2021). An Exploration of the ISO/IEC 25010 Software Quality Model. Retrieved from https://blog.codacy.com/iso-25010-software-quality-model#ISO25010inPractice%C2%A0

Components101 (2018). Joystick Module. Retrieved from https://components101.com/modules/joystick-module

DFRobot Support. TB6600 Stepper Motor Driver. Retrieved from https://www.dfrobot.com/product-1547.html

Emma Nash et al. (2013). Cataracts. Retrieved from https://journals.sagepub.com/doi/10.1177/1755738013477547

Hakan Asal (2018). What is OpenCV?. Retrieved from https://medium.com/@hakanasal51/what-is-opencv-fc73e4695625

Hongpeng Sun et al. (2014). Age-Related Cataract, Cataract Surgery and Subsequent Mortality: A Systematic Review and Meta-Analysis. Retrieved from https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=4b18a90ecfc33898048cacc3f692eb8f7b6e6bef

Intel (2018). Distribution for Python. Retrieved from https://www.intel.com/content/dam/develop/external/us/en/documents/2018-python-release-notes-intel-28r-29-distribution-for-python-2018-update-1.pdf

Ishitaa Jindal et al. (2019). Cataract Detection using Digital Image Processing. 2019 Global Conference for Advancement in Technology (GCAT). Retrieved from https://ieeexplore.ieee.org/abstract/document/8978316/authors#authors

Ivan Dave F. Agustino et al. (2019). SBC-Based Cataract Detection System Using Deep Convolutional Neural Network for Asia Pacific Eye Center Biñan Branch.

Lewin, K. (1946). Action research and minority problems. Journal of Social Issues, 2(4), 34–46. Retrieved from https://doi.org/10.1111/j.1540-4560.1946.tb02295

Makerlab Electronics (2024). Wantai Stepper Motor Nema 34 97mm 4A 54kg-cm 86BYGH450B-003. Retrieved from https://www.makerlab-electronics.com/products/wantai-stepper-motor-nema-34-97mm-4a-54kg-cm-86bygh450b-003

Manish Chablani (2017). YOLO — You only look once, real time object detection explained. Retrieved from https://towardsdatascience.com/yolo-you-only-look-once-real-time-object-detection-explained-492dc9230006

Microchip Technology (2008). 8-bit AVR Microcontroller with 32K Bytes In-System Programmable Flash. Retrieved from https://ww1.microchip.com/downloads/en/DeviceDoc/Atmel-7810-Automotive-Microcontrollers-ATmega328P_Datasheet.pdf

Mitchell Finzel (2017). "Convolutional Neural Networks in Medical Imaging," Scholarly Horizons: University of Minnesota, Morris Undergraduate Journal, 4(2), Article 3. Retrieved from https://digitalcommons.morris.umn.edu/horizons/vol4/iss2/3/

Nana Yaa Koomson et al. (2019). Accessibility and Barriers to Uptake of Ophthalmic Services among Rural Communities in the Upper Denkyira West District, Ghana. Retrieved from https://ijn-medicalarticles.info/jos/article/986

Raspberry Pi (2022). Raspberry Pi High Quality Camera and Lenses. Retrieved from https://www.raspberrypi.com/products/raspberry-pi-high-quality-camera/

ResearchGate (2024). Hardware specifications of Google Colaboratory. Retrieved from https://www.researchgate.net/figure/Hardware-specifications-of-Google-Colaboratory_tbl2_353925144

Roberto Bellucci (2019). Newer Technologies for Cataract Surgeries. Retrieved from https://link.springer.com/chapter/10.1007/978-981-13-9795-0_1

Shenming Hu et al. (2021). ACCV: automatic classification algorithm of cataract video based on deep learning. Retrieved from https://biomedical-engineering-online.biomedcentral.com/articles/10.1186/s12938-021-00906-3

Shuai Yuan et al. (2022). Metabolic and lifestyle factors in relation to senile cataract: a Mendelian randomization study. Retrieved from https://www.nature.com/articles/s41598-021-04515-x

SparkFun Electronics (n.d.). Raspberry Pi LCD - 7" Touchscreen. Retrieved from https://www.sparkfun.com/products/13733

Stack Exchange Inc. (2024). Minimum System Requirements to run Python & tkinter. Retrieved from https://stackoverflow.com/questions/64305083/minimum-system-requirements-to-run-python-tkinter

Suranjit Kosta (2023). Comparison of Agile and Scaled Agile Framework. Retrieved from https://www.linkedin.com/pulse/comparison-agile-scaled-framework-suranjit-kosta

TensorFlow (2022). Google Summer of Code. Retrieved from https://summerofcode.withgoogle.com/archive/2022/organizations/tensorflow

The Fred Hollows Foundation (2020). PHILIPPINES Where We Work. Retrieved from https://www.hollows.org/us/where-we-work/south-east-asia/philippines-2

Tripathi, P., et al. (2022). MTCD: Cataract detection via near infrared eye images. Computer Vision and Image Understanding, 214, 103303. Retrieved from https://doi.org/10.1016/j.cviu.2021.103303

Ultralytics Inc. (2024). YOLOv8. Retrieved from https://docs.ultralytics.com/models/yolov8/#supported-tasks-and-modes

Waveshare (2016). RPi Camera (D), Raspberry Pi Camera Module, Fixed-focus. Retrieved from https://www.waveshare.com/rpi-camera-d.htm

Yadav, M. R., & M, W. N. (2017). Cataract detection. International Journal of Advanced Research in Computer and Communication Engineering. Retrieved from https://doi.org/10.17148/ijarcce.2017.6662


61 <1%
Submitted works

Polytechnic University of the Philippines - Sta. Mesa on 2021-04-15


62 <1%
Submitted works

Radboud Universiteit Nijmegen on 2020-12-11


63 <1%
Submitted works

Swinburne University of Technology on 2020-07-08


64 <1%
Submitted works

University of Leicester on 2023-03-29


65 <1%
Submitted works

University of Portsmouth on 2020-05-29


66 <1%
Submitted works

fractalcatz.itch.io
67 <1%
Internet

rucore.libraries.rutgers.edu
68 <1%
Internet

drugwatch.com
69 <1%
Internet

Sources overview
Similarity Report

manualzz.com
70 <1%
Internet

science.gov
71 <1%
Internet

waveshare.com
72 <1%
Internet

American College of Education on 2023-04-16


73 <1%
Submitted works

Anna University on 2024-05-24


74 <1%
Submitted works

Babeș-Bolyai University
75 <1%
Publication

Cavite State University on 2023-06-28


76 <1%
Submitted works

dspace.ut.ee
77 <1%
Internet

journals.plos.org
78 <1%
Internet

nauchkor.ru
79 <1%
Internet

pureadmin.qub.ac.uk
80 <1%
Internet

thesai.org
81 <1%
Internet

Sources overview
Similarity Report

thesis.unipd.it
82 <1%
Internet

vdocuments.site
83 <1%
Internet

publiscience.org
84 <1%
Internet

researchgate.net
85 <1%
Internet

researchsquare.com
86 <1%
Internet

American Public University System on 2019-10-27


87 <1%
Submitted works

De La Salle University - Manila on 2020-07-06


88 <1%
Submitted works

University of Salford on 2023-12-15


89 <1%
Submitted works

University of Wales Institute, Cardiff on 2016-12-12


90 <1%
Submitted works

University of Wolverhampton on 2021-05-07


91 <1%
Submitted works

Heriot-Watt University on 2023-04-26


92 <1%
Submitted works

Melbourne Institute of Technology on 2023-04-08


93 <1%
Submitted works

Sources overview
Similarity Report

Michigan Technological University on 2019-04-01


94 <1%
Submitted works

University of Hertfordshire on 2024-04-29


95 <1%
Submitted works

University of Westminster on 2023-07-18


96 <1%
Submitted works

University of Wollongong on 2023-11-18


97 <1%
Submitted works

Sources overview

You might also like