OptiScan: Early Warning Sign of Cataract Detection Using Convolutional Neural Networks
A Design Project
Presented to the Faculty of the
In Partial Fulfillment of
May 2024
INTRODUCTION
Over 55% of people who become blind over the age of 65 do so because of cataracts, which account for 78% of the world's blindness above that age. The human eye's lens makes it possible to see. It is composed of water and proteins, and it develops a cloud when those proteins clump together, clouding the lens. As a result, vision is impaired since light cannot properly enter the eye. Cataracts are the most common cause of blindness around the world, especially in older people. The condition's earliest signs include watery eyes, blurry vision, and white spots that can be seen on the lens.
In the Philippines, 62% of the more than two million individuals who suffer from vision impairment have cataracts. The majority of visually impaired Filipinos reside below the national poverty line, in rural locations where access to eye care treatments is limited. People who urgently require eye care cannot afford or obtain treatment since, among the 1,573 ophthalmologists in the country, most work in national and regional government hospitals or in private clinics in urban areas (The Fred Hollows Foundation). Conventional diagnosis compounds the problem, stemming from the reliance on specialized equipment and the need for trained professionals: a specialist examines the patient's eyes using an ophthalmoscope, slit-lamp microscope, or similar tool to look for clouding of a normally clear lens in the eye (P. Tripathi et al., 2022).
The costly nature of these diagnostic tools, coupled with the complexity of the procedures, can lead to delayed or inaccurate diagnoses. The need for skilled professionals also contributes to disparities in access to care.

The integration of technology, such as the OptiScan device, presents a new approach to this problem. Portable diagnostic devices enable non-invasive and convenient data collection, making them especially valuable for early detection of medical conditions. In the context of cataract diagnosis, advances in deep learning, particularly Convolutional Neural Networks (CNNs), have opened new avenues for transforming healthcare, and the integration of high-resolution image-capture capabilities into compact devices makes automated screening practical. In response to this, this design study presents "OptiScan," a device that the researchers designed to capture high-resolution images of the eye and employ a CNN-based algorithm for initial and automated cataract detection and diagnosis. Its portability and user-friendly operation make it a practical solution for initial cataract detection.
This research focused on the development of the OptiScan device. The primary emphasis lies in two key aspects: accuracy and accessibility. Ensuring the OptiScan device accurately detects cataracts is the first priority. Strict testing procedures and comparative analyses are utilized to evaluate the device's precision in distinguishing between cataracts, normal eyes, and eyes with diseases other than cataract. This thorough examination seeks to confirm the reliability of the device's diagnoses. Simultaneously, the researchers made sure the OptiScan device is accessible for communities to use by creating a simple and user-friendly interface that can be operated with minimal training.
The training data set comprised 3,213 images, limited due to data privacy restrictions enforced by ophthalmologists and the limited availability of suitable images from online sources. These restrictions included measures to protect sensitive eye images, thereby constraining the amount of data that could be utilized. Additionally, the lack of publicly available eye images further restricted the size of the training set. To compensate, the gathered images were validated one by one by the official advising ophthalmologist. Despite these limitations, the researchers were able to use the available data to train the system effectively.
The study examined the technical aspects of the OptiScan device and assessed its effectiveness in the diagnosis and detection of cataracts. In the long run, the project aims to offer a remedy that might transform the cataract diagnostic process and improve its accessibility.
The existing methods for cataract detection rely on conventional diagnostic tools and trained specialists, so the accessibility and effectiveness of cataract diagnosis may vary. The following problems are what the study aimed to address. Diagnostic facilities for cataracts are not easily accessible in remote or underserved areas and primary health clinics. This limits the ability to conduct screenings and diagnoses, particularly in underserved areas where basic clinics are the only facilities available and where conventional methods, like manually drawing the structure or shape of the observed lens, are still practiced. The study addressed these problems by developing an automated detection device with Convolutional Neural Networks (CNNs). Specifically, the core objectives of the OptiScan project encompass, but are not limited to, the following: capturing high-resolution images of the eye, enabling visualization of the eye, and facilitating detailed patient education on the patient's concern. Through these objectives, the study sought to reduce the burden of this vision-impairing condition among individuals. The OptiScan project aimed to transform cataract diagnosis and increase access to quality eye care, especially in regions where such resources are limited.
As part of the research objectives, the researchers narrowed the classification process to three main categories: "cataract detected," "normal eye," and "not cataract" (an eye condition other than cataract). With these categories, the researchers effectively assessed and classified a patient's eye in terms of the early signs of cataracts and initial diagnosis. Additionally, this approach facilitated more precise data analysis and interpretation, ultimately enhancing the reliability of the results.

The output of the project is a device for cataract detection that captures images and uses convolutional neural networks (CNNs). The project uses a Raspberry Pi 4 as the single-board computer to drive all of the system's components, including a high-resolution camera for visual and image inputs to the system, an LCD display for showing metrics and statuses while using the device, and a custom circuit that is used to calibrate and control the device's camera position.
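The three-way screening outcome described above can be sketched as a small decision step that maps a model's class confidences to one of the labels. This is a minimal illustration; the label strings, score values, and the 0.50 review threshold are assumptions for the example, not the project's actual configuration.

```python
# Minimal sketch of mapping CNN confidence scores to OptiScan's three
# screening categories. Labels and threshold are illustrative assumptions.

LABELS = ["cataract detected", "normal eye", "not cataract"]

def classify(scores, min_confidence=0.50):
    """Pick the highest-scoring class; flag low-confidence results for review."""
    best = max(range(len(scores)), key=lambda i: scores[i])
    label = LABELS[best]
    if scores[best] < min_confidence:
        return label, scores[best], "refer for manual review"
    return label, scores[best], "ok"

print(classify([0.91, 0.06, 0.03]))  # confident cataract prediction
print(classify([0.40, 0.35, 0.25]))  # ambiguous case is flagged for review
```

Flagging low-confidence predictions for manual review reflects the study's framing of the device as an initial screening aid rather than a replacement for an ophthalmologist's diagnosis.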
The developed system can also adapt its Raspberry Pi board to an external monitor, allowing the user to access the user interface for a more thorough examination of the captured image and a live view of the eye. It also serves the functionality of printing the results and detection metrics, such as confidence level and other necessary details.
To explore this study, the researchers noted and reviewed related literature. One reviewed system, based on image processing, aims to identify cataract types without the need for dilating drops. It also assesses cataract severity, grade, color, area, and hardness, displaying and saving the results, and it achieved high acceptance and effectiveness levels. Accurate and reliable cataract detection was evident, with notable advancement over current methods of examination. This system represents a modern approach to cataract detection for all patients (Ivan Dave et al.).
In the study titled "SBC-Based Cataract Detection System Using Deep Convolutional Neural Network for Asia-Pacific Eye Center Biñan Branch," the researchers employed ISO/IEC 25010:2012 models to assess the accuracy, functionality, and efficiency of the developed cataract detection system. These models guided the evaluation of the system, ensuring it met quality standards and user needs. By using standardized evaluation criteria, the researchers could enhance the reliability and usability of the technology, ensuring it effectively served its intended purpose at the Asia-Pacific Eye Center Biñan Branch.
The YOLO (You Only Look Once) CNN algorithm represents an important advancement in detection models. One reviewed study measured performance using YOLOv5 for diagnosing cataract conditions. Across various eye conditions, the model achieved an area under the ROC curve of 0.992 for cataracts, indicating excellent discrimination ability. Moreover, when triaging cases, the model assigned different urgency levels. These levels included 'urgent', 'semi-urgent', 'routine', and 'observation', showing its ability to prioritize cases based on severity. Overall, the study demonstrated YOLOv5's strong diagnostic performance.
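The area-under-the-ROC-curve figure cited above can be computed from predicted scores and true labels using the pairwise-ranking definition of AUC (the probability that a randomly chosen positive case scores higher than a randomly chosen negative one). The labels and scores below are invented toy data for illustration, not the study's results.

```python
# Illustrative sketch: ROC AUC via its pairwise-ranking definition.
# Toy data only; not values from the reviewed YOLOv5 study.

def roc_auc(labels, scores):
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    # Count positive/negative pairs ranked correctly (ties count half).
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0]
scores = [0.95, 0.90, 0.60, 0.40, 0.70]
print(roc_auc(labels, scores))  # 5 of 6 pairs ranked correctly
```

An AUC of 0.992, as reported for cataracts, means almost every cataract case was scored above almost every non-cataract case.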
YOLO, or You Only Look Once, stands out as a real-time object detection algorithm that surpasses various CNN-based counterparts in terms of both speed and accuracy. The ability to detect multiple objects in the same image is its distinguishing feature. This capability sets it apart from other algorithms that typically focus on identifying one object at a time. One of the notable advantages of YOLO is its speed, attributed to its single-shot approach for object detection. Unlike many CNN-based algorithms that necessitate multiple passes through an image, YOLO can swiftly detect objects in a single pass, which suits time-critical applications such as self-driving cars and video surveillance. In summary, YOLO's speed, accuracy, and multi-object capability make it an efficient solution for real-time object detection tasks, outperforming several other CNN-based algorithms.
In the context of the OptiScan study, which focuses on detecting cataracts in eye images, using the YOLO model allows the researchers to quickly and accurately identify cataracts, enhancing the system's effectiveness. Thus, the study employs the YOLO framework to utilize its real-time object detection capabilities for accurate analysis of early signs of cataracts.
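The post-processing behind single-pass detectors like YOLO can be sketched in a few lines: keep candidate boxes above a confidence threshold, then suppress overlapping duplicates with non-maximum suppression (NMS). The box coordinates, scores, and thresholds below are illustrative assumptions, not OptiScan's actual model outputs.

```python
# Sketch of YOLO-style post-processing: confidence filtering followed by
# non-maximum suppression. Boxes are (x1, y1, x2, y2, score); toy values.

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def nms(boxes, conf_thresh=0.5, iou_thresh=0.5):
    kept = []
    # Visit boxes from highest to lowest confidence.
    for box in sorted((b for b in boxes if b[4] >= conf_thresh),
                      key=lambda b: -b[4]):
        if all(iou(box[:4], k[:4]) < iou_thresh for k in kept):
            kept.append(box)
    return kept

detections = [
    (10, 10, 50, 50, 0.90),     # strong detection
    (12, 12, 52, 52, 0.80),     # overlaps the first -> suppressed
    (100, 100, 140, 140, 0.70), # separate object, kept
    (200, 200, 240, 240, 0.30), # below confidence threshold, dropped
]
print(nms(detections))
```

This single-pass-plus-filtering design is what gives YOLO its speed advantage over multi-pass detectors.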
In a related study, the researchers utilized YOLOv3 to automatically identify and classify cataracts from eye lens videos. The dataset included videos from 76 eyes of 38 individuals, collected via a slit lamp. Data were gathered using five random methods to enhance accuracy, and the videos yielded 1,520 images. These images were divided into training, validation, and test sets in a 7:2:1 ratio (Shenming Hu et al., 2021).
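The 7:2:1 split described above can be sketched as follows, with integer indices standing in for the study's 1,520 extracted frames. The shuffling seed and helper name are illustrative, not taken from the study.

```python
# Sketch of a 7:2:1 train/validation/test split over 1,520 frames.
import random

def split_721(items, seed=0):
    items = list(items)
    random.Random(seed).shuffle(items)        # reproducible shuffle
    n = len(items)
    n_train, n_val = n * 7 // 10, n * 2 // 10  # integer 70% / 20% shares
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

frames = list(range(1520))
train, val, test = split_721(frames)
print(len(train), len(val), len(test))  # 1064 304 152
```

Shuffling before splitting avoids grouping consecutive frames from the same video into one subset, which would otherwise inflate apparent accuracy.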
To maintain algorithm accuracy, data collection employed five random methods, with each video lasting up to 10 seconds. From these videos, 1,520 images were extracted and split into training, validation, and test sets. Verification on a clinical data test set yielded a 94% accuracy rate, and frame detection was completed within 29 milliseconds. This algorithm's efficiency allows the use of lens scan videos as the primary research object, improving screening accuracy and aligning with the actual cataract diagnosis workflow in clinical practice (Shenming Hu et al., 2021).
In that study, YOLOv3 was utilized to detect and classify cataracts from eye lens videos. Despite the small sample size, the algorithm's performance was evaluated, highlighting its potential applicability even with a constrained dataset. The study underscores the feasibility of automated, video-based cataract screening.

Cataracts are the most frequent cause of vision impairment worldwide, and prompt diagnosis is essential; conventional screening relies on slit-lamp examination and visual acuity tests. Computer-aided diagnostic (CAD) systems for detecting cataracts have been the subject of several studies, with promising findings regarding their accuracy.
One of the most common causes of blindness, particularly in the elderly, is cataract. Nearly half of India's elderly population has cataracts by the age of 80, or has had surgery to treat them. According to surveys conducted by the WHO and NPCB, there are over 12 million blind persons in India, and 80.1% of them are blind due to cataracts. To prevent total blindness, it is essential that cataract cases are identified early on. One reviewed study uses digital image processing on ocular scans to identify the presence and severity of cataracts. Images of eyes with various degrees of cataracts are analyzed; an automatic detection algorithm based on feature extraction is used in the first method. If clinicians detect cataracts early, they can treat them more effectively and prevent blindness. At present, doctors mostly rely on clinical judgment and standard tests to diagnose cataracts, but computer-aided systems are increasingly assisting them. In that study, image-processing programs are used to examine pictures of eyes and find cataracts, and different approaches are compared to see which one works best. The goal is to make cataract detection easier and faster so that people can be treated quickly and avoid losing their sight.
Machine learning techniques have become powerful tools for evaluating complicated medical data and providing insights essential for diagnosis, treatment planning, and patient care in various healthcare disciplines. In the field of medical image analysis, CNNs have achieved remarkable results. They are great at extracting features and recognizing patterns, which makes them suitable for ophthalmic image analysis; CNN architectures including VGG, Inception, and ResNet are good at detecting cataracts when applied to such tasks (Mitchell Finzel, 2017).
Convolutional neural networks have been used more often in numerous applications involving medical imaging throughout the last five years. This is driven by their demonstrated effectiveness following their victory in the 2012 ImageNet competition. These uses are very diverse, ranging from the identification of Alzheimer's disease in MRIs to the segmentation of anatomical structures. CNNs excel at extracting patterns from complex images, making them well-suited for identifying cataracts in ophthalmic images. Their success can be attributed to their ability to utilize large datasets and their adaptability across various medical imaging applications. In this context, the YOLO (You Only Look Once) model, which employs CNNs at its core, gives the system the capability to quickly and accurately identify early signs of cataracts, thereby improving diagnostic outcomes.
Cataract surgery is steadily improving thanks to new technology. Imaging advances now support diagnosis before, during, and after surgery. Tools such as femtosecond lasers and robotic systems make surgery more precise, especially for delicate steps like making incisions in the eye. Another notable feature is feedback irrigation control during surgery, which makes the whole process safer. Surgeons also use techniques like guided implantation of special lenses and choosing lens powers based on refraction to make sure patients get the best vision possible. New intraocular lenses, such as trifocal and extended-depth-of-focus lenses, can improve vision in different ways, and options like pinhole lenses and supplementary lenses are changing how clinicians choose the right lens for each person. All these advances mean that cataract surgery is becoming even more precise and effective, which directly correlates with the need for accurate detection. Understanding these advancements is crucial for cataract detection devices, as it provides insight into the treatment pathway that follows diagnosis.
Research has also examined the interaction between aging processes and ocular health, revealing significant factors that vary among different ethnic groups. Cataracts, especially those that come with age, like nuclear, cortical, and posterior subcapsular (PSC) cataracts, are a major reason why people have trouble seeing. Even though having age-related cataracts seems to increase the risk of dying, it remains unclear how each type of cataract affects mortality. Doctors and eye experts need to understand this because each type of cataract has different causes, treatments, and impacts on eyesight. Figuring out how cataracts relate to mortality could reveal more about why they happen. If one type of cataract is linked to a higher chance of dying, using imaging to look for that type could help clinicians assess overall health and how the body ages. This question is important for public health because cataracts are becoming more common worldwide. It is still not known for certain whether age-related cataracts directly raise the risk of dying or whether the link simply reflects the fact that older people tend to get them.
This understanding lays the groundwork for effective cataract detection. As we age, our eyes undergo various changes, such as the accumulation of substances that cloud the lens and cause cataracts. Recognizing these changes allows for better prevention and treatment strategies, ultimately aiding in the preservation of vision and older individuals' quality of life. This knowledge is essential for cataract detection devices as it gives insight into the fundamental mechanisms and risk factors associated with cataract formation, guiding the development of more accurate and timely detection methods. By focusing on age-related changes, cataract detection devices can target individuals at higher risk, improving outcomes and public health.
Lifestyle also matters: a balanced diet full of antioxidants, vitamins, and minerals helps the eyes fight off the oxidative stress that can lead to cataracts, while avoiding smoking and cutting back on alcohol protects the eyes from damage that can speed up cataract formation. Wearing sunglasses and hats in the sun shields the eyes from harmful UV rays, and keeping a healthy weight and managing conditions like diabetes helps as well, since these are linked to cataracts too. Staying active is also a great way to improve general health and lower the risk of cataract development with age. By making these small lifestyle changes, one is not just looking after one's body but also safeguarding one's vision for the long haul (Shuai Yuan et al., 2022).
Senile cataract, a common eye condition impacting around 17% of people, is a major contributor to vision loss worldwide. While past studies have suggested links between metabolic syndrome and certain lifestyle habits like coffee, alcohol, and smoking with the risk of senile cataracts, there has been uncertainty about whether these factors actually cause cataracts. To get clearer answers, scientists used a method called Mendelian randomization to examine the relationships between things like body weight, diabetes, blood pressure, lifestyle choices, and the risk of developing cataracts. By doing this, they hope to shed light on ways to prevent senile cataracts. By incorporating information about these factors into the design and functionality of the device, such as integrating features for assessing dietary habits, smoking history, or sun exposure, it becomes possible to provide a more holistic assessment of risk. Moreover, leveraging genetic variations and other biomarkers associated with lifestyle choices can enhance the device's predictive capabilities, enabling earlier detection and intervention. Thus, by considering lifestyle factors in tandem with clinical and genetic data, the cataract detection device can offer personalized insights and recommendations.
Studies on the accessibility and utilization of eye care services in underprivileged areas emphasize the importance of ensuring equitable access to vision care, addressing disparities in healthcare access, and promoting eye health awareness among vulnerable populations.
Ensuring universal access to eye care, regardless of geographical location or financial status, is essential. Providing affordable options and increasing awareness about eye health can mitigate barriers to accessing eye care, enabling individuals to receive timely treatment and support. Additionally, expanding clinic facilities, training more eye care professionals, and promoting preventive measures can help alleviate the burden of vision problems in underserved rural communities, ultimately enhancing overall quality of life and societal well-being. By understanding the barriers to eye care services, such as limited clinic availability and financial constraints, people can tailor the design and deployment of the detection device to reach these populations effectively. Additionally, leveraging mobile and telemedicine technologies can extend the reach of the device, enabling remote screening and consultation for individuals who lack access to in-person care.
Advanced technologies for remote eye examinations are reshaping ophthalmic care, providing innovative solutions that enable convenient and efficient assessment of eye health and changing how eye care is administered. These advancements are simplifying access to eye care services for individuals in distant or rural areas, which is incredibly significant. With these innovations, healthcare providers can now conduct virtual appointments, analyze eye images remotely, and monitor changes in eye health over time. This facilitates early detection of eye issues and prompt intervention, benefiting patients and reducing healthcare expenses. Moreover, remote examinations reduce the need for extensive travel and waiting times, enhancing convenience for all involved. Ultimately, these cutting-edge technologies have the potential to revolutionize the delivery of eye care.
This study holds significant importance and potential impact for various stakeholders.

Patients with suspected vision impairments - The primary beneficiaries are individuals who may have early signs of cataracts. OptiScan aims to enable early and accurate diagnosis, leading to timely interventions that can prevent severe vision loss.

Eye Care Professionals - Ophthalmologists, other eye care specialists, and even basic healthcare professionals in communities with limited accessibility stand to benefit from OptiScan's diagnostic capabilities.

Underserved and Remote Communities - OptiScan has the potential to reach areas with limited access to specialized eye care facilities. These communities can benefit from earlier screening and referral.

Primary Healthcare Systems and Providers - More efficient cataract diagnosis can reduce the burden on healthcare systems and providers. Early cataract detection and treatment can reduce costs and optimize the use of available resources.

Future Researchers - The study contributes to the advancement of technology-driven eye care, especially for detecting cataracts early. Future researchers can use OptiScan as a starting point to improve how cataracts are found and diagnosed, helping to make eye care better and more accessible. OptiScan shows the importance of detecting eye problems early and offers ideas on how to do it even better in the future.
METHODS
The methods section laid out a detailed plan for the research, covering steps like planning, design, development, testing, and evaluation. The researchers engaged stakeholders and used appropriate tools to gather valuable insights. Through thorough analysis and specification, the researchers defined what was needed for the project, including both software and hardware aspects. The researchers provided clear specifications, and an evaluation plan was put in place to ensure the system worked effectively. The methods section concluded by outlining strategies for translating research findings into practical application in the study's domain, specifically cataract detection and diagnosis, and the implementation of the OptiScan system. The research instruments were carefully chosen to ensure a comprehensive understanding of the subject matter. To get a comprehensive look at the data in each phase of the study, specifically for data gathering and for evaluation and scoring, interviews and 4-point Likert-scale questionnaires were employed.
The research design chosen for this study was action research, which is well-suited for investigating cataract disease detection and finding ways to improve it. Action research focuses on collaborating with others to solve real-world problems and make positive changes. By working with professionals and patients, the researchers gained a better understanding of cataract detection from multiple perspectives and identified solutions to improve detection methods. The researchers also adjusted their methods based on the resources they had and the constraints they faced. Action research involved an iterative process of learning and refining their approach, ensuring that their findings kept improving. Ultimately, their goal was to contribute to better cataract detection methods and make a positive impact in this area (Lewin, K., 1946).
The researchers utilized various research instruments and tools to gather and analyze data effectively. These included survey questionnaires, which allowed the researchers to collect structured feedback from evaluators, and the composite mean, used to statistically analyze the gathered data and provide insights into trends, patterns, and overall outcomes of the research. These tools supported the study from data gathering through evaluation of the implementation.
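The composite mean used with the 4-point Likert questionnaires can be sketched as follows: average each item's responses across evaluators, then average the item means. The sample responses are invented for illustration; they are not the study's actual survey data.

```python
# Sketch of the composite-mean computation for 4-point Likert responses.
# The response values below are made-up illustration data.

def composite_mean(responses_per_item):
    """Mean of per-item means, each item rated by several evaluators."""
    item_means = [sum(r) / len(r) for r in responses_per_item]
    return sum(item_means) / len(item_means)

# Three questionnaire items, each rated 1-4 by four evaluators.
responses = [
    [4, 3, 4, 4],
    [3, 3, 4, 3],
    [4, 4, 4, 3],
]
print(round(composite_mean(responses), 2))  # 3.58
```

On a 4-point scale, the composite mean is typically compared against descriptive ranges (e.g. "agree" vs. "strongly agree") to summarize evaluator consensus.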
The researchers focused on cataract disease and its implications for early detection. To explore the impact of cataract disease and detection in the specific context of the study, the researchers engaged a representative group of stakeholders. The study incorporated evaluators who are Ophthalmologists, Health Care Clinic Staff, and other medical field professionals.
For the evaluation process, the demographic profile of the evaluators was gathered; details such as their highest educational attainment and position in their institutions were recorded. Meanwhile, for the Ophthalmologists, Health Care Clinic Staff, and Other Medical Field Officers, information regarding their respective positions was gathered. This data collection aimed to provide insights into the qualifications and expertise of the evaluators.
In light of the practical limitations and the restricted quantity of individuals, the researchers employed purposive sampling. Selecting a group of people with particular traits or experiences related to the study goals is known as purposive sampling. It allowed the researchers to concentrate on patients with different eye diseases and professionals who specialize in eye care, keeping the focus on cataract disease detection, even though it might not yield a representative sample of the general community. Patients who already suspected they had an eye disease were the target respondents, a specific factor that the researchers took into consideration when choosing respondents. Patients went to Kapampangan Development Foundation, Inc. for advice since they needed a professional assessment of their eye conditions, and this was also where the device was utilized. As a result, the device was used to evaluate any patient who asked for a check-up during the utilization trials. In this study, to ensure the protection of patient privacy, actual images of the eyes tested during system testing were not included. This decision was made following the strict provisions of the Data Privacy Act of 2012. By adhering to these regulations, the study maintained the confidentiality and privacy of all patient data. It is crucial to select particular disciplines and persons for project evaluation and feedback since they possess the knowledge and perspective necessary to evaluate and provide insightful information. They are central to an exhaustive and well-informed evaluation procedure that improves the accuracy and credibility of the findings.
Agile project management is used in the System Development Life Cycle (SDLC) that was selected for the OptiScan study. Agile is an iterative methodology that places a high priority on providing value to the client through regular communication and cooperation. It is governed by the Agile Manifesto's tenets, which place an emphasis on individuals and interactions, working software, customer collaboration, and responding to change. Agile can be used for any project that calls for teamwork and iteration, which makes it a suitable fit for this study. The project is divided into iterative phases by Agile, which include planning, designing, developing, testing, deploying, reviewing, and maintaining. In this specific instance, the researchers apply Agile methodology principles to the SDLC model. Agile supports adaptation and flexibility, which helps the researchers swiftly adjust to the study's changing requirements. The researchers make sure that the OptiScan system is created iteratively, incorporating feedback at each stage and providing stakeholders with additional value by adopting Agile principles.
During the planning phase, the researchers select the system development life cycle (SDLC) model, choose suitable research instruments for data collection, list the materials required, and carry out the initial preparations. As they proceed to the design phase, the researchers define and analyze the requirements, design the system interface, produce block and circuit diagrams, and use process flowcharts to build logical specifications. The researchers follow a structured development sequence, and they outline the hardware and software needs. Additionally, the researchers determine crucial elements and equations to test and assess the system's functionality during the testing phases. Ultimately, the OptiScan system's efficacy and efficiency are evaluated by the researchers using a variety of criteria and assessment formulae throughout the maintenance and evaluation phase. Because of Agile's flexibility, this work continues in cycles and iterations long after the system has been launched, with ongoing maintenance and evaluation.
Getting into the planning phase, the researchers' focus lay in gathering comprehensive insights to address the identified challenges within cataract detection and diagnosis. The researchers assessed the concerns to understand the issue thoroughly. This process established the requirements for the initial diagnosis or cataract detection. These requirements encompassed the development of the OptiScan device capable of capturing detailed images of the eye, along with the software and documentation of the OptiScan project. The requirements and the plans for the system workflow that followed were identified through the tables and figures in this section.
Figure 2 represents the system flowchart. It started from the initialization of components, covering both hardware and software processes, such as loading the CNN model for eye detection and landmark initializations. The flowchart also visualized the process of diagnosing and detecting cataracts through the software application. A fallback system and a calibration mode for camera position were included in case camera initialization failed. The system then proceeded to capture images for processing and detection. After the procedure, the system generated different filtered and raw images to compile the findings.
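The flowchart's fallback path can be sketched as a small initialization check: if the camera fails to start, the system drops into calibration mode instead of capturing. The camera class here is a stand-in stub for illustration; it is not the project's actual camera driver code.

```python
# Sketch of the flowchart's camera-initialization branch. StubCamera is a
# hypothetical stand-in, not the real Raspberry Pi camera interface.

class StubCamera:
    def __init__(self, available):
        self.available = available

    def initialize(self):
        # A real driver would attempt to open the camera hardware here.
        return self.available

def start_capture_pipeline(camera):
    if not camera.initialize():
        return "calibration mode"   # fallback: let the user re-position the camera
    return "capture and detect"     # normal path: capture images for the CNN

print(start_capture_pipeline(StubCamera(available=True)))
print(start_capture_pipeline(StubCamera(available=False)))
```

Routing failures into an explicit calibration mode, rather than aborting, matches the flowchart's goal of keeping the device usable by non-specialist operators.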
Figure 3 represents the OptiScan system's block diagram. It comprised the Raspberry Pi 4 Model B with 8GB RAM. The RPi4 was the main board that controlled all the other modules and components of the system. These included the High-Quality Camera, which supplied the system with visual inputs and images for processing, and a display through which the system could send feedback, current status, and metrics; a system light for better eye observation; and system button controls for calibration and inputs. A 5V three-amp supply powered the board, while a custom circuit drove the stepper motors and facilitated control over camera position. This circuit integrated two motor drivers.
Figure 5 illustrates the structural architecture of the system. Most modules in the system connected directly to the Raspberry Pi 4. A touchscreen display showed the OptiScan Python App, while a 64GB micro SD card provided storage. The Raspberry Pi Camera was paired with a lamp for imaging, and a Canon Printer was used for printing results. Additionally, there was a Dual Atmega board for joystick control of the dual-axis system, and a TB6600 Motor Driver regulated power to the stepper motors. Lastly, a power supply unit provided electricity to the entire system, along with a head-chin rest for patient positioning. A separate microcontroller was included in the system to control the dual-axis platform. This was due to the motors utilizing high voltages and currents in their operation, and it prevented those current and voltage fluctuations from affecting major components such as the Raspberry Pi.
Figure 6 showed the Single Window Graphical User Interface and Preview Modes. This feature offered users a convenient interface where all tools and options were available within one window. Additionally, it included preview modes, enabling users to see the eye with various visualization techniques such as grayscale, a binarized image, which specifically showed the shape of an observation in the eye, and a color map. A grid and eye placement guidelines were also present, as well as statistics of the current detection.
The OptiScan system had several color modes to improve eye exams: Normal Color, Grayscale, Binarized, and Color Map. Normal Color Mode showed the eye in its natural colors for a standard view. Grayscale Mode changed the image to shades of gray, which helped to see fine details and structures in the eye. Binarized Mode converted the image to black and white, making it easier to see the shape and edges of cataracts. Color Map Mode used a gradient of colors to show light intensity and depth, with green being the darkest and red the lightest. This mode helped to visualize the thickness and reflectivity of a cloudy or white part of the lens, as well as the different parts of the eye. Each mode highlighted different features, making the examination more thorough.
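Two of the preview modes above can be sketched on a tiny made-up grayscale patch (0-255 intensities). The threshold and the green-to-red mapping are illustrative assumptions, not OptiScan's calibrated values; a real implementation would use OpenCV on full camera frames.

```python
# Sketch of the Binarized and Color Map preview modes on a toy 2x2 patch.
# Threshold and color stops are illustrative, not the device's settings.

patch = [
    [ 20, 200],
    [120, 250],
]

def binarize(img, thresh=128):
    """Binarized mode: white where intensity >= threshold, black elsewhere."""
    return [[255 if px >= thresh else 0 for px in row] for row in img]

def color_map(img):
    """Color-map mode: green for the darkest values, red for the lightest."""
    def to_rgb(px):
        red = px          # brighter pixel -> more red
        green = 255 - px  # darker pixel -> more green
        return (red, green, 0)
    return [[to_rgb(px) for px in row] for row in img]

print(binarize(patch))         # [[0, 255], [0, 255]]
print(color_map(patch)[0][0])  # darkest pixel maps to mostly green
```

The binarized view isolates the bright, clouded region's outline, while the color map grades its intensity, matching the modes' described roles in the exam.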
The researchers specified the system's software and hardware requirements to ensure the efficiency and effectiveness of the solution. Below, the researchers outlined the specifications of both the software and hardware aspects, detailing the minimum and recommended requirements.

Table 1 outlined the Python System Specifications for Personal Computers. For the Operating System, it was recommended to use Windows 10, macOS, or Linux; the minimum requirements included Windows 7 or later, macOS, or Linux.
Python is a high-level, object-oriented, interpreted programming language with
dynamic semantics. Its adaptability makes it ideal for Rapid Application Development
and for use as a scripting or glue language to connect multiple components, thanks to
its extensive built-in data structures, dynamic typing, and dynamic binding. One of
Python's standout features is its straightforward, easily readable syntax, which
facilitates learning and reduces program-maintenance costs. Additionally, Python's
support for modules and packages encourages code modularity and reusability in
software projects. For these reasons, the researchers deemed it the proper primary
language for the OptiScan software.
For the operating system, Windows 10 or later was recommended, though OpenCV
may also run on older versions such as Windows 7. A minimum processor speed of
1.30 GHz was sufficient, but 2.60 GHz or higher was recommended for smoother
performance. Regarding disk space, a minimum of 1 GB was required, though at least
3 GB of available space was recommended. RAM of at least 2 GB was the minimum
requirement, but more was recommended to handle complex image-processing tasks
efficiently. These specifications ensured that OpenCV could run reliably on the target
machines.
One of the critical advantages of OpenCV was its open-source nature, which allowed
developers to modify and optimize algorithms within the library to enhance its
performance and capabilities. Furthermore, OpenCV was distributed under a BSD
license, simplifying its use and code modification by companies (Asal, 2018). OpenCV
was employed for the image-processing tasks of the application, underscoring its
versatility; it served as the main image-processing engine of the OptiScan software
system.
Processors should support x86 and x64 architectures, allowing the software to run
across platforms.
TensorFlow is a comprehensive open-source machine learning platform that offers a
wide and flexible ecosystem of tools, libraries, and community resources, supporting
machine learning advancements (Google Open Source, 2022). In this project,
TensorFlow is used to create a deep convolutional neural network as a fundamental
part of the detection model.
The recommended requirements provided users with a range of options based on
their preferences and existing setups. On the other hand, the minimum requirements,
covering Windows 7 or later, macOS, and Linux, ensured that the YOLOv8 model
could still function adequately on older or less advanced systems. These
specifications were crucial for ensuring compatibility and consistent performance.
YOLO (You Only Look Once) is a real-time algorithm for detecting objects in images
and videos. Unlike methods that identify only one object at a time, YOLO can detect
multiple objects simultaneously, making it highly efficient. Its ability to work well in
challenging scenarios makes it valuable for applications like self-driving cars and
surveillance systems. Incorporating YOLO into the OptiScan project, which focuses
on detecting cataracts in eye images, could enhance the speed and accuracy of
detection.
The processor should ideally have a clock speed of 2.60 GHz for optimal
performance, with a minimum requirement of 1.30 GHz, and sufficient disk space was
also recommended. These specifications ensured that Tkinter operated efficiently
across different platforms and hardware configurations. Tkinter served as the platform
for designing OptiScan's user interface, ensuring user-friendly interactions;
compatible with Windows, macOS, and Linux, it allowed the interface to run on
different operating systems.
Google Colaboratory served as the tool for training the YOLO model, the core
detection component of the OptiScan system.
Table 7 outlined the key specifications of the Raspberry Pi 4 Model B single-board
computer, including its quad-core processor, onboard RAM, wireless connectivity, and
Gigabit Ethernet. The device also offered GPIO headers, micro-HDMI ports for video
output, and a micro SD card slot for storage. Operating within a temperature range of
0–50 °C, it could be powered via USB-C or the GPIO header, with a minimum
requirement of 3 A.
The Raspberry Pi 4 Model B, representing the latest iteration of the Raspberry Pi line
at the time of development, served as the foundation for integrating the other
necessary elements of the OptiScan system.
Table 8 presented the technical specifications of the Raspberry Pi Camera Module.
Compatible with all versions of the Raspberry Pi, it featured a fisheye lens with a 5-
megapixel OV5647 CMOS sensor in a 1/4-inch optical format. With an f/2.35 aperture
and an adjustable focal length, it provided a broad field of view, with a diagonal angle
of 160 degrees and a horizontal angle of 120 degrees. The compact module
measured about 25 mm across.
The Nikon D3200 camera was originally chosen, but it could not stream real-time
video, which was a problem because OptiScan needed a live video feed to work well.
The researchers therefore switched to the Raspberry Pi Camera Module, which
supported real-time video and suited OptiScan better.
With its large sensor size and back-illuminated sensor architecture, the Raspberry Pi
HQ Camera offers high-quality imaging. It also provides adjustable back focus for
precise focus control and flexibility in mounting options. In OptiScan, it is used to
capture high-quality images of the eye for analysis.
Measuring 7 inches, the touchscreen monitor featured an 800 (RGB) × 480-pixel
display format and an active area of 154.08 mm × 85.92 mm. Utilizing TFT LCD
technology, the monitor included a multi-touch capacitive panel, and its configuration
ensured clear and vibrant visuals. The backlight type was LED, and the monitor
operated on a 5 V/1.8 A power supply, making it suitable for various setups.
The 7-inch capacitive touchscreen offered effortless, intuitive interaction. Its ample
size, vibrant colors, and accurate touch response enhanced the viewing experience,
and its responsiveness made it suitable for a range of uses, from interactive kiosks to
digital signage.
The stepper motor used was the Nema 17 model 42BYGHW215-X with a 35 mm
motor length. This stepper motor operated with a step angle of 1.8 degrees and
featured two phases. It had a rated voltage of 3.83 V and a rated current of 2 A, with a
phase resistance of 2.55 ohms and a phase inductance of 4.9 mH. The motor was
equipped with No. 4 lead wires and weighed 0.35 kg, making it suitable for various
precision-control applications. The researchers switched to Wantai motors because
the original ones could not handle the weight of the dual-axis platform; the Wantai
motors were stronger and supported the load better, ensuring that the dual-axis
platform functioned properly.
A stepper motor is a precise type of electric motor that moves in fixed increments,
making it ideal for controlling the camera platform's x- and y-axis movement. Mounted
on the platform, the stepper motors received commands from a control system to
move the camera to specific positions, allowing for accurate positioning during scans.
The microcontroller featured an 8-bit AVR CPU in a 28-pin package and operated
within a voltage range of 1.8 V to 5.5 V. With 23 programmable I/O lines and 6 PWM
channels, it offered flexible interfacing for diverse applications. The CPU delivered
1 MIPS per MHz, and it included 2 KB of internal SRAM and 1 KB of EEPROM,
providing ample memory for its control tasks.
The power supply technical specifications, outlined in Table 12, detailed the essential
properties required for efficient operation. With an input voltage range covering the
mains voltages found worldwide, the unit could be used in various settings. The
output voltage was fixed at 12 V, ensuring consistent power delivery. Rated at 120 W
(equivalent to 10 amperes at 12 V), this power supply unit could efficiently handle the
demands of OptiScan's components, providing the stable and reliable power essential
for the optimal performance of the system.
The power supply served as the primary source of electrical power for the OptiScan
system. It took in electricity from a standard power outlet and converted it into a form
suitable for the various components to operate. Its input voltage range allowed it to
accept the mains supplies used worldwide. Once converted, the power supply
delivered a consistent output of 12 V, which was essential for powering components
such as the Raspberry Pi, motors, and sensors. With a wattage rating of 120 W
(equivalent to 10 amperes), it could meet the energy demands of the system without
interruption or voltage drops.
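The 10-ampere figure quoted above follows directly from the power relation I = P / V:

```python
# Current draw implied by the 120 W rating at the fixed 12 V output (Table 12).
P_WATTS = 120                    # rated output power
V_OUT = 12                       # fixed output voltage
current_amps = P_WATTS / V_OUT   # I = P / V = 120 / 12 = 10 A
```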
As shown in Table 13, the stepper motor driver was a crucial component of the
OptiScan system, responsible for controlling the movement and position of the
stepper motors. It regulated an output current between 0.5 and 4.0 amps, suitable for
driving the stepper motors effectively. The control signal operated within a range of
3.3 to 24 volts, and with a power output of 160 watts the driver could efficiently handle
the energy requirements of stepper motor operation. The driver supported various
microstep configurations, including 1, 2/A, 2/B, 4, 8, 16, and 32, allowing fine-grained
control over motor movement. Its drive IC, the TB67S109AFTG, ensured reliable and
precise motor control under diverse conditions.
The stepper motor driver played a vital role in the OptiScan system, particularly in
controlling the movement of the stepper motors for the dual-axis platform. Stepper
motors are precise, reliable electromechanical devices used here to position
components, such as lenses or mirrors, along two axes (X and Y) to scan and capture
images of the eye.
The Stepper Motor Driver served as the intermediary between the control system and
the stepper motors, converting control signals from the system into precise movements
of the motors. It regulated the amount of current supplied to the motors, ensuring they
moved with the required speed, accuracy, and torque. Additionally, the driver enabled
the system to implement microstepping, which divided each full step of the motor into
smaller increments, allowing for smoother motion and finer control over positioning.
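As an illustration of how microstepping refines positioning, the pulse count for a given rotation follows from the motor's 1.8° step angle; the 1/16 setting below is only an assumed example among the driver's supported subdivisions, not a value stated for OptiScan.

```python
FULL_STEP_ANGLE = 1.8  # degrees per full step (Nema 17 spec above)

def steps_for_angle(angle_deg, microsteps=16):
    """Driver pulses needed to rotate the motor by angle_deg.

    microsteps=16 is an assumed setting; the driver supports
    subdivisions from 1 (full step) up to 32 per full step.
    """
    return round(angle_deg / (FULL_STEP_ANGLE / microsteps))

# One full revolution at 1/16 microstepping needs 3200 pulses,
# versus only 200 full steps without microstepping.
pulses_fine = steps_for_angle(360, microsteps=16)
pulses_full = steps_for_angle(360, microsteps=1)
```

The 16-fold increase in pulses per revolution is exactly what makes the platform's motion smoother and its positioning finer.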
By accurately controlling the stepper motors through the driver, the OptiScan system
achieved the precision required for eye examinations and diagnostics: it could scan
different areas of the eye with consistency, facilitating accurate analysis and detection
of eye conditions.
The joystick, outlined in Table 14, operated effectively at 5 volts (V) and featured an
internal potentiometer value of 10 kΩ. It used 2.54 mm pin-interface leads for
connectivity and was designed to withstand a wide range of operating temperatures.
The joystick served as a control mechanism for managing the movement of the dual-
axis platform within the OptiScan device. By manipulating the joystick, users could
effectively navigate and adjust the positioning of the platform along both the horizontal
and vertical axes. This functionality enabled precise control over the scanning and
positioning of the optical components, facilitating accurate and efficient operation of the
OptiScan system.
The lamp outlined in Table 15 used LED lights and could be powered through a USB
port. It worked with voltages ranging from 5 to 12 volts and provided cool-colored light
suited to eye examinations. The lamp illuminated the eye to help capture clear
images for examination and analysis, enhancing visibility and ensuring accurate
results from the OptiScan device.
Table 16 depicts the Canon Pixma TS207 printer, a printing solution compatible with
Windows 10, 8.1, and 7 SP1, as well as macOS 10.12 or later. With power
requirements of AC 100–240 V, 50/60 Hz, it offers flexibility for various environments.
Equipped with a total of 1,280 nozzles, it achieves a maximum print resolution of
4,800 × 1,200 dpi, ensuring sharp and detailed prints. The recommended printing
area includes top and bottom margins of 31.6 mm and 29.2 mm, respectively,
allowing for clean, well-formatted printouts.
This printer is essential for producing hard copies of the results obtained from the
OptiScan device. It ensures that the findings are available in a tangible format for
patients and for record keeping, and its print quality enables clear, detailed output of
the detected eye conditions and other relevant information.
The SanDisk Storage outlined in Table 17, with a capacity of 64 GB, offers
reliable data storage for various applications. It boasts a high read speed of 200 MB/s,
ensuring swift access to stored data for seamless operations. The write speed ranges
from a maximum of 90 MB/s to a minimum of 30 MB/s, accommodating different data
writing needs with efficiency. Suitable for recording Full HD, 3D, and 4K content, this
storage solution provides ample space and performance for capturing high-quality
multimedia content.
Using a 64GB micro SD card for the operating system and processing on a Raspberry
Pi 4 is a practical choice. This ample storage capacity allows for storing the operating
system, various software packages, and data generated during processing tasks. With
sufficient storage space, you can accommodate updates, installations, and data
manipulation without running into storage constraints. It's important to ensure the SD
card's compatibility and performance to optimize the Raspberry Pi's functionality and
efficiency.
These materials are essential components that contribute to the functionality and
overall reliability of the OptiScan system.
The testing and evaluation of the OptiScan system involved unit testing, integration
testing, and system testing, each critical to ensuring quality.
Unit Testing. Unit testing involved taking the developed software and manually testing
each aspect of it, covering different input conditions, edge cases, and error-handling
scenarios to verify the correctness and robustness of each unit. For the hardware,
individual components were tested to ensure each functioned correctly and within its
required capability. The unit tests covered every component of the system, both
hardware and software.
Integration Testing. This phase focused on verifying the interactions and interfaces
between components, ensuring that they worked together and functioned cohesively
as part of the larger system. Integration tests were designed to detect defects in the
interactions between components. The system was tested using the top-down
approach, where testing began from the top-level modules or subsystems:
higher-level modules were tested first, and progressively deeper levels of the system
were integrated and tested until the entire system was tested as a whole. Specifically,
testing started at the power supply module, followed by booting the system
application on the Raspberry Pi, which was integrated with hardware functions such
as capturing and printing through the camera and printer. The next module to be
tested was the dual-axis platform, connected through digital signals from the GPIO
pins, and then each individual component followed.
System Testing. This phase involved validating the entire software system as a whole,
covering all functional and nonfunctional requirements. The focus was on testing the
system from an end-user perspective, ensuring that it met specified requirements and
functioned correctly in its intended environment. System tests covered all aspects of
system behavior, including user interactions, workflows, and system performance.
The researchers executed test cases that simulated real-world user scenarios and
interactions with the system; the tests covered all functional requirements, verifying
that the system behaved as expected. The system test was based on several ISO/IEC
standards, presented in Tables 19, 20, 21, and 22 in the form of Likert scales. The
evaluation was divided into two tests: one intended for professionals related to the
development side of the project, such as IT practitioners and engineers, and a
quality-in-use test intended for professionals directly involved in testing the system in
real-world usage.
The researchers utilized the following statistical methods and tools to evaluate the
data collected during the testing and surveys conducted for the development of the
system.
4-Point Likert Scale. The researchers used this scale to gather data on the level of
agreement in the pre- and post-surveys, as well as to gather insights on the system's
qualities. The 4-point scale was specifically chosen to omit the neutral response
present in the 5-point Likert scale and elicit more definitive answers.
Confidence Computation. The researchers used the YOLOv8 (You Only Look Once)
algorithm for detection. In this algorithm, the confidence of a detection is represented
as the product of the objectness score (the probability that the bounding box contains
an object) and the class probability scores (the probabilities that the detected object
belongs to each class).
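A minimal sketch of this confidence computation (the function and values are illustrative, not taken from the OptiScan code):

```python
def detection_confidence(objectness, class_probs):
    """YOLO-style confidence: P(object) times the best class probability.

    objectness  -- probability that the bounding box contains an object
    class_probs -- per-class probabilities for the boxed object
    Returns (confidence, index_of_best_class).
    """
    best = max(range(len(class_probs)), key=lambda i: class_probs[i])
    return objectness * class_probs[best], best

# Hypothetical box: 95% objectness over classes [normal, cataract, not normal].
conf, cls = detection_confidence(0.95, [0.05, 0.90, 0.05])
```

A box that almost certainly contains something (0.95) but is only 90% sure of the class therefore reports 0.855, which is why reported confidences stay below 100%.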
Weighted Mean. This was used to compute the weighted mean values of the level of
agreement for the pre-survey and post-survey, as well as the accuracy, functionality,
and user-friendliness ratings of the developed system, all of which used 4-point Likert
scales. The weighted mean was also used to describe the results of the initial and
final detection tests. The weighted mean of the confidence scores for an instance was
calculated as the sum of the individual scores multiplied by their weights, divided by
the sum of the weights.
Composite Mean. The composite mean was used to calculate and verify the overall
mean values of the level-of-agreement ratings, the Likert-scale ratings of the
nonfunctional aspects of the system, and the overall detection correctness,
confidence, and detection classification as a whole.
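The two statistics can be sketched as follows; the survey counts below are hypothetical and only illustrate the arithmetic.

```python
def weighted_mean(counts):
    """Weighted mean of 4-point Likert responses.

    counts maps each scale point (1-4) to the number of respondents
    who chose it.
    """
    total = sum(counts.values())
    return sum(point * n for point, n in counts.items()) / total

def composite_mean(means):
    """Composite mean: plain average of the per-item weighted means."""
    return sum(means) / len(means)

# Hypothetical item: 6 respondents chose point 3 ("agree"), 4 chose point 4.
item_mean = weighted_mean({1: 0, 2: 0, 3: 6, 4: 4})   # (3*6 + 4*4) / 10 = 3.4
overall = composite_mean([item_mean, 3.6])            # average over items
```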
Variations of the Likert scale have emerged to suit different research contexts. One
such variation is the 4-point Likert scale, which omits the neutral option. This scale
offers simplicity and efficiency, making it advantageous for surveys with limited space
or where respondent confusion must be reduced, and it may elicit more definitive
responses. These qualities made the 4-point Likert scale a practical choice for the
OptiScan study, which required firm analysis and evaluation of the different aspects of
the data and of the paper as a whole.
Table 19 is the 4-point Likert scale for the users' level of agreement on various
aspects of the system. Each assigned point spans a scale range of 0.75 between
1.00 and 4.00, equivalent to a 25% band per point. Each point has a categorical
verbal interpretation that served as the basis for evaluating the developed OptiScan
Cataract Detection System, with assigned point 4 representing the strongest
agreement.
Table 21 displayed the 4-point Likert scale for the level of functionality of the
developed system in detecting normal eyes and cataracts, as well as for evaluating its
overall functionality. Assigned point 4 of the scale was verbally interpreted as highly
functional, followed by the lower points with correspondingly weaker interpretations.
Table 22 is the 4-point Likert scale for the level of reliability of the developed OptiScan
Detection System. In this scale, point 4 was labeled highly reliable, point 3 reliable,
point 2 less reliable, and point 1 unreliable. This scale was specifically utilized during
the beta testing phase by technical/technology professionals to evaluate the overall
performance of the system, gathering feedback on the system's reliability and
providing valuable insights for further refinement and improvement.
Tables 24, 25, and 26 presented the system testing forms used during the beta testing
phase, which helped evaluators identify issues or areas for improvement and provide
valuable feedback for refinement. Table 24 showed the integration tests done during
the beta testing phase, giving a complete picture of the testing process and explaining
how professionals in the field evaluated different functions of the system. Each entry
in the table described particular system functions and the corresponding test metrics,
which helped professionals check how well the system worked. Through this thorough
evaluation, professionals could find problems or ways to make the system better,
making sure it met the required standards.
During the beta testing phase, technical/technology professionals used the system
test form depicted in Table 25 to assess the time efficiency of the system. Through
this form, professionals verified that the system met the necessary standards and
user expectations. Table 26 provided the format for evaluating the reliability of the
various systems and subsystems after one hour of continuous use.
The successful implementation of the OptiScan system relied on the selection of the
right tools for the overall assessment. In this section, the survey questions for system
evaluation were outlined; the questions were based on the 4-point Likert scales
presented earlier.
Quality in use shifted the attention from the inherent attributes of a system to its
performance and satisfaction in real-world usage contexts. This evaluation in Table 27,
vital for IT professionals and engineers assessing system quality, emphasized factors
like user satisfaction, productivity, efficiency, and safety during interactions with the
system in its intended environment. Unlike product quality, which relied on predefined
criteria, quality in use was subjective and context-dependent. It necessitated gathering
feedback from users and stakeholders to comprehend their experiences, preferences,
and requirements.
In Table 28, the System Quality Evaluation for Ophthalmologists, Health Care Clinic
Staff, and Other Medical Field Officers assessed the quality in use of the OptiScan
system. Quality in use referred to how effectively and satisfactorily users interacted
with the system during their tasks. It assessed factors such as user satisfaction,
productivity, efficiency, and safety, reflecting the real-world usability and effectiveness
of the system. The system was then prepared for deployment to the community for
patient use, with a major focus on maintaining system functionality and detection
accuracy. To ensure that the OptiScan device was used properly, the researchers
provided guidance, which included a detailed written manual about the device's
operation; key instructions were prominently displayed near the control system for
quick reference during use.
Recognizing that hands-on demonstrations were often effective too, the researchers
conducted interactive sessions with the staff at the Health Care Clinic on how to use
the OptiScan device. During the sessions, the researchers demonstrated how to
operate the device, addressed any questions or concerns, and provided practical tips
for maintenance, care, and use cases of the device. This proactive approach aimed to
empower users with the knowledge and skills needed to utilize the OptiScan device
optimally and ensure its continued functionality within the community healthcare
setting.
The OptiScan system, as outlined in Table 29, offered detection for Normal Eye,
Cataract, and Not Normal classifications. It included key indicators for status feedback
and ran on Raspbian (Raspberry Pi OS) with a screen resolution of 1024x768. The
exterior featured a PVC Sintra board as the case and wood as the system's base,
protecting the components from outside interference and shielding the user from
hazards the components might present.
The OptiScan device provided a simple, user-friendly experience for performing eye
examinations. Figure 7 showed the basic usage guide for the device. Users began by
following the specified power sequences to ensure proper startup. Once powered on,
users could explore the various preview modes to enhance the inspection process
and adjust settings accordingly; preview guides were also included in the instructions
for proper placement of the eye. The eye inspection process could then be initiated,
allowing detailed analysis of the captured images. Lastly, printing of the examination
results was facilitated for record keeping.
This section presented the most important numerical data and evaluation results. It
included all the assessments and data gathered, examined, and processed for
interpretation; charts, tables, and figures were used to present the results
comprehensively. The primary objective of this section was to identify the data
obtained from the actual testing for the overall analysis of the study and to verify the
fulfillment of its objectives.
The following are the results of the digital evaluation and analysis of the trained model
on the test images split from the dataset used for training (YOLOv8 training results
and statistics).
Figure 8 showed how well the YOLOv8 classification model learned during training
and validation, and how it performed on new data, in a graph called an F1 curve. It
indicated that the model's best performance, an F1 score of 0.90, occurred when the
classification decision threshold was set to 0.344. This meant the model was most
accurate when it was confident about its predictions but not overly cautious.
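The F1-versus-threshold relationship can be reproduced on toy data; the scores and labels below are illustrative, not the model's actual outputs.

```python
def f1_at_threshold(scores, labels, t):
    """F1 score when only detections with confidence >= t are kept.

    scores -- predicted confidence per candidate detection
    labels -- ground truth (True if an object is really present)
    Uses the identity F1 = 2TP / (2TP + FP + FN).
    """
    tp = sum(1 for s, y in zip(scores, labels) if s >= t and y)
    fp = sum(1 for s, y in zip(scores, labels) if s >= t and not y)
    fn = sum(1 for s, y in zip(scores, labels) if s < t and y)
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

# Sweeping candidate thresholds and keeping the best one mirrors how the
# peak of the F1 curve is located (0.90 at threshold 0.344 in Figure 8).
scores = [0.9, 0.8, 0.6, 0.35, 0.2]
labels = [True, True, True, False, True]
best_t = max(scores, key=lambda t: f1_at_threshold(scores, labels, t))
```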
Figure 9 showed the training and validation loss curves. The train/box loss
represented the average box loss on the training dataset, indicating how well the
model learned to predict the locations of objects during training. The train/cls loss
represented the average class loss in predicting classes on the training dataset,
indicating how well the model learned to classify objects. The val/box loss
represented the average box loss on the validation dataset, indicating how well the
model generalized to unseen data in terms of predicting bounding boxes. Finally, the
val/cls loss represented the average class loss on the validation dataset, showing
how well the model generalized in terms of classifying objects. All metrics showed a
steady decrease in loss during training and validation, indicating that the YOLOv8
model steadily improved its ability to predict bounding boxes and classify objects.
Figure 10 depicted the confusion matrix resulting from the training and evaluation of
the YOLOv8 cataract detection model. The Y-axis of the matrix held the predicted
values from the model evaluation, while the X-axis represented the true or expected
values. For cataracts, out of 73 tested, 3 were incorrectly detected as non-cataract or
not normal, while 70, or 96% of the total, were detected correctly. For the "not normal"
class, a small number were detected incorrectly, while 36, or 92%, were correct. For
the "normal" class, 12 out of 54 were incorrectly detected, while 42, or 78%, were
correct. Lastly, the background class did not have any detections, as the model was
not trained to classify a background image due to the nature of the system setup.
Overall, the detection results demonstrated good accuracy and functionality. While
not reaching 100% accuracy due to factors such as lighting and variations in eye size
or shape, the model performed effectively, detecting cataracts with high accuracy
through its probabilistic detection of eye images.
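The per-class percentages quoted for the matrix follow from dividing correct detections by each class's total; a quick check using only the figures stated above:

```python
def per_class_accuracy(correct, total):
    """Percentage of a class's test images detected correctly,
    rounded to the whole percent as reported in the text."""
    return round(100 * correct / total)

# Figures stated for Figure 10's confusion matrix.
cataract_acc = per_class_accuracy(70, 73)  # cataract class
normal_acc = per_class_accuracy(42, 54)    # normal class
```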
Table 30 detailed the initial results derived from the hardware and software unit
testing. The test compared expected outputs, based on the OptiScan system's
software and hardware requirements, against actual unit test results. Interpretation of
the results relied on the deviation of each actual result from the hardware's minimum
expected operational output value. The results demonstrated that all components,
such as the single-board computer, camera module, and Nema 17 stepper motors,
fulfilled their tasks without issues. The Nema 17 stepper motor was chosen for its
reliability and performance after the initial stepper motor (Nema 24), used only for
trials, proved unsuitable.
For Tables 31, 32, and 33, the confidence level did not reach 100% due to several
factors. First, the detection process relied on probabilities and bounding boxes, with
the YOLOv8 model assigning a certainty level to each detection based on its learned
features. Second, external factors like lighting conditions could also affect the
accuracy of the detection process.
Table 31 illustrated a high level of accuracy in the detection process, with a weighted
mean of 90.24%. This figure indicated that out of the total trials examined, almost all
were identified correctly, totaling 10 successful detections. This level of accuracy was
promising for real-world applications where precision was important. Moreover, the
consistency of the results underscored the system's stability, confirming not only the
high accuracy of individual detections but also the overall reliability and effectiveness
of the system.
Table 32 recorded a weighted mean of 78.10%, indicating that the majority of trials
resulted in correct detections and demonstrating the system's capacity not only to
identify targets correctly but to do so with a reasonable degree of confidence. Where
a confidence level was reported for an incorrect detection, such cases were
considered invalid because the detection itself was wrong. Table 33 indicated a
moderate level of accuracy in identifying targets: there were 7 instances of correct
detection and 3 incorrect ones, some rated "very inaccurate". This distribution
reflected varying degrees of precision and reliability in the detection process, and
despite the moderate overall accuracy, such cases highlighted areas for improvement.
The composite mean in Table 34 showed that the overall accuracy rate for identifying
"Cataract," "Normal," and "Not Normal" eye conditions was 73.58%. This reflected the
system's performance across various eye conditions, encompassing both normal and
abnormal states. Despite variations between classes, the system achieved
reasonably reliable identification.
Tables 35, 36, and 37 showed the alpha-test integration results. The tables detailed
the comprehensive evaluation of the OptiScan system and its individual subsystems
during the alpha testing phase.
In Table 35, the integration testing outcomes for system functionalities were
presented, indicating that all tested systems and subsystems had successfully
achieved the expected output values of their test steps. As per the results, 100% of
components were very functional, and integration of the OptiScan system was
successful. This suggested that the various components of the system were well
integrated and performing their designated functions; in essence, the system
functionalities were not only operational but also delivering the expected outputs as
specified during development.
In Table 36, the results of system testing for time efficiency were presented, detailing
a breakdown of testing metrics and corresponding results for the various systems and
subsystems within OptiScan. The test results indicated that each subsystem operated
efficiently when conducting diagnostic tasks. For example, the OptiScan system
completed a basic eye examination procedure for both eyes of a patient in
approximately 1 minute and 45 seconds. These findings offered valuable insights into
the performance and efficiency of each subsystem, aiding the optimization and
improvement of the OptiScan system.
Table 37 indicated that 100% of the systems and subsystems remained functional
during testing based on the test metric of 8 hours of continuous usage, highlighting
the system's reliability under extended operation.
Tables 38, 39, 40, and 41 provided the results of the beta-test integration tests,
offering a comprehensive view of the functionality, accuracy, and reliability of the
system components. Each test examined the components' performance under
conditions representative of real-world applications.
Table 38 illustrated that all tested systems and subsystems had effectively met their
intended functionalities as per the provided test descriptions and minimum expected
output values of the test steps. The integration tests produced a weighted mean of
93.57%, denoted as "Highly Functional," indicating successful performance across
the system.
Moving on to Table 39, the results demonstrated a high level of efficiency across the
various systems and subsystems within OptiScan, as reflected in the weighted mean.
Table 40 further elaborated on the efficiency of each subsystem within the OptiScan
system, with all components successfully completing their tasks in under one minute.
These results contributed to an average time of 64.75 seconds for a basic eye
examination, showing that each subsystem performed its essential functions promptly
and enhancing the overall efficiency of the examination process.
Table 41 showed that all systems and subsystems remained fully functional during
continuous-usage testing. The tests resulted in a weighted mean of 96.43%, verbally
interpreted as "Highly Reliable." This high reliability score indicated that the OptiScan
system performed consistently across its components and subsystems, meeting strict
standards for sustained operation. This strong reliability demonstrated the system's
capability to function effectively under extended use, providing confidence in its
practical application and overall durability.
Table 42 presented the results of the system testing phase, specifically focusing on
classification accuracy for cataract cases. The verbal interpretation labeled the
weighted mean accuracy level as "Accurate," further confirming the system's
performance: all 10 out of 10 patients with cataracts were detected correctly during
testing, highlighting the reliability and strength of the developed system in identifying
cataracts.
Table 43 presented the classification results for Normal Eye during system testing.
The weighted mean accuracy was documented at 71.19%, indicating a high
proportion of correct classifications for healthy eyes.
Table 44 illustrated the classification outcomes for Abnormal Eye during system
testing, with a weighted mean accuracy of 68.88% indicating a significant proportion
of correct classifications. Table 45 summarized the combined results for "Cataract,"
"Normal," and "Not Normal"; despite variations in accuracy rates across classes, the
system performed consistently overall.
Table 46 showed the results of the System Quality Evaluation conducted among
Medical Field Professionals. A weighted mean of 92.52% was observed across the
general and sub-characteristics, matched against the equivalent features of the
developed system: Positioning, OptiScan User Interface, Eye Detection and
Classification Model, Printing of Results, Eye Images, Examination Results, Device's
Interior and Exterior, System Generated Virtual Results, and System Generated
Printed Results. The evaluation indicated that the system was considered "Highly
Acceptable" across the different characteristics, including Maintainability, Security,
Reliability, and Functional Suitability, based on the composite weighted mean.
Based on Table 47, the evaluation results from various Technology/Technical Field
professionals yielded a composite weighted mean of 90.61%, signifying a high level of
acceptance for the developed system and indicating that its functionality and
performance met the evaluators' expectations. Combining the ratings from the different
groups of medical professionals, including ophthalmologists, health care clinic staff,
and other medical field officers, the overall evaluation demonstrated a composite
weighted mean of 92.02%. This indicates a strong level of acceptance for the
developed system across the various general and sub-characteristics, and the
consistent rating of "Highly Acceptable" from the different groups of experts
underscores the positive feedback, approval, and support for the system.
For the software, the application utilized frameworks such as Tkinter for the user
interface of the OptiScan system and employed essential libraries like Pandas, PIL,
and OpenCV for data handling and image-processing capabilities. The development
process and the final features of the device aligned closely with the study's defined
objectives. These included the creation of a device capable of precise eye image
capture, offering a standardized initial cataract diagnosis, and facilitating patient
education regarding the diagnosis.
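As a rough, simplified sketch of how the capture-and-classify flow fits together (the class order, helper names, and score values are assumptions; the trained CNN itself is not reproduced here), the pipeline normalizes a captured grayscale eye image and maps the model's output scores to one of the three labels:

```python
CLASSES = ["Cataract", "Normal", "Not Normal"]  # assumed class order

def normalize(pixels):
    """Scale 8-bit grayscale pixel rows into [0, 1] as model input."""
    return [[value / 255.0 for value in row] for row in pixels]

def classify(scores):
    """Pick the label with the highest score (e.g. a CNN's softmax output)."""
    return CLASSES[max(range(len(scores)), key=lambda i: scores[i])]

# Stand-in model output for one captured eye image:
label = classify([0.82, 0.11, 0.07])
```

In the actual system this step would sit between the OpenCV capture code and the Tkinter result screen, with the stand-in scores replaced by the trained CNN's predictions.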
The system achieved a weighted mean accuracy of 69.86% in actual use, classified as
"Accurate" based on the 4-point Likert scale. The overall accuracy, encompassing
both usage trials and actual patient testing, stood at 71.72%, also classified as
"Accurate." These findings underscored the effectiveness and reliability of the
developed system for basic health center use and initial cataract diagnostic
applications, as well as its acceptance among its target users.
Through unit tests, integration tests, and system tests, the developed system
demonstrated consistent levels of performance. To enhance the device's utility and
address broader healthcare needs, a few professionals in the field recommended that
future developments expand its scope beyond cataract detection, which would
significantly increase its clinical impact. Evaluators, notably those from the community
where the device was deployed, showed a high level of agreement with and
acceptance of the developed system, as well as regular use of the device during a
week of usage trials within the community. This exhibits the device's impact on the
target demographic and highlights its potential for seamless integration into local
healthcare settings.
Overall, with the developed device, the project made significant strides in addressing
the stated problems and serving remote areas lacking access to specialized eye care
facilities. Through the developed hardware device capable of capturing eye images
and integrating Convolutional Neural Networks (CNNs) for automated cataract
identification, the project fulfilled its primary objective. The Health Care Clinic at
Barangay Concepcion, Lubao, Pampanga, gained access to an initial diagnostic tool
that enhanced patient care and treatment recommendations, and the underserved
community gained access to initial cataract diagnosis and early intervention.
REFERENCES
B & H Foto & Electronics Corp. (2024). SanDisk 64GB Extreme PRO UHS-I SDXC.
Canon. PIXMA TS207 Specification. Retrieved from https://ph.canon/en/business/pixma-ts207/specification?category=printing&subCategory=pixma
Codacy Quality (2021). An Exploration of the ISO/IEC 25010 Software Quality Model.
Components101 (2018). Joystick Module. Retrieved from https://components101.com/modules/joystick-module
Emma Nash et al. (2013). Cataracts. Retrieved from https://journals.sagepub.com/doi/10.1177/1755738013477547
DFRobot Support (n.d.). TB6600 Stepper Motor Driver. Retrieved from https://www.dfrobot.com/product-1547.html
Hakan Asal (n.d.). What is OpenCV? Retrieved from https://medium.com/@hakanasal51/what-is-opencv-fc73e4695625
Hongpeng Sun et al. (2014). Age-Related Cataract, Cataract Surgery and Subsequent Mortality: A Systematic Review and Meta-Analysis. Retrieved from https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=4b18a90ecfc33898048cacc3f692eb8f7b6e6bef
Intel (2018). Intel Distribution for Python 2018 Update 1 Release Notes.
Ishitaa Jindal et al. (2019). Cataract Detection using Digital Image Processing. Retrieved from https://ieeexplore.ieee.org/abstract/document/8978316/authors#authors
Ivan Dave F. Agustino et al. (2019). SBC-Based Cataract Detection System Using Deep Convolutional Neural Network for Asia Pacific Eye.
Lewin, K. (1946). Action research and minority problems. Journal of Social Issues, 2(4).
Makerlab Electronics (2024). Wantai Stepper Motor Nema 34 97mm 4A 54kg-cm 86BYGH450B-003. Retrieved from https://www.makerlab-electronics.com/products/wantai-stepper-motor-nema-34-97mm-4a-54kg-cm-86bygh450b-003
Manish Chablani (2017). YOLO — You only look once, real time object detection explained.
Microchip Technology (2008). 8-bit AVR Microcontroller with 32K Bytes In-System Programmable Flash: ATmega328P Datasheet. Retrieved from https://ww1.microchip.com/downloads/en/DeviceDoc/Atmel-7810-Automotive-Microcontrollers-ATmega328P_Datasheet.pdf
Mitchell Finzel (2017). "Convolutional Neural Networks in Medical Imaging," Scholarly Horizons.
Nana Yaa Koomson et al. (2019). Accessibility and Barriers to Uptake of Ophthalmic Services among Rural Communities in the Upper Denkyira West District, Ghana.
Raspberry Pi (2022). Raspberry Pi High Quality Camera and Lenses. Retrieved from https://www.raspberrypi.com/products/raspberry-pi-high-quality-camera/
ResearchGate (n.d.). Hardware specifications of Google Colaboratory. Retrieved from https://www.researchgate.net/figure/Hardware-specifications-of-Google-Colaboratory_tbl2_353925144
Roberto Bellucci (2019). Newer Technologies for Cataract Surgeries. Retrieved from https://link.springer.com/chapter/10.1007/978-981-13-9795-0_1
Shenming Hu et al. (2021). ACCV: automatic classification algorithm of cataract video based on deep learning. Retrieved from https://biomedical-engineering-online.biomedcentral.com/articles/10.1186/s12938-021-00906-3
Shuai Yuan et al. (2022). Metabolic and lifestyle factors in relation to senile cataract: a Mendelian randomization study. Retrieved from https://www.nature.com/articles/s41598-021-04515-x
SparkFun Electronics (n.d.). Raspberry Pi LCD - 7" Touchscreen. Retrieved from https://www.sparkfun.com/products/13733
Stack Exchange Inc. (2024). Minimum System Requirements to run Python & tkinter. Retrieved from https://stackoverflow.com/questions/64305083/minimum-system-requirements-to-run-python-tkinter
Suranjit Kosta (2023). Comparison of Agile and Scaled Agile Framework. Retrieved from https://www.linkedin.com/pulse/comparison-agile-scaled-framework-suranjit-kosta
TensorFlow (2022). Google Summer of Code. Retrieved from https://summerofcode.withgoogle.com/archive/2022/organizations/tensorflow
The Fred Hollows Foundation (2020). PHILIPPINES Where We Work. Retrieved from https://www.hollows.org/us/where-we-work/south-east-asia/philippines-2
Tripathi, P., et al. (2022). MTCD: Cataract detection via near infrared eye images. Retrieved from https://doi.org/10.1016/j.cviu.2021.103303
Ultralytics (n.d.). YOLOv8: Supported Tasks and Modes. Retrieved from https://docs.ultralytics.com/models/yolov8/#supported-tasks-and-modes
Waveshare (2016). RPi Camera (D), Raspberry Pi Camera Module, Fixed-focus. Retrieved from https://www.waveshare.com/rpi-camera-d.htm
Yadav, M. R., & M, W. N. (2017). Cataract detection. International Journal of Advanced Research in Computer and Communication Engineering. Retrieved from https://doi.org/10.17148/ijarcce.2017.6662