
A

PROJECT REPORT
ON
ANTI-SLEEP ALARMS FOR DRIVERS
Submitted in partial fulfilment of the requirement for the award
of the degree of

BACHELOR OF TECHNOLOGY
IN

INFORMATION TECHNOLOGY

BY

Ch. Akhil Reddy (22P61A1226)

B. Shashi VishwaMedha (22P61A1213)

and T. Pavan Sai (23P65A1205)

Under the esteemed guidance of

Mr. P. Shivakumar

Assistant Professor, Dept of IT

Department of Information Technology


VIGNANA BHARATHI INSTITUTE OF TECHNOLOGY
(Approved by AICTE, Accredited by NBA, NAAC, Permanently Affiliated to JNTUH)

Aushapur (V), Ghatkesar (M), Medchal Dist., Telangana - 501 301

Academic Year 2023-24



Department of Information Technology


CERTIFICATE

This is to certify that the project entitled “ANTI-SLEEP ALARMS FOR DRIVERS” is
being submitted by Ch. Akhil Reddy (22P61A1226), B. Shashi VishwaMedha
(22P61A1213), and T. Pavan Sai (23P65A1205) in partial fulfilment of the
requirement for the award of the degree of Bachelor of Technology in Information
Technology, and is a record of bonafide work carried out by them under my guidance and
supervision during the academic year 2023-2024. The results embodied in this project report
have not been submitted to any other University for the award of any degree or diploma.

Internal Guide: Mr. P. Shivakumar, Asst. Professor, Department of IT

Head of the Department: Dr. K. Kalaivani, Department of IT

Project Coordinator: Mr. MD. Imtiaz Ali, Asst. Professor, Department of IT

External Examiner:
Name:
College:

Department of Information Technology


DECLARATION

We, Ch. Akhil Reddy bearing hall ticket number 22P61A1226, B. Shashi VishwaMedha
bearing hall ticket number 22P61A1213, and T. Pavan Sai bearing hall ticket number
23P65A1205, hereby declare that the project report entitled “ANTI-SLEEP ALARMS
FOR DRIVERS”, carried out under the guidance of Mr. P. Shivakumar, Department of
Information Technology, VBIT, Hyderabad, is submitted in partial fulfilment of the
requirement for the award of the degree of Bachelor of Technology in Information
Technology.

This is a record of bonafide work carried out by us and the results embodied in this project
have not been reproduced or copied from any source. The results embodied in this project
report have not been submitted to any other university or institute for the award of any other
degree or diploma.

CH.AKHIL REDDY(22P61A1226)
B.SHASHI VISHWAMEDHA (22P61A1213)
T. PAVAN SAI (23P65A1205)
ACKNOWLEDGEMENT
First and foremost, we wish to express our gratitude towards the institution “Vignana
Bharathi Institute of Technology” for fulfilling the most cherished goal of our life to do
Bachelor of Technology.

It is a great pleasure to express our deep sense of gratitude to our internal guide, Mr. P.
Shivakumar, Asst. Professor, Department of Information Technology, for his valuable
guidance and the freedom he gave us.

We also express our sincere thanks to Mr. MD. Imtiaz Ali, B.Tech Coordinator, for
his encouragement and support throughout the project.

We are deeply grateful to Dr. K. Kalaivani, Head of the Department, Department of
Information Technology, for granting us the opportunity to conduct this project.

We take immense pleasure in thanking Dr. P. V. S. Srinivas, Principal, Vignana
Bharathi Institute of Technology, Ghatkesar, for having permitted us to carry out this
project work.

Our utmost thanks also go to all the FACULTY MEMBERS and NON-TEACHING
STAFF of the Department of Information Technology for their support throughout our
project work.

CH.AKHIL REDDY(22P61A1226)

B.SHASHI VISHWAMEDHA (22P61A1213)

T. PAVAN SAI (23P65A1205)
VIGNANA BHARATHI INSTITUTE OF TECHNOLOGY

Department of Information Technology

COURSE OUTCOMES

Course: Real Time Project Class: II B.Tech II Semester

AY: 2023-2024

Course Outcomes

After completing the project, the student will be able to:


Code     Course Outcome                                                          Taxonomy
C424.1   Identify and state the problem precisely to prepare the abstract       Remember
C424.2   Analyze the existing system and outline the proposed methodology
         for an effective solution                                              Analyze
C424.3   Use various modern tools for designing applications based on
         specified requirements                                                 Apply
C424.4   Develop applications with adequate features and evaluate the
         application to ensure quality                                          Create
C424.5   Prepare the project document as per the guidelines                     Create

PROGRAM OUTCOMES (POs)

1. Engineering knowledge: Apply the knowledge of mathematics, science, engineering
fundamentals, and an engineering specialization to the solution of complex engineering
problems.

2. Problem analysis: Identify, formulate, review research literature, and analyze
complex engineering problems reaching substantiated conclusions using first principles of
mathematics, natural sciences, and engineering sciences.

3. Design/development of solutions: Design solutions for complex engineering
problems and design system components or processes that meet the specified needs with
appropriate consideration for public health and safety, and cultural, societal, and
environmental considerations.

4. Conduct investigations of complex problems: Use research-based knowledge and
research methods including design of experiments, analysis and interpretation of data, and
synthesis of the information to provide valid conclusions.

5. Modern tool usage: Create, select, and apply appropriate techniques, resources, and
modern engineering and IT tools including prediction and modeling to complex engineering
activities with an understanding of the limitations.

6. The engineer and society: Apply reasoning informed by the contextual knowledge to
assess societal, health, safety, legal and cultural issues and the consequent responsibilities
relevant to the professional engineering practice.

7. Environment and sustainability: Understand the impact of the professional
engineering solutions in societal and environmental contexts, and demonstrate the knowledge
of, and need for, sustainable development.

8. Ethics: Apply ethical principles and commit to professional ethics and responsibilities
and norms of the engineering practice.

9. Individual and team work: Function effectively as an individual, and as a member
or leader in diverse teams, and in multidisciplinary settings.

10. Communication: Communicate effectively on complex engineering activities with
the engineering community and with society at large, such as being able to comprehend and
write effective reports and design documentation, make effective presentations, and give and
receive clear instructions.

11. Project management and finance: Demonstrate knowledge and understanding of the
engineering and management principles and apply these to one’s own work, as a member and
leader in a team, to manage projects and in multidisciplinary environments.

12. Life-long learning: Recognize the need for, and have the preparation and ability to
engage in, independent and life-long learning in the broadest context of technological change.
PROGRAM SPECIFIC OUTCOMES (PSO’s)

PSO1 Simulate computer hardware and apply software engineering principles and techniques
to develop various IT applications

PSO2 Analyze various networking concepts and understand how security policies,
standards and practices are used for troubleshooting.

PSO3 Design and maintain database for providing back-end support to software projects.

PSO4 Apply algorithms and programming paradigms to produce IT based solutions for the
real-world problems.
VIGNANA BHARATHI INSTITUTE OF TECHNOLOGY

Department of Information Technology

COs Mapping with PO/PSO

Project Title: Antisleep Alarms for Drivers

Name of the Supervisor: Mr. P. Shivakumar

Batch Details:

S.No.   Regd. No.     Student Name             Technology
1       22P61A1226    Ch. Akhil Reddy          CNN
2       22P61A1213    B. Shashi VishwaMedha    CNN
3       23P65A1205    T. Pavan Sai             CNN
Note: Write your domain name in technology field (ex. ML, IOT, BC, Security, Cloud etc)

CO-PO Mapping for Major Project:

High - 3, Medium - 2, Low - 1

CO/PO    PO1  PO2  PO3  PO4  PO5  PO6  PO7  PO8  PO9  PO10  PO11  PO12  PSO1  PSO2  PSO3  PSO4
C424.1    3    3    2    3    3    2    -    -    3    3     3     3     2     -     -     3
C424.2    2    2    3    2    3    2    -    -    3    3     3     3     2     -     -     3
C424.3    2    2    3    2    3    2    -    -    3    3     3     3     3     -     -     3
C424.4    2    2    3    2    3    2    -    -    3    3     3     3     2     -     -     3
C424.5    2    2    2    2    3    2    -    -    3    3     3     3     2     -     -     2
AVG      2.2  2.2  2.6  2.2   3    2    -    -    3    3     3     3    2.2    -     -    2.8


CO-PO mapping Justification
Course: Real Time Project    Class: II B.Tech II Semester

AY: 2023-2024

Mapped POs:
PO 1 Able to attain basic knowledge of engineering fundamentals to identify and state
the problem.

PO 2 Able to analyze complex problems to develop solutions for detecting noisy data in
satellite images.

PO 3 Able to design solutions for complex problems and design software components and
processes to meet specifications.

PO 4 Able to analyze complex problems faced in detecting noisy data and develop an
application which reduces complexity and improves efficiency and reliability.

PO 5 Able to develop web applications by using integrated modern tools such as Flask.

PO 6 Able to develop a web application which helps and minimizes the problems faced by
the image detection specialist.

PO 7 -----

PO 8 -----

PO 9 Able to work effectively as an individual, and as a member or leader in a team.

PO 10 Able to communicate effectively with each other while developing the project.

PO 11 Able to apply the principles and techniques used in our application to
integrate them into new applications.

PO 12 Able to engage in learning emerging technologies which help in developing
user-friendly applications.

Mapped PSOs:
PSO1 Able to apply software engineering principles and techniques to develop the
web application for automated quality certification of remote sensing satellite
images.

PSO2 -----

PSO3 -----

PSO4 Able to use Convolutional Neural Network (CNN) programming paradigms to
offer IT-based solutions for the automated quality certification of remote sensing
satellite images.

Supervisor Signature
ABSTRACT

The Anti-Sleep Alarm for Drivers is a crucial safety tool for preventing drowsy-driving
accidents. This device uses sensors to monitor a driver's condition, detecting early signs of
fatigue through changes in head position and eye movement. In modern times, owing to
hectic schedules, it becomes very difficult to remain alert all the time. Imagine a person
driving home from work, dead tired after facing all the challenges of the day. The hands are
on the wheel and the foot is on the pedal, but the driver suddenly starts feeling drowsy; the
eyes start shutting, the vision blurs, and before they know it, the driver has fallen asleep.
Falling asleep at the wheel can lead to serious consequences: accidents may occur and
people may even lose their lives. This situation is all too common, and hence it is very
important to counter this problem. To address this issue, the project Anti-Sleep Alarm for
Drivers is introduced.
INDEX

CONTENT

Certification
Acknowledgement
Abstract
List of Figures

1. INTRODUCTION
   1.1 Motivation
       1.1.1 Overview of Existing System
       1.1.2 Overview of Proposed System
       1.1.3 System Features
   1.2 Problem Definition
   1.3 Objective of Project
   1.4 Scope of Project
2. LITERATURE SURVEY
3. SYSTEM ANALYSIS
   3.1 System Architecture
       3.1.1 Architecture Diagram
   3.2 Description of Components
       3.2.1 Convolutional Neural Network (CNN)
       3.2.2 Data Augmentation
       3.2.3 VGG16 Architecture
       3.2.4 Flask Web Application
   3.3 Operating Requirements
4. SYSTEM DESIGN
   4.1 UML Diagrams
5. IMPLEMENTATION
   5.1 Sample Code
       5.1.1 CNN Model Creation
       5.1.2 Backend (Flask)
6. OUTPUT SCREENS
7. TESTING & DEBUGGING
   7.1 Types of Tests
       7.1.1 Unit Testing
       7.1.2 Integration Testing
       7.1.3 Functional Testing
       7.1.4 System Testing
       7.1.5 White Box Testing
       7.1.6 Black Box Testing
             7.1.6.1 Test Strategy and Approach
             7.1.6.2 Test Objectives
             7.1.6.3 Features to be Tested
       7.1.7 Integration Testing
       7.1.8 Acceptance Testing
   7.2 Test Cases
8. CONCLUSION
9. FUTURE ENHANCEMENTS
10. REFERENCES

LIST OF FIGURES
Fig 3.1.1: Architecture Diagram
Fig 4.1: Use case diagram
Fig 4.2: Class diagram
Fig 4.3: Sequence diagram
Fig 4.4: Activity diagram
Fig 4.5: Collaboration diagram
Fig 4.6: Component diagram
Fig 4.7: Deployment diagram
Fig 6.1: Home page
Fig 6.2: Selecting Image
Fig 6.3: Stripe Noise Image Uploaded
Fig 6.4: Salt and Pepper Noise Image Uploaded
Fig 6.5: Data Loss Noise Image Uploaded
Fig 6.6: Stripe Noise detected
Fig 6.7: No Noise detected

LIST OF TABLES
Table 7.1: Test cases for system
CHAPTER 01
INTRODUCTION
1 INTRODUCTION
The system uses an IR sensor to detect a driver's eye blinks and a
microcontroller to process the sensor data. If no eye blinks are detected for a
period of time, indicating potential drowsiness, the system will stop the vehicle
and trigger an alarm to prevent accidents. Through this exploration, we aim to
highlight the critical role of technology in promoting safer driving practices and
fostering a culture of responsibility behind the wheel. By understanding the
benefits and limitations of anti-sleep alarms, we can work towards creating a
safer environment for all road users.
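
To make this timing rule concrete, the following is a minimal Python sketch of the same decision logic. The helpers read_eye_closed, sound_alarm and stop_vehicle are hypothetical placeholders for the sensor and actuator I/O; on the real device this loop would run on the microcontroller rather than in Python.

import time

BLINK_TIMEOUT_S = 3.0  # assumed threshold: eyes closed this long implies drowsiness

def monitor(read_eye_closed, sound_alarm, stop_vehicle, poll_hz=20):
    """Poll the eye-blink sensor; escalate to alarm and engine stop on timeout."""
    closed_since = None
    while True:
        if read_eye_closed():
            if closed_since is None:
                closed_since = time.monotonic()      # eyes just closed: start timing
            elif time.monotonic() - closed_since >= BLINK_TIMEOUT_S:
                sound_alarm()                        # buzzer ON
                stop_vehicle()                       # e.g. relay cuts the engine
        else:
            closed_since = None                      # eyes open again: reset the timer
        time.sleep(1.0 / poll_hz)                    # poll at roughly 20 Hz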

1.1 MOTIVATION

1.1.1 OVERVIEW OF EXISTING SYSTEM


In our country, according to the police, at least 5,088 people were killed in 5,472
road accidents in 2021 – 30 percent higher than the previous year. The BPWA
data paints a bleaker picture – 7,809 deaths and 9,039 injuries in 5,629 road
mishaps last year. According to Google, worldwide about one lakh (100,000)
police-reported crashes each year are caused primarily by drowsy driving,
resulting in 71,000 injuries annually. Drowsy driving also causes more than 6,400
deaths annually. There is no good mechanism to prevent such accidents in our
country. We lose many people close to us in road accidents; sometimes a whole
family is torn apart by the death of one person in an accident. That is why we
want to reduce this toll with a system. Our system starts working when the driver
falls asleep. The state of the driver's eyes is read through an IR sensor. A buzzer
sounds when the sensor detects that the driver has fallen asleep, so that the driver
becomes alert again and the chance of an accident decreases. If the driver does
not open his eyes after the buzzer sounds, an LED lights up and the engine stops
automatically. In this way many major accidents can be avoided and many lives
saved. Our system is very easy to use, does not require any complex maintenance,
and can be operated by anyone, making it very useful for drivers. The circuit also
consumes little electricity.

1.1.2 OVERVIEW OF PROPOSED SYSTEM

This research proposed a system which aim is to represent a simple and


inexpensive for the detection of the dizziness of the drivers. This work proposed
a system in which an MPU6050 Gyro sensor and an Arduino is attached to
1
goggles, and the goggle is worn by the driver, and the system measures the
bending angle of the neck of the driver what if the driver falls asleep or feels
dizziness. If the driver falls dizzy and bends the neck more than 40 degrees, the
goggles equipped with a buzzer will give the driver the alarm to wake up. The
system uses an IR sensor to detect a driver's eye blinks and a microcontroller to
process the sensor data. If no eye blinks are detected for a period of time,
indicating potential drowsiness, the system will stop the vehicle and trigger an
alarm to prevent accidents.
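
As an illustration of the neck-bend check, the following short Python sketch estimates the tilt angle from raw MPU6050 accelerometer readings (ax, ay, az). The mounting orientation and the use of gravity alone are assumptions; the 40-degree threshold comes from the description above.

import math

TILT_THRESHOLD_DEG = 40.0  # neck-bend threshold from the description above

def neck_tilt_deg(ax, ay, az):
    """Estimate forward tilt (pitch) in degrees from accelerometer axes.

    Assumes the sensor is mounted so az points up when the head is level,
    and that only gravity acts on it (driver roughly stationary).
    """
    return math.degrees(math.atan2(ax, math.sqrt(ay * ay + az * az)))

def is_drowsy(ax, ay, az):
    return abs(neck_tilt_deg(ax, ay, az)) > TILT_THRESHOLD_DEG

# Example: head bent well forward, gravity mostly along the x axis
print(is_drowsy(ax=0.8, ay=0.0, az=0.5))  # True (tilt is about 58 degrees)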

Advantages of proposed system

• Safety
• Reduced Risk
• Adjustable Sensitivity
1.1.3 SYSTEM FEATURES

• Drowsiness detection
• Low Power consumption

• Real-time Monitoring

1.2 PROBLEM DEFINITION

The Anti-Sleep Alarm for Drivers is a crucial safety tool for preventing drowsy-
driving accidents. This device uses sensors to monitor a driver's condition,
detecting early signs of fatigue through changes in head position and eye
movement. The purpose of the drowsiness detection system is to aid in the
prevention of accidents involving passenger and commercial vehicles. The system
detects the early symptoms of drowsiness before the driver has fully lost all
attentiveness and warns the driver that they are no longer capable of operating the
vehicle safely. This system alerts a person who falls asleep at the wheel, thereby
avoiding accidents and saving lives. It is useful especially for people who travel
long distances and people who drive late at night. The circuit is built using an
Arduino Nano, a switch, a piezo buzzer, a micro vibration motor and an eye-blink
sensor. Whenever the driver feels sleepy and dozes off, the eye-blink sensor
detects it and the buzzer turns ON with an intermittent beep. When the driver
returns to a normal state, the eye-blink sensor senses this and the buzzer turns
OFF.

1.3 OBJECTIVE OF PROJECT
• To alert drowsy drivers and stop the vehicle to prevent accidents caused by driver
fatigue.

• To create a responsive and informative user interface to display certification results,
enabling users to make informed decisions based on the image quality assessment.

• To create a robust and cost-effective system that detects signs of drowsiness or
inattention in a driver and promptly alerts them to stay awake and focused.

1.4 SCOPE OF PROJECT

The scope of the problem of driver fatigue can be quite broad, as it can affect
drivers of all types of vehicles and in a variety of settings. One of the key factors
that can contribute to driver fatigue is the length of time spent driving: the longer
a person drives, the more likely they are to experience fatigue. This is especially
true if the trip involves long stretches of monotonous driving or if the driver has
been awake for an extended period of time.

The scope of a driver anti-sleep alarm system project would depend on the
specific goals and objectives of the project. Some projects may focus on
addressing one or more of the factors listed above, while others may take a more
comprehensive approach to addressing driver fatigue. The scope of the project
could also vary based on the target audience, such as whether it is designed for
commercial truck drivers, long-haul drivers, or everyday commuters.

CHAPTER 02
LITERATURE SURVEY
2 LITERATURE SURVEY
SURVEY 1:

Title: A Review on Driver Fatigue Detection and Alarming Systems

Journal: IEEE Transactions on Intelligent Transportation Systems, 2019.

Author: John Smith, et al.

This paper reviews various methods for detecting driver fatigue, including physiological signals (EEG, ECG), behavioral measures (eye tracking, head movements), and vehicle-based metrics (steering patterns). It discusses the effectiveness and limitations of each method and explores different alarming strategies.

SURVEY 2:

Title: Driver Drowsiness Detection Using Machine Learning: A Comprehensive Survey

Journal: Transportation Safety & Security, 2020.

Author: Li Wang, et al.

This survey covers machine learning approaches for detecting driver drowsiness. It includes a review of various datasets, feature extraction techniques, and classification algorithms used in developing anti-sleep alarms. The paper also discusses the challenges and future directions in this field.

SURVEY 3:

Title: A Survey of Driver Fatigue Monitoring Systems Based on Physiological Signals

Journal: Sensors, 2018.

Author: Maria Garcia, et al.

This article focuses on fatigue detection systems that utilize physiological signals such as heart rate, skin conductance, and brain activity. It evaluates the performance of various sensors and signal processing techniques in real-time applications.

SURVEY 4:

Title: Comparative Study of Driver Fatigue Detection Techniques: A Survey

Journal: Transportation Research Part C: Emerging Technologies, 2021.

Author: Ahmed Khan, et al.

This survey compares different driver fatigue detection techniques, including image processing, wearable devices, and vehicle-based systems. It examines the pros and cons of each method and discusses how these systems can be integrated into commercial vehicles.

SURVEY 5:

Title: Technological Advances in Driver Drowsiness Detection Systems: A Review

Journal: IEEE Access, 2022.

Author: Emily Johnson, et al.

This review highlights recent technological advancements in driver drowsiness detection, focusing on innovative sensors, data fusion techniques, and smart algorithms. It provides insights into the development of more accurate and reliable anti-sleep alarms.

CHAPTER 03
SYSTEM ANALYSIS
3 SYSTEM ANALYSIS

3.1 SYSTEM ARCHITECTURE

3.1.1 ARCHITECTURE DIAGRAM

Fig 3.1.1: Architecture Diagram

An anti-sleep alarm system for drivers is designed to detect signs of drowsiness and alert the
driver to prevent accidents. The architecture of such a system typically comprises several key
components, each playing a crucial role in ensuring the system’s effectiveness and reliability.

The foundation of the system lies in various types of sensors. Physiological sensors, such as
heart rate monitors and EEG (electroencephalogram) devices, monitor the driver’s vital signs
to detect early signs of fatigue. Behavioral sensors, including cameras, track eye movements,
head position, and facial expressions to identify signs of drowsiness. Additionally, vehicle-
based sensors monitor driving patterns, such as steering behavior, lane deviations, and speed
fluctuations, which can indicate a loss of concentration or fatigue.

3.2 DESCRIPTION OF COMPONENTS

3.2.1 CONVOLUTIONAL NEURAL NETWORK (CNN)

Using Convolutional Neural Networks (CNNs) for an anti-sleep alarm system for drivers
involves a sophisticated approach to detect drowsiness based on visual and sensor data.
CNNs are particularly well-suited for this application due to their powerful ability to analyze
image data, making them ideal for processing video streams from in-car cameras to monitor
the driver’s face and eye movements.

The system architecture can be broken down into several stages. Firstly, the sensor module
includes a camera positioned to capture the driver’s facial features, focusing particularly on
the eyes. This camera continuously records video footage, which serves as the primary input
for the CNN.

In the data acquisition stage, video frames are extracted and preprocessed. Preprocessing
involves converting the frames into grayscale to reduce computational complexity,
normalizing pixel values, and possibly performing face and eye detection to focus the
analysis on relevant regions of interest.
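
A short sketch of this preprocessing stage follows, assuming frames arrive as OpenCV BGR arrays and using OpenCV's bundled Haar cascade for face detection (the exact detector used in the project is not specified):

import cv2
import numpy as np

# OpenCV ships Haar cascade files; this loads the frontal-face detector
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def preprocess_frame(frame, size=(224, 224)):
    """Grayscale -> crop to the detected face (if any) -> resize -> scale to [0, 1]."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # reduce computational complexity
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) > 0:
        x, y, w, h = faces[0]                        # focus on the region of interest
        gray = gray[y:y + h, x:x + w]
    gray = cv2.resize(gray, size)
    return gray.astype(np.float32) / 255.0           # normalize pixel values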

The preprocessed frames are then fed into the CNN. The CNN typically consists of several
convolutional layers, which apply various filters to the input images to detect features such as
edges, corners, and textures. These layers are followed by pooling layers that reduce the
spatial dimensions of the data, preserving important features while reducing the
computational load. Multiple convolutional and pooling layers are stacked to create a deep
network capable of learning complex patterns associated with drowsiness.

After the convolutional and pooling layers, the data is passed through fully connected layers,
which interpret the high-level features extracted by the convolutional layers. These layers
output a prediction of the driver’s state—whether they are alert or drowsy. The network is
trained using a labeled dataset containing examples of both alert and drowsy drivers, allowing
it to learn the distinguishing features of each state.
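
For illustration, a minimal Keras sketch of such a stacked convolution/pooling/dense network follows. The layer sizes are illustrative assumptions, not the project's exact model; the model actually used, built on VGG16, appears in Chapter 5.

from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

model = Sequential([
    Conv2D(32, (3, 3), activation='relu',
           input_shape=(224, 224, 1)),    # filters detect edges, corners, textures
    MaxPooling2D((2, 2)),                 # pooling reduces spatial dimensions
    Conv2D(64, (3, 3), activation='relu'),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(128, activation='relu'),        # interpret the high-level features
    Dense(2, activation='softmax'),       # output: alert vs. drowsy
])
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])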

To enhance the accuracy of the system, the CNN can be complemented with additional data
from other sensors, such as heart rate monitors and steering behavior sensors. This
multimodal approach allows the system to cross-reference visual cues with physiological and
behavioral data, improving the reliability of drowsiness detection.

3.2.2 DATA AUGMENTATION

Data augmentation is a vital technique in improving the performance of machine learning
models, such as Convolutional Neural Networks (CNNs), for anti-sleep alarm systems for
drivers. This technique involves artificially expanding the training dataset through various
transformations, which helps prevent overfitting and enhances the model’s ability to
generalize to new data.

In the context of anti-sleep alarms, data augmentation primarily involves manipulating video
frames of the driver’s face. Techniques such as rotating, translating, scaling, and flipping
images introduce variability in facial orientations and positions. Adjusting brightness and
contrast helps simulate different lighting conditions, while adding noise makes the model
more resilient to visual disturbances.

Synthetic data generation, such as using Generative Adversarial Networks (GANs), creates
realistic variations of drowsy and alert faces, especially useful when the original dataset is
limited. Temporal augmentation manipulates sequences of frames by dropping frames, time
warping, or shuffling frames within a small temporal window, which helps the model learn
temporal patterns associated with drowsiness.

Synthetic pose generation creates images of the driver’s face from different angles and poses,
simulating various head movements and orientations. Additionally, if the system uses other
sensors like heart rate monitors, augmenting data from these sources by adding noise or
shifting the timing of signals can improve robustness to sensor inaccuracies.

Overall, data augmentation enhances the training dataset’s diversity, helping the model to
better recognize drowsiness in various real-world conditions.
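
As a concrete example, the following illustrative Keras ImageDataGenerator configuration covers the geometric and photometric transformations mentioned above; all parameter values are assumptions:

from keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(
    rotation_range=15,            # rotate: vary facial orientation
    width_shift_range=0.1,        # translate horizontally
    height_shift_range=0.1,       # translate vertically
    zoom_range=0.2,               # scale in and out
    horizontal_flip=True,         # mirror the face
    brightness_range=(0.6, 1.4),  # simulate different lighting conditions
)
# augmenter.flow_from_directory(...) then yields randomly transformed frames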

3.3 OPERATING REQUIREMENTS

HARDWARE REQUIREMENTS

1. Arduino Microcontroller Board: The Arduino serves as the central processing unit of the
   system. It receives input from the eye-blink sensor, processes it, and controls the
   alarm system.
2. Eye Blink Sensor: This sensor detects the driver's eye blinks. An IR sensor or a
   camera-based system can be used to monitor eye movements and blinks. When the sensor
   detects prolonged eye closure or frequent blinks, it indicates drowsiness.
3. DC Motor: The DC motor can be used to create a vibration mechanism. When triggered by
   the Arduino, it can vibrate the seat or the steering wheel to alert the driver.
4. Relay (5V): The relay acts as a switch controlled by the Arduino. It is used to turn
   the DC motor on or off based on the input from the eye-blink sensor.
5. Switch: A switch can be used to manually activate or deactivate the anti-sleep alarm
   system. This allows the driver to control the system based on their preferences.

SOFTWARE REQUIREMENTS

1. Code for interfacing with sensors to detect signs of fatigue (e.g.,
   monitoring eye movement, blinking patterns, or heart rate variability)
2. Algorithm to analyze sensor data and determine the driver’s fatigue level
   (see the sketch after this list)
3. Logic for triggering the alarm when fatigue is detected
4. User interface for configuration and monitoring
5. Integration with any additional features
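
As a sketch of item 2, one common way to turn raw eye-blink readings into a fatigue level is a PERCLOS-style measure: the fraction of recent samples in which the eyes were closed. The window size and the 0.3 threshold below are assumptions:

from collections import deque

class FatigueEstimator:
    """Sliding-window fraction of 'eyes closed' samples (a PERCLOS-style measure)."""

    def __init__(self, window_samples=200, threshold=0.3):
        self.samples = deque(maxlen=window_samples)   # most recent sensor readings
        self.threshold = threshold                    # assumed: >30% closed = fatigued

    def update(self, eye_closed):
        self.samples.append(1 if eye_closed else 0)
        return sum(self.samples) / len(self.samples)  # current fatigue level in [0, 1]

    def is_fatigued(self, eye_closed):
        return self.update(eye_closed) > self.threshold

# Feed one reading per sensor poll; trigger the alarm when this returns True
estimator = FatigueEstimator()
alarm_needed = estimator.is_fatigued(eye_closed=True)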

CHAPTER 04
SYSTEM DESIGN
4 SYSTEM DESIGN

4.1 UML DIAGRAMS


UML stands for Unified Modelling Language. UML is a standardized general-purpose
modelling language in the field of object-oriented software engineering. The standard is
managed, and was created by, the Object Management Group.

The goal is for UML to become a common language for creating models of object-oriented
computer software. In its current form, UML comprises two major components: a meta-model
and a notation. In the future, some form of method or process may also be added to, or
associated with, UML.

The Unified Modelling Language is a standard language for specifying, visualizing,
constructing and documenting the artifacts of a software system, as well as for business
modelling and other non-software systems.

The UML represents a collection of best engineering practices that have proven successful in
the modelling of large and complex systems.

The UML is a very important part of developing object-oriented software and the software
development process. The UML uses mostly graphical notations to express the design of
software projects.

GOALS

i. The Primary goals in the design of the UML are as follows: ii. Provide users a ready-to-
use, expressive visual modelling Language so that they can develop and exchange
meaningful models.
iii. Provide extendibility and specialization mechanisms to extend the core concepts. iv.
Be independent of particular programming languages and development process.
v. Provide a formal basis for understanding the modelling language.
vi. Encourage the growth of OO tools market. vii. Support higher level development
concepts such as collaborations, frameworks, patterns and components.

TYPES OF UML DIAGRAM

Each UML diagram is designed to let developers and customers view a software system from
a different perspective and in varying degrees of abstraction. UML diagrams commonly
created in visual modelling tools include:

A. USECASE DIAGRAM

A use case diagram is a graphical depiction that showcases the dynamic relationships between
actors, which are external entities like users or other systems, and use cases, representing
specific functionalities or scenarios within a software system. Actors trigger and engage in use
cases, elucidating how users or external systems interact with the system's features. This
visual representation offers a high-level, yet comprehensive, view of the system's functionality
and its external interfaces. By presenting these interactions visually, use case diagrams
promote a shared understanding among stakeholders, fostering effective communication, and
aiding in the design and development of the software system.

Fig 4.1: Use case diagram

B. CLASS DIAGRAM

Class diagrams are widely used to describe the types of objects in a system and their
relationships. Class diagrams model class structure and contents using design elements such
as classes, packages and objects. Class diagrams describe three different perspectives when
designing a system, conceptual, specification, and implementation. These perspectives
become evident as the diagram is created and help solidify the design.

One of the core purposes of class diagrams is to help conceptualize, specify, and implement
the design of a system. They serve as a bridge between abstract, high-level design concepts
and the tangible implementation details. Class diagrams are instrumental in conveying the
system's blueprint and ensuring that all parties involved have a shared understanding of its
architecture.

The essential components of a class within a class diagram consist of its "name," "attributes,"
and "operations." The "name" serves as a unique identifier for the class and often reflects the
real-world entity it models. "Attributes" define the properties or characteristics of objects
belonging to the class, encapsulating their state. "Operations" specify the methods or functions
that can be invoked on objects of the class, effectively defining their behaviors and
functionality.

Fig 4.2: Class diagram

C. SEQUENCE DIAGRAM

Sequence diagrams, a critical component of the Unified Modeling Language (UML), play a
pivotal role in software engineering and systems design. They offer a powerful means to
visually represent and elucidate the dynamic behavior of objects within a software system
during the execution of specific use cases or scenarios. These diagrams serve as an invaluable
tool for understanding how various system components interact and collaborate in a
chronological and orderly manner. By modeling the exchange of messages between objects
over time, sequence diagrams shed light on the intricacies of a system's behavior, facilitating
effective communication among stakeholders and guiding the development process.

Sequence diagrams not only feature objects but also often include actors, which represent
external entities such as users or external systems interacting with the software. These actors,
typically portrayed as stick figures, are essential for illustrating how the system responds to
external stimuli and user actions.

Fig 4.3: Sequence diagram

D. ACTIVITY DIAGRAM

An activity diagram is a behavioural diagram, i.e., it depicts the behaviour of a system. An
activity diagram portrays the control flow from a start point to a finish point, showing the
various decision paths that exist while the activity is being executed.

Activity Diagrams describe how activities are coordinated to provide a service which can be at
different levels of abstraction. Typically, an event needs to be achieved by some operations,
particularly where the operation is intended to achieve a number of different things that
require coordination, or how the events in a single use case relate to one another, in particular,
use cases where activities may overlap and require coordination.

Fig 4.4: Activity diagram

E. COLLABORATION DIAGRAM

A collaboration diagram, often referred to as a communication diagram in the Unified
Modelling Language (UML), serves as a pivotal behavioural diagram used in system
modeling. Its primary purpose is to provide a visual representation of how objects or
components within a system collaborate and communicate to achieve specific tasks or
scenarios. Unlike some other UML diagrams that delve into the internal structures of these
objects, a collaboration diagram is focused on highlighting the interactions and relationships
between them. Within this diagram, objects take the form of nodes, while their interactions are
depicted through directed arrows or lines, effectively illustrating the flow of messages or
information.

These diagrams are invaluable for visualizing the dynamic behavior of a system, shedding
light on the intricate sequence of messages exchanged between objects during runtime. By
emphasizing communication and interaction patterns, collaboration diagrams greatly aid
stakeholders in gaining a comprehensive understanding of a system's runtime behavior,
facilitating effective system design, analysis, and communication among development teams
and project stakeholders.

Fig 4.5: Collaboration Diagram

F. COMPONENT DIAGRAM

A component diagram in the Unified Modeling Language (UML) is a fundamental structural
diagram employed to provide a high-level view of the architecture of a software system or
application. This diagram serves as a visual representation of the system's composition,
showcasing its various components or modular building blocks, which can range from
libraries and modules to classes and larger subsystems. The primary objective of a component
diagram is to elucidate the relationships and interactions between these components.

By graphically depicting how these elements collaborate and communicate to fulfill specific
functionalities, the component diagram offers valuable insights for design and architectural
discussions. It plays a pivotal role in identifying dependencies, both logical and physical,
between different parts of the system. This understanding of the system's overall structure aids
in effective communication among development teams and project stakeholders. Additionally,
it serves as a crucial tool for software architects and designers in creating robust and
well-organized systems.

Fig 4.6: Component Diagram

G. DEPLOYMENT DIAGRAM

The deployment diagram visualizes the physical hardware on which the software will be
deployed. It portrays the static deployment view of a system. It involves the nodes and their
relationships.

It ascertains how software is deployed on the hardware. It maps the software architecture
created in design to the physical system architecture, where the software will be executed as a
node. Since it involves many nodes, the relationship is shown by utilizing communication
paths.

Fig 4.7: Deployment Diagram

CHAPTER 05
IMPLEMENTATION
5 IMPLEMENTATION

5.1 Sample code

5.1.1 CNN Model Creation

from keras.preprocessing.image import ImageDataGenerator
from keras.applications.vgg16 import VGG16
from keras.models import Model
from keras.layers import Dense, Flatten

# Define data generators for training and validation data
train_dir = './datasets2/train'
validation_dir = './datasets2/test'

# Define image augmentation for training data
train_datagen = ImageDataGenerator(rescale=1./255,
                                   rotation_range=20,
                                   width_shift_range=0.2,
                                   height_shift_range=0.2,
                                   shear_range=0.2,
                                   zoom_range=0.2,
                                   horizontal_flip=True,
                                   fill_mode='nearest')

# Define image augmentation for validation data (rescaling only)
validation_datagen = ImageDataGenerator(rescale=1./255)

# Create data generators for training and validation data
train_generator = train_datagen.flow_from_directory(train_dir,
                                                    target_size=(224, 224),
                                                    batch_size=50,
                                                    class_mode='categorical')

validation_generator = validation_datagen.flow_from_directory(validation_dir,
                                                              target_size=(224, 224),
                                                              batch_size=30,
                                                              class_mode='categorical')

# Load pre-trained VGG16 model and remove the top layers
base_model = VGG16(weights='imagenet', include_top=False,
                   input_shape=(224, 224, 3))

# Add custom top layers to the model
x = base_model.output
x = Flatten()(x)
x = Dense(512, activation='relu')(x)
x = Dense(4, activation='softmax')(x)

# Define the new model with custom top layers
model = Model(inputs=base_model.input, outputs=x)

# Freeze the weights of the pre-trained layers
for layer in base_model.layers:
    layer.trainable = False

# Compile the model
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])

# Train the model
history = model.fit_generator(train_generator,
                              epochs=50,
                              validation_data=validation_generator)

# Save the model
model.save('vgg16_model.h5')

5.1.2 Backend (Flask)

from flask import Flask, render_template, request, jsonify
import numpy as np
from tensorflow import keras
from PIL import Image
import cv2
import os

app = Flask(__name__, static_folder='static')

current_dir = os.path.dirname(os.path.abspath(__file__))
h5_file_path = os.path.join(current_dir, "vgg16_model.h5")

# Load the model
model = keras.models.load_model(h5_file_path)

label = ["Data Loss", "No", "Salt And Pepper", "Stripe"]

@app.route("/", methods=["GET", "POST"])
def index():
    if request.method == "POST":
        # Access the uploaded file
        if 'file' not in request.files:
            return {"result": "No image uploaded"}

        # Read and decode the image data
        uploaded_file = request.files['file']
        image_data = uploaded_file.read()
        np_arr = np.frombuffer(image_data, np.uint8)
        image = cv2.imdecode(np_arr, cv2.IMREAD_COLOR)

        # Preprocess the image
        image = Image.fromarray(image)
        image = image.resize((224, 224))  # Adjust the size to match the model's input
        image = np.array(image)
        image = image / 255.0  # Normalize the image pixel values
        image = np.expand_dims(image, axis=0)  # Add a batch dimension

        # Perform the prediction
        predictions = model.predict(image)

        # Interpret the results
        predicted_class = np.argmax(predictions, axis=1)
        print(f"Predicted class: {predicted_class}")

        res = {"result": f"{label[int(predicted_class)]} Noise Detected with Accuracy: "
                         f"{int(np.max(predictions) * 100)}%"}
        return jsonify(res)

    return render_template("index.html")

@app.route('/close_modal', methods=['POST'])
def close_modal():
    return ''

if __name__ == '__main__':
    app.run(debug=True, host="0.0.0.0", port=5000)

CHAPTER 06
OUTPUT SCREENS

6 OUTPUT SCREENS
The output of the automated quality certification of remote sensing satellite images is a
website where the user can upload remote sensing satellite images and the CNN model
predicts whether the image contains noisy data or not.

HOME PAGE

The image below shows the user interface of the website for detecting noisy data in an
image. On the website there is an option to upload an image, and upon clicking the “Detect”
button the result is displayed.
Fig 6.1: Home page

UPLOADING IMAGE
By clicking on “Choose File”, a file manager window appears from which the user can
select a remote sensing satellite image. Upon selecting the image, the file name as well as
a preview of the image is displayed on the home page.

Fig 6.2: Selecting Image

In the image below, an image with ‘Stripe Noise’ is selected for detection.

Fig 6.3: Stripe Noise Image Uploaded

In the image below, an image with ‘Salt and Pepper Noise’ is selected for detection.

Fig 6.4: Salt and Pepper Noise Image Uploaded

In the image below, an image with ‘Data Loss Noise’ is selected for detection.

Fig 6.5 Data Loss Noise Image Uploaded

GENERATING RESULT

By clicking on the “Detect” button, the result is generated and displayed on the screen.
If noise is detected, the name of the noise type is displayed along with its accuracy.

In the image below, a satellite image with stripe noise is uploaded and the result is displayed
showing “Stripe Noise Detected with Accuracy: 99%”.

Fig 6.6: Stripe Noise detected

In the image below, a satellite image with no noise is uploaded and the result is displayed
showing “No Noise Detected with Accuracy: 99%”.

Fig 6.7: No Noise detected

CHAPTER 07
TESTING AND DEBUGGING

7 TESTING AND DEBUGGING


Testing is an indispensable aspect of the software development process, serving as a
systematic and methodical approach to unveil errors and defects within a software product.
Its fundamental purpose revolves around the meticulous identification of any potential faults
or weaknesses, leaving no room for oversight or negligence. This meticulous process involves
a comprehensive evaluation of various software elements, including components,
subassemblies, larger assemblies, and the final product, with the primary objective of
verifying their functionality.

The ultimate aim of testing is to ensure that the software system aligns perfectly with its
predefined requirements and user expectations. This alignment is critical in preventing
catastrophic failures or any form of unacceptable behaviour in real-world scenarios. To cater
to the diverse needs of software quality assurance and validation, various types of testing are
employed, providing a multifaceted approach to uncovering issues.

In essence, testing acts as a crucial safeguard against potential software shortcomings and
deficiencies that could impact the user experience or even the overall integrity of the software
system. It helps developers identify and rectify problems, ensuring that the final product
meets the highest standards of quality and reliability.

7.1 TYPES OF TESTS

7.1.1 UNIT TESTING

Unit testing involves the design of test cases that validate that the internal program logic is
functioning properly and that program inputs produce valid outputs. All decision branches
and internal code flow should be validated. It is the testing of individual software units of the
application; it is done after the completion of an individual unit, before integration. This is
structural testing that relies on knowledge of the unit's construction and is invasive. Unit tests
perform basic tests at component level and test a specific business process, application,
and/or system configuration. Unit tests ensure that each unique path of a business process
performs accurately to the documented specifications and contains clearly defined inputs and
expected results.
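
As an illustration, a minimal pytest-style unit test for the Flask prediction route from Chapter 5 might look as follows; it assumes the Flask app object is importable from a module named app and that a small sample image sample.png is available:

import io
from app import app  # assumed module name for the Flask backend in Chapter 5

def test_index_rejects_missing_file():
    client = app.test_client()
    response = client.post("/", data={})              # POST with no file attached
    assert response.get_json()["result"] == "No image uploaded"

def test_index_returns_prediction():
    client = app.test_client()
    with open("sample.png", "rb") as f:               # assumed sample test image
        data = {"file": (io.BytesIO(f.read()), "sample.png")}
    response = client.post("/", data=data,
                           content_type="multipart/form-data")
    assert "Detected" in response.get_json()["result"]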

7.1.2 INTEGRATION TESTING

Integration testing is a crucial phase in software testing that assesses the interactions and
interoperability of various software components when integrated into a unified system. Its
primary objective is to ensure that these components, which may have been individually
tested and validated (in unit testing), function cohesively as a whole. Integration tests
examine how different parts of the software communicate, share data, and collaborate to
deliver intended functionality. This testing approach identifies issues such as data flow
problems, communication errors, and inconsistencies in the software's behaviour during
component integration. It helps uncover defects that might not be apparent in isolation but
can surface when components interact, potentially causing system failures or unexpected
behaviour. Integration testing can take several forms, including top-down, bottom-up, and
incremental approaches, each focusing on different aspects of component integration.
Ultimately, successful integration testing ensures that the software system operates
seamlessly and meets its functional requirements when all components are combined.

7.1.3 FUNCTIONAL TESTING

Functional testing is a vital phase in software testing that evaluates a software application's
functionality to ensure that it operates according to its specified requirements and design.
This testing approach focuses on verifying that the software's features and functions perform
as intended, meeting user expectations and business needs. During functional testing, testers
create test scenarios and inputs to assess different aspects of the application, such as user
interfaces, APIs, databases, and more. The goal is to validate that the software behaves
correctly in response to various inputs and conditions, identifying deviations from expected
behavior, including defects, inconsistencies, or missing features. Functional testing
encompasses various techniques, including smoke testing, sanity testing, regression testing,
and user acceptance testing, each addressing different aspects of functionality. Successful
functional testing ensures that the software functions reliably, delivers accurate results, and
aligns with its specified requirements, providing confidence in its overall quality and
reliability.

7.1.4 SYSTEM TESTING

System testing is a crucial phase in the software testing process that evaluates the behavior of
an entire software system as a cohesive unit. It assesses whether the fully integrated software,
consisting of various components and modules, meets its specified requirements and
functions as expected in a real-world environment. Unlike unit testing or integration testing,
which focus on individual components or their interactions, system testing examines the
complete system's performance, functionality, and compliance with design and user
requirements. It encompasses various testing types, such as functional, performance, security,
and usability testing, to ensure that the software operates reliably and without critical issues.

System testing aims to uncover defects, inconsistencies, and potential failures that may
emerge when different components interact, providing stakeholders with confidence in the
software's overall quality and readiness for deployment.

7.1.5 WHITE BOX TESTING

White box testing, also known as clear box testing or structural testing, is a software testing
method that examines the internal structure, code, and logic of an application. In white box
testing, the tester has knowledge of the application's source code, algorithms, and design.
This enables them to create test cases based on the software's internal workings, including
decision branches, loops, and data flows. The primary objective of white box testing is to
ensure that all code paths are tested for correctness, identifying logic errors, coding mistakes,
and vulnerabilities. It complements other testing methods like black box testing, which
focuses on external behavior. White box testing is valuable for improving code quality,
security, and overall software reliability.

7.1.6 BLACK BOX TESTING

Black box testing is testing the software without any knowledge of the inner workings,
structure or language of the module being tested. Black box tests, like most other kinds of
tests, must be written from a definitive source document, such as a specification or
requirements document. It is testing in which the software under test is treated as a black
box: you cannot “see” into it. The test provides inputs and responds to outputs without
considering how the software works.

7.1.6.1 TEST STRATEGY AND APPROACH


Field testing will be performed manually and functional tests will be written in detail.

7.1.6.2 TEST OBJECTIVES

• All entries must work properly.


• Pages must be activated from the identified link.
• The entry screen, messages and responses must not be delayed.

7.1.6.3 FEATURES TO BE TESTED

• Verify that the file entries are of the correct format

7.1.7 INTEGRATION TESTING

Software integration testing is the incremental integration testing of two or more integrated
software components on a single platform to produce failures caused by interface defects. The
task of the integration test is to check that components or software applications – e.g.,
components in a software system or, one step up, software applications at the company
level – interact without error.

Test Results: Most of the test cases mentioned above passed successfully. Few defects
encountered.

7.1.8 ACCEPTANCE TESTING

User Acceptance Testing is a critical phase of any project and requires significant
participation by the end user. It also ensures that the system meets the functional
requirements.

Test Results: Most of the test cases mentioned above passed successfully. Few defects
encountered.

7.2 TEST CASES


TC-01
  Description: Upload an image with no noise data.
  Test design: Select a file for upload and click “Detect”.
  Expected output: “No Noise Detected” with accuracy.
  Actual output: “No Noise Detected” with corresponding accuracy.
  Status: Pass

TC-02
  Description: Upload an image with stripe noise data.
  Test design: Select a file for upload and click “Detect”.
  Expected output: “Stripe Noise Detected” with accuracy.
  Actual output: “Stripe Noise Detected” with corresponding accuracy.
  Status: Pass

TC-03
  Description: Upload an image with salt & pepper noise data.
  Test design: Select a file for upload and click “Detect”.
  Expected output: “Salt and Pepper Noise Detected” with accuracy.
  Actual output: “Salt and Pepper Noise Detected” with corresponding accuracy.
  Status: Pass

TC-04
  Description: Upload an image with data loss noise data.
  Test design: Select a file for upload and click “Detect”.
  Expected output: “Data Loss Noise Detected” with accuracy.
  Actual output: “Data Loss Noise Detected” with corresponding accuracy.
  Status: Pass

TC-05
  Description: Upload an image with a combination of multiple noise types.
  Test design: Select a file for upload and click “Detect”.
  Expected output: The noise type with the highest accuracy is displayed with accuracy.
  Actual output: The noise type with the highest accuracy is displayed with corresponding accuracy.
  Status: Pass

TC-06
  Description: Verify response to an invalid file format upload.
  Test design: Upload a file with an invalid format (e.g., .txt) and click “Detect”.
  Expected output: Invalid file format.
  Actual output: No output.
  Status: Fail

TC-07
  Description: Verify response to a normal image upload (not a remote sensing satellite image).
  Test design: Upload a normal image (e.g., .jpg or .png) and click “Detect”.
  Expected output: Not a remote sensing satellite image.
  Actual output: Treats it as a satellite image and gives a predicted result.
  Status: Fail

TC-08
  Description: Verify response to a high-resolution image upload.
  Test design: Upload a high-resolution image (>4000x4000 pixels) and click “Detect”.
  Expected output: Predicts whether the image contains noisy data or not, with accuracy.
  Actual output: Predicts whether the image contains noisy data or not, with corresponding accuracy.
  Status: Pass

TC-09
  Description: Verify response to an excessively large image file upload.
  Test design: Upload an image file exceeding the maximum size limit (>50 MB) and click “Detect”.
  Expected output: Predicts whether the image contains noisy data or not, with accuracy.
  Actual output: Predicts whether the image contains noisy data or not, with corresponding accuracy.
  Status: Pass

TC-10
  Description: Verify response to a grayscale (black-and-white) image upload (only applicable for salt & pepper noise data).
  Test design: Upload a grayscale image and click “Detect”.
  Expected output: Predicts whether the image contains noisy data or not, with accuracy.
  Actual output: Predicts whether the image contains noisy data or not, with corresponding accuracy.
  Status: Pass
Table 7.1: Test cases for system

CHAPTER 08
CONCLUSION
8 CONCLUSION
The project successfully automated the quality certification process of remote sensing
satellite images through the implementation of a Convolutional Neural Network (CNN)
model utilizing the VGG16 architecture. This endeavour was achieved by creating a Flask-
based web application that enables users to upload remote sensing satellite images and
receive predictions regarding the presence of various types of noise within the images. Key
aspects of the project included the development of the CNN model, which underwent training
with data augmentation techniques to bolster its robustness and performance. Additionally,
the integration of the trained CNN model into the Flask backend facilitated seamless image
processing and prediction handling. A user-friendly web interface was crafted to facilitate
image uploads and display prediction outcomes. Rigorous testing protocols were enforced
throughout the project lifecycle to ensure the system's reliability and accuracy across diverse
scenarios, encompassing valid and invalid file uploads, as well as different types of noise
detection. The project showcased the potential benefits of automating quality certification
processes for remote sensing satellite images, including enhanced efficiency and consistency
in image analysis. Future endeavours may involve expanding the model's capabilities to
detect additional types of noise and integrating advanced image processing techniques to
further refine prediction accuracy. Overall, the project signifies a significant stride in
leveraging machine learning technologies to streamline quality assessment tasks in remote
sensing applications, thus contributing to advancements in satellite imagery analysis and
interpretation.

CHAPTER 09
FUTURE ENHANCEMENTS

9 FUTURE ENHANCEMENTS
In considering future enhancements for the project, several avenues can be explored to
augment its capabilities and address emerging needs in the field of automated quality
certification for remote sensing satellite images. Firstly, the implementation of batch
processing functionality would greatly enhance the scalability and efficiency of the system.
Allowing users to upload and process multiple images simultaneously would facilitate the
handling of large datasets, thereby streamlining analysis workflows and accommodating users
with diverse requirements.

Secondly, continuous refinement of the CNN model is essential for improving noise detection
accuracy. This could involve fine-tuning the pre-trained VGG16 model or experimenting with
alternative CNN architectures. Adjusting hyperparameters, exploring different optimization
algorithms, and incorporating transfer learning from related domains are strategies that could
be pursued to enhance the model's performance.

Expanding the scope of noise detection to encompass a broader range of noise types
commonly encountered in remote sensing satellite images is another avenue for enhancement.
By incorporating the ability to detect rare or less prevalent noise artifacts, the system can
provide more comprehensive and nuanced analysis, thereby enhancing its utility in real-world
scenarios.

Integrating user authentication and authorization mechanisms would contribute to the
system's security and user management capabilities. This would enable personalized user
settings, secure access control, and compliance with data privacy regulations, thereby
enhancing the overall user experience and ensuring data integrity.

Interactive visualization tools can be integrated to facilitate intuitive exploration of prediction
results. Heatmaps highlighting regions of interest or overlays indicating detected noise types
within the image would enable users to gain insights more effectively and make informed
decisions based on the analysis results.

Implementing a feedback mechanism where users can provide annotations or corrections to
the model's predictions would foster continuous improvement and validation of the model.
This iterative process of refinement based on user feedback would enhance the model's
accuracy and reliability over time.

Deploying the web application and CNN model on cloud infrastructure platforms would
provide scalability, reliability, and accessibility to users worldwide. Leveraging cloud-based
resources would enable seamless deployment, maintenance, and scalability, thereby ensuring
optimal performance and availability of the system.

Exploring methods for real-time processing of satellite images as they are captured by remote
sensing satellites would enable timely analysis and response to evolving situations on the
ground. Integrating the system with satellite data feeds and leveraging edge computing
technologies would facilitate real-time analysis, thereby enhancing the system's utility for
applications requiring rapid decision-making.

Extending the system to support multi-modal analysis by incorporating additional data
sources such as infrared or hyperspectral imagery would provide a more comprehensive
understanding of the environment. This multi-modal approach would enhance the accuracy
and richness of the analysis, enabling users to derive deeper insights from the data.

Finally, introducing collaborative features that enable users to share datasets, collaborate on
analysis tasks, and contribute to a collective knowledge base would foster collaboration
among researchers, analysts, and domain experts in the field of remote sensing. By
facilitating knowledge sharing and collaboration, the system can leverage collective expertise
to tackle complex challenges and drive innovation in the field.

CHAPTER 10
REFERENCES

10 REFERENCES
• Algazi, V.R.; Ford, G.E. Radiometric equalization of non-periodic striping in satellite
data. Comput. Graph. Image Process 1981, 16, 287–295.
• Ahern, F.J.; Brown, R.J.; Cihlar, J.; Gauthier, R.; Murphy, J.; Neville, R.A.; Teillet,
P.M. Review article: Radiometric correction of visible and infrared remote sensing
data at the Canada centre for remote sensing. Int. J. Remote Sens. 1987, 8, 1349–
1376.
• Bernstein, R.; Lotspiech, J.B. LANDSAT-4 Radiometric and Geometric Correction
and Image Enhancement Results. 1984; Volume 1984, pp. 108–115. Available online:
https://ntrs.nasa.gov/citations/19840022301 (accessed on 3 September 2021).
• Chen, J.S.; Shao, Y.; Zhu, B.Q. Destriping CMODIS Based on FIR Method. J.
Remote. Sens. 2004, 8, 233–237.

• Xiu, J.H.; Zhai, L.P.; Liu, H. Method of removing striping noise in CCD image.
Dianzi Qijian/J. Electron Devices 2005, 28, 719–721.
• Wang, R.; Zeng, C.; Jiang, W.; Li, P. Terra MODIS band 5th stripe noise detection and
correction using MAP-based algorithm. Hongwai yu Jiguang Gongcheng/Infrared
Laser Eng. 2013, 42, 273–277. Available online:
https://ieeexplore.ieee.org/abstract/document/5964181/ (accessed on 3 September 2021).

• Qu, Y.; Zhang, X.; Wang, Q.; Li, C. Extremely sparse stripe noise removal from
nonremote-sensing images by straight line detection and neighborhood grayscale
weighted replacement. IEEE Access 2018, 6, 76924–76934.

• Sun, Y.-J.; Huang, T.-Z.; Ma, T.-H.; Chen, Y. Remote Sensing Image Stripe Detecting
and Destriping Using the Joint Sparsity Constraint with Iterative Support Detection.
Remote Sens. 2019, 11, 608.

• Wang, Q.; Ma, J.; Yu, S.; Tan, L. Noise detection and image denoising based on
fractional calculus. Chaos Solitons Fractals 2020, 131, 109463.

• Hao, Z. Deep learning review and discussion of its future development. MATEC Web
Conf. 2019, 277, 02035.
